January 19, 2015 | By:  Luke De

Dream Reader v.1.0

Edited by Allie Logerfo

Tin foil hat by Kley Gilbuena


Time to grab the aluminum foil and start thinking about stylish ways of wrapping up your head, people. . . the future is here. In my mind (pun intended), nothing says the future is here like direct brain-to-brain communication. (Footnote: This is cool)

Can you imagine literally getting into someone’s head? You could activate your brain interface - “Ok Google Mind Warp” - and then in seconds, using only your mind, plan an elaborate practical joke on Dave. . . Oh silly Dave. . . (Footnote: Video of Brain Decoding)

The challenge in developing such technology isn't sending the thought; it's reading the brain in the first place. To do that, you'd need to be able to scan a person's thoughts and decipher them. Well, Horikawa, a scientist from Japan, and friends, also scientists from Japan, showed that it can be done. I've heard rumors of dream scanning being done at top tech labs in Japan, but these guys laid out their data for public review in an article titled "Neural Decoding of Visual Imagery During Sleep."


While the article is a bit old, published on May 3, 2013, I thought it was a fascinating read and worthy of a little more publicity.

The Experiment

In order to understand what these scientists did, you have to accept, at least to a degree, the idea that our thoughts are patterns of brain activation. As a gross oversimplification, the feeling of "hunger" and the image of yourself executing a triple toe loop to win the gold medal at the Olympics involve different parts of your brain being activated at different times.

The first part of the experiment involved putting a subject inside an fMRI machine and letting them nod off. An fMRI machine monitors the consumption of oxygen: active brain cells use up more oxygen, so when you look at a brain with one of these machines you can see which parts are heavily active. So to conduct this experiment, you let a person nod off, then rudely wake them up and ask, "What were you just dreaming about?" You verify that they actually nodded off by measuring their brain's electrical activity with an electroencephalogram (EEG). Imagine all the neurons in the brain emitting a small signal when they fire; with an EEG you can tell when a person is sleeping (not REM, but Stage 1 and 2 sleep) by listening to how their brain hums. (Footnote: Experimental Protocol)
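For a flavor of what "listening to how the brain hums" means, here is a minimal sketch of the usual trick: compare how much power the EEG carries in different frequency bands. The sampling rate, the bands, and the simple "theta beats alpha" rule below are illustrative assumptions of mine, not the scoring criteria the study actually used.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Average power of an EEG trace in a frequency band (Welch periodogram)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def looks_like_light_sleep(eeg, fs=256):
    """Crude sleep-onset check: alpha (8-12 Hz) fades, theta (4-7 Hz) takes over."""
    alpha = band_power(eeg, fs, 8, 12)
    theta = band_power(eeg, fs, 4, 7)
    return theta > alpha  # illustrative rule, not the study's actual scoring

# 30 seconds of fake EEG dominated by a 6 Hz theta rhythm plus noise
fs = 256
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
print(looks_like_light_sleep(eeg, fs))  # True for this theta-heavy trace
```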

The next part of the experiment is cool because the scientists were elegant in the way that they thought about. . . well, thoughts. Rather than trying to directly match up the active parts of the brain from the fMRI data with any thought whatsoever, the scientists focused on visual thoughts: the images people saw in their mind's eye. They then grouped those visual thoughts into broader categories, because if they had tried to be too specific about what they were looking for in the brain, they wouldn't have found a pattern. For example, from the report "Um, what I saw now was like, a place with a street and some houses around it," they first pulled out the visuals, street and houses, and then categorized them. Street is broad enough on its own, but houses would be categorized as buildings.
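The paper rolled report words up into broader "base" categories using a lexical database. The toy sketch below fakes that step with a small hand-written mapping, so every word and category in it is hypothetical; only the idea of collapsing specific words into broader labels follows the study.

```python
# Hand-made, hypothetical mapping from report words to broader categories;
# the study itself grouped words using a lexical database rather than a dict.
CATEGORY_OF = {
    "street": "street",
    "house": "building",
    "houses": "building",
    "office": "building",
    "man": "male",
    "guy": "male",
    "car": "car",
    "truck": "car",
}

def label_report(report: str) -> set[str]:
    """Pull broad category labels out of a free-form dream report."""
    words = report.lower().replace(",", " ").split()
    return {CATEGORY_OF[w] for w in words if w in CATEGORY_OF}

report = "Um, what I saw now was like, a place with a street and some houses around it"
print(label_report(report))  # {'street', 'building'}
```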

Training the Algorithm

The group took the data and began to train computers they called "decoders" to associate patterns of brain activation (fMRI data) with what the dreamers had described. They used a machine learning algorithm, or in equally complex terms, intelligent programmatic instructions that improve with examples. The group worked from the theory that seeing an object and dreaming about an object involve similar brain activity. So, to further train the computers, the group showed images of objects to the subjects and recorded the resulting patterns of brain activation. The algorithm found the similarities between the dream data and the image-generated data. The decoders were then trained to distinguish the brain activation produced by two different categories, the paper uses the example of "male" versus "car," in a forced-choice test.
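To make the training step concrete, here is a minimal sketch of that kind of pairwise decoder. Everything in it is made up for illustration: the voxel counts, the random numbers standing in for fMRI snapshots, and the use of scikit-learn's linear SVM as the classifier are my assumptions, not the paper's actual pipeline. The point it tries to show is the design choice above: the decoder is trained only on waking, image-viewing data, then asked to judge scans recorded just before the sleeper was woken.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy sketch: each row is one fMRI "snapshot" (voxel activations), each label
# is the visual category the subject was shown while awake.
rng = np.random.default_rng(0)
n_voxels = 500
waking_male = rng.normal(0.3, 1.0, size=(40, n_voxels))   # viewing "male" images
waking_car = rng.normal(-0.3, 1.0, size=(40, n_voxels))   # viewing "car" images

X_train = np.vstack([waking_male, waking_car])
y_train = np.array(["male"] * 40 + ["car"] * 40)

decoder = LinearSVC()  # linear decoder: one learned weight per voxel
decoder.fit(X_train, y_train)

# Forced-choice test on scans taken just before the subject was woken;
# here they are fake "male" dream patterns, so "male" should win.
dream_scans = rng.normal(0.3, 1.0, size=(10, n_voxels))
print(decoder.predict(dream_scans))
```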

The scientists also tried to determine which regions of the brain would give the most accurate object information and, as they suspected, the parts of the brain that do the most complicated image processing were the most accurate. Your brain processes images at different levels. A face, for example, could be described crudely using dark and light patches, with more accuracy using lines and shapes, and with even higher accuracy using facial features like eyes, a nose, and so on. Though a face isn't the best example, this would be like describing someone on three different levels. Level 1: Dave looks like three dark splotches on a pasty, typical no-sun-New-Jersey-winter skin tone. Level 2: Dave has a squarish face, large ovals near the top of his face, and a thin line near the bottom. Level 3: Dave has a square jaw, Persian eyes, and thin lips.
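To give a feel for what "which region is most accurate" means in practice, the same sort of decoder can be scored by cross-validation on voxels drawn from different regions of interest. The region names, voxel counts, and synthetic data below are all hypothetical; only the shape of the comparison (low-level versus high-level visual areas) follows the study.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
labels = np.array(["male"] * 40 + ["car"] * 40)

def fake_roi_data(separation, n_voxels=300):
    """Synthetic voxel patterns: larger `separation` = categories easier to tell apart."""
    male = rng.normal(separation, 1.0, size=(40, n_voxels))
    car = rng.normal(-separation, 1.0, size=(40, n_voxels))
    return np.vstack([male, car])

rois = {
    "early visual cortex (patches and lines)": fake_roi_data(0.02),
    "higher visual cortex (whole objects)": fake_roi_data(0.15),
}

for name, X in rois.items():
    accuracy = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
    print(f"{name}: {accuracy:.0%} correct")
```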

In the end, the scientists were able to train the decoders to analyze the fMRI data from a high-level image-processing brain region and, yes, predict what the subjects were seeing in their dreams.

Discussion


I have left an insane amount of information out about what this group did, but the genius of the work wasn't really in the brain scan (fMRI); it was in the way they trained their computer/decoder to look at the information generated by that fMRI. It shows how much clever analysis can advance scientific studies, especially those involving the complex and confusing structure that sits in our heads.

Now, there are a lot of limits to this study. The subjects weren't in REM sleep; they had just dozed off. The number of people in the study was low. But sometimes scientists just need to put some information out there, blow some minds, and show that the "impossible" is within our grasp.

This study is ground-breaking, or perhaps, skull-cracking. Upon reflection, there are three things that I found particularly interesting. First, that thinking or dreaming of an image is very similar to actually seeing that image; dreaming is particularly hard to study because people are generally asleep when they do it, which has made it something of a black-box problem and generated a great deal of interest. Second, that there is enough information in an fMRI scan to predict some of what a brain is thinking. Finally, that the method used to train the decoders to pick apart the fMRI data has so many potential applications.


But rather than getting lost in the science, the legal implications, the ethical ramifications, or the privacy debate that this is bound to dredge up, let's just realize that somebody had a dream. . . and these scientists read it.


Citations



Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science, 340, 639–642. doi:10.1126/science.1234330


Kioustelidis, J. (2011). Reading minds. New Scientist, 210, 32. doi:10.1016/S0262-4079(11)61504-2


Yates, D. (2011). Sleep: Visualizing dreams. Nature Reviews Neuroscience. doi:10.1038/nrn3149



Footnotes



The Aluminum Foil Hat Image

A big thank you to Kley Gilbuena for modeling this anti-dream scanning device. He’s my college roommate and a creative, awesome guy.


This Is Cool

Just so you know, if you don’t admit this is cool, you are lying to yourself. Stop lying to yourself.


Video of Brain Decoding

This is a video of brain decoding. By hooking up a brain to an EEG machine, scientists generated a movie playing what the brain was thinking. (Link)


Experimental protocol

These poor people had to fall asleep while an MRI machine was going off. This video shows the process. It’s crazy!!! (Link)

