TED Talk given by Mary Lou Jepsen: Could future devices read images from our brains?
The part most exciting for people with aphasia starts at 5:33. The excerpt below is taken from the transcript at TED. You have to watch the video to appreciate why this is so exciting, because the transcript doesn't show the results of the computer's interpretation of neural activity.
Next let me share with you one other experiment, this from Jack Gallant’s lab at Cal Berkeley. They’ve been able to decode brainwaves into recognizable visual fields. So let me set this up for you. In this experiment, individuals were shown hundreds of hours of YouTube videos while scans were made of their brains to create a large library of their brain reacting to video sequences. Then a new movie was shown with new images, new people, new animals in it, and a new scan set was recorded. The computer, using brain scan data alone, decoded that new brain scan to show what it thought the individual was actually seeing. On the right-hand side, you see the computer’s guess, and on the left-hand side, the presented clip. This is the jaw-dropper. We are so close to being able to do this. We just need to up the resolution. And now remember that when you see an image versus when you imagine that same image, it creates the same brain scan.
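The talk doesn't describe the decoding algorithm, and the Gallant lab's actual pipeline is far more sophisticated (fitted encoding models and Bayesian reconstruction over fMRI data). Purely as a toy illustration of the library idea described above — record responses to many known clips, then identify a new response by finding the closest stored one — here is a minimal sketch. All names (`library`, `decode`, the 50 simulated "voxels") and the nearest-neighbor matching are assumptions for illustration, not the lab's method.

```python
# Toy sketch of library-based decoding, NOT the Gallant lab's actual method.
# Idea: store a "library" of brain responses to known video clips, then
# decode a new, noisy response by finding the most similar stored response.

import math
import random

def cosine_similarity(a, b):
    """Cosine of the angle between two response vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def decode(library, new_response):
    """Return the clip label whose stored response best matches new_response."""
    return max(library, key=lambda clip: cosine_similarity(library[clip], new_response))

# Build a synthetic library: each of 100 known clips gets a characteristic
# (here random) response pattern across 50 simulated voxels.
random.seed(0)
library = {f"clip_{i}": [random.gauss(0, 1) for _ in range(50)]
           for i in range(100)}

# Simulate watching clip_42 again, with measurement noise added.
true_clip = "clip_42"
noisy = [v + random.gauss(0, 0.3) for v in library[true_clip]]

print(decode(library, noisy))  # expected to recover "clip_42" at this noise level
```

With low measurement noise, matching recovers the correct clip; higher noise (or a clip not in the library) is exactly where the simple lookup breaks down, which is why the real work required models that generalize to entirely new images.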
This is incredible. Could we train a machine to interpret thoughts into images, and then to interpret those images into words? Could we make that machine small and inexpensive? That has been the trajectory of all technology. If this technology, or one like it, becomes widely available, people with aphasia may soon be able to communicate without speech therapy. Technology is poised to transform the field of speech pathology, and that is exciting because it means that more people will be able to communicate well.