What’s with Neuralink and the Monkey?
Ok, so it’s been nearly a month since Neuralink unveiled a demo showing a nine-year-old monkey, Pager, playing a game of Pong using nothing but his mind. If you haven’t checked it out yet, here’s a link to the video — do give it a watch. In this article, I’ll briefly explain what is going on in the video and what it could mean for the future.
What’s really going on in the video?
At the beginning of the video, Pager learns to play Pong using a joystick, with a sip of smoothie delivered to him each time he makes a correct move. This is loosely similar to reinforcement learning, where an agent learns on the basis of a reward signal. Soon the monkey is playing comfortably, moving the joystick according to how the game is progressing and trying to win each round so that he can drink his tasty smoothie.
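The reward-driven learning loop described above can be sketched as a toy reinforcement-learning problem. Below is a minimal epsilon-greedy bandit in Python: an agent gradually learns which of two actions is rewarded more often, much like Pager learning which joystick moves earn a smoothie. The reward probabilities, parameters, and function names are all invented for illustration and have nothing to do with Neuralink’s actual training setup.

```python
import random

def train_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy value estimation over a set of actions.

    Each action pays a reward of 1 with its own probability; the agent
    keeps a running estimate of each action's value and mostly picks
    the best-looking one, exploring at random with probability epsilon.
    """
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)   # estimated reward per action
    counts = [0] * len(reward_probs)     # times each action was tried
    for _ in range(steps):
        # explore occasionally, otherwise exploit the current best estimate
        if rng.random() < epsilon:
            action = rng.randrange(len(reward_probs))
        else:
            action = max(range(len(reward_probs)), key=lambda i: values[i])
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # incremental mean update of the value estimate
        values[action] += (reward - values[action]) / counts[action]
    return values

# action 1 is rewarded far more often, so its estimate ends up higher
values = train_bandit([0.2, 0.8])
```

The key idea mirrored here is that no one tells the agent the rules explicitly — behaviour is shaped purely by which actions lead to reward.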
The monkey has a Neuralink chip implanted in his skull, and this chip records the signals being sent through his neurons.
How does the Neuralink implant record the data coming from the neurons?
In the initial Neuralink demo, Elon Musk had mentioned a technique called two-photon microscopy. The idea is that the neurons we want to monitor are engineered to produce fluorescent indicators that change their brightness whenever there is neural activity. A two-photon microscope captures these changes in brightness, and the resulting data is transmitted to a computer where the brain signals can be monitored.
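As a rough illustration of that idea, activity can be read out of a fluorescence trace by thresholding the relative change in brightness against a resting baseline (often written ΔF/F₀ in the imaging literature). The sketch below runs on synthetic data; the function name, threshold value, and trace are all invented and this is not Neuralink’s actual pipeline.

```python
def detect_events(trace, baseline_len=20, threshold=0.5):
    """Flag time steps where brightness rises well above baseline.

    trace: list of brightness samples over time
    baseline_len: how many initial samples define resting brightness F0
    threshold: minimum fractional change (F - F0) / F0 to count as activity
    """
    f0 = sum(trace[:baseline_len]) / baseline_len  # resting brightness
    events = []
    for t, f in enumerate(trace):
        dff = (f - f0) / f0          # fractional brightness change
        if dff > threshold:
            events.append(t)
    return events

# a flat trace with a brightness transient around samples 30-33
trace = [1.0] * 30 + [1.8, 2.0, 1.9, 1.6] + [1.0] * 30
print(detect_events(trace))  # → [30, 31, 32, 33]
```

Real calcium-imaging pipelines do far more (motion correction, denoising, spike inference), but the core signal is exactly this: brightness up means neurons firing.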
What do we do with this collected data?
The neural-activity data collected by the implant is sent to a computer, which tries to fit a decoder for interpreting the received brain signals. The decoder essentially models a mathematical function that takes the brain signals as input and outputs the corresponding joystick movement made by the monkey’s hand. Once this function has been learned, it can predict the monkey’s hand movements the moment the relevant neurons fire. The monkey is then able to play the game using nothing but the neural activity in his brain, which is recorded and decoded in real time to predict his hand movements. Neuralink’s long-term goal is to help people affected by paralysis restore motor activity using their decoded brain activity alone.
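A minimal version of such a decoder can be sketched as a linear least-squares fit from binned spike counts to joystick velocity. Everything below is synthetic and purely illustrative — real decoders are more sophisticated (Kalman filters and neural networks are common in the BCI literature), and all the names, sizes, and parameters here are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recording session": spike counts from 8 neurons over 500
# time bins, plus the 2-D joystick velocity the hand produced in each bin.
n_samples, n_neurons = 500, 8
true_w = rng.normal(size=(n_neurons, 2))  # hidden neural-to-velocity mapping
spikes = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = spikes @ true_w + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit decoder weights from paired (spike, joystick) data: this is the
# "calibration" phase, while the monkey still uses the joystick.
w_hat, *_ = np.linalg.lstsq(spikes, velocity, rcond=None)

# Afterwards, hand velocity can be predicted from neural activity alone.
pred = spikes @ w_hat
```

The crucial step is the calibration phase: because the joystick movements and the neural signals are recorded simultaneously, there is ground truth to fit against, and the joystick can later be unplugged entirely.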
Is this new technology?
Research on brain-computer interfaces has been going on since the 1970s, when UCLA professor Jacques Vidal demonstrated control of a graphical object moving through a maze on a computer screen. The field has developed rapidly since then, and it holds huge potential for the future.