Machines, especially computers, have become an essential part of our daily lives and are used in nearly every aspect of our society. Hence, any development that improves or changes the way we work with computers can affect the productivity and well-being of society at large.
But what is left to be improved? Although new products and upgrades come with ever better, sleeker interfaces, the fundamental paradigm they all adhere to has remained unchanged for a long time. We press buttons. We move mice. We touch screens.
All in all, whenever we interact with a computer, a phone, or an ATM, we do the same thing: we translate what we want to do into a sequence of small steps (pressing buttons, touching screens) that the computer will understand. In essence, we spend a long time “explaining” to the computer what it is that we want to do, or what we want it to do, in order for it to help us. The more complex our goals, the more time we need to spend explaining them to the computer.
We thus see a clear communication bottleneck that limits our ability to truly cooperate with the machines we have created, and to benefit from their full potential.
There is no such bottleneck in human-to-human interaction. For example, we need only lift as little as half an eyebrow for our conversation partner to completely rethink their words. Given the context of the conversation, and knowing more or less what we know, our conversation partner correctly interprets that often unconscious gesture as skepticism, confusion, or surprise, and adapts their argument to match our response.
Now imagine a computer that understands us and our context so well that it, too, can infer what we want to know or intend to do, and then supports us without our even asking. Imagine a computer that is an independent and autonomous, yet empathetic and cooperative team player helping us reach our goals.
We recently took a step in that direction.
We have shown that a machine can learn directly from our brain. Using a passive brain-computer interface (pBCI) to interpret spontaneous, involuntary brain activity, the computer inferred the intentions of the human participants in our experiment. Even though the computer, based on this information, successfully reached the intended goals, the human participants were not aware of communicating any information to the computer.
In our experiment, a cursor moved over the nodes of a grid and had to reach a certain target position. The grid had a limited number of nodes (4×4 or 6×6), and the cursor could only jump to nodes adjacent to its current position. At the beginning of the experiment, the cursor moved randomly. Participants observed and mentally judged each movement. Thus, rather than being given an explicit, artificial task, the participants were doing what humans naturally and automatically do: interpreting whatever it is they see.

Each observed cursor movement induced a specific response in the observer’s brain, depending on their own internal interpretation of what they had just seen. We tracked these responses using an electroencephalogram (EEG) and taught the computer to discriminate between two categories of brain activity: responses that followed “good” movements (movements bringing the cursor closer to the target), and responses that followed “bad” movements (those doing the opposite). Having learned the distinct patterns of these responses, the computer could identify whether an observed movement had been good or bad based only on the observer’s brain activity. The computer then used this simple but reliable information to ultimately identify the intended target position: it learned which directional movements led to “good” responses and which to “bad” ones, and steered the cursor in the direction evoking the “good” ones, i.e., toward the intended target.
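To make the logic of this loop concrete, here is a minimal sketch in Python. It is a simulation, not the code from our study: the EEG classifier is replaced by a stand-in oracle with an assumed single-trial accuracy, and all names and numbers (GRID, TARGET, ACCURACY, the exploration schedule) are hypothetical illustrations.

```python
import random

# Minimal simulation of the implicit cursor-control loop (hypothetical
# names and numbers throughout). In the real experiment, "good"/"bad"
# came from EEG classification; here a noisy oracle stands in for it.

GRID = 6                 # 6x6 grid, as in one condition of the experiment
TARGET = (4, 1)          # the intended target position (example value)
ACCURACY = 0.85          # assumed single-trial classification accuracy

def dist(a, b):
    """City-block distance between two grid nodes."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def classify_response(old, new):
    """Stand-in for the EEG classifier: did the observer's brain
    respond as if this movement were 'good'?"""
    truly_good = dist(new, TARGET) < dist(old, TARGET)
    return truly_good if random.random() < ACCURACY else not truly_good

def neighbors(pos):
    """Nodes adjacent to the current position, within the grid."""
    x, y = pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in candidates if 0 <= i < GRID and 0 <= j < GRID]

cursor = (0, 0)
scores = {}              # accumulated evidence per node: +1 good, -1 bad
for step in range(300):
    if step < 20 or random.random() < 0.2:   # initial/occasional random moves
        nxt = random.choice(neighbors(cursor))
    else:                                    # otherwise follow the evidence
        nxt = max(neighbors(cursor), key=lambda n: scores.get(n, 0))
    good = classify_response(cursor, nxt)    # the observer's brain "votes"
    scores[nxt] = scores.get(nxt, 0) + (1 if good else -1)
    cursor = nxt
    if cursor == TARGET:
        print(f"Reached the intended target after {step + 1} movements.")
        break
```

The point of the sketch is only the division of labor: the human judges each movement, and the computer accumulates those judgments and acts on them.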
Controlling a cursor on a grid is not yet the future we alluded to earlier. It does clearly show, however, that a computer can learn directly from our brain without us needing to explicitly communicate anything. In fact, the participants were not even aware of anything of the sort taking place.
In a broader sense, the experiment shows how computers can directly tap into human intelligence. The human observer interpreted the situation (in this case, a cursor movement) using their own subjective evaluation criteria and strategies, and the computer collected the result. And since the computer was the one presenting the situation in the first place (moving the cursor), it was not us pushing the computer’s buttons: the computer was pushing ours.
This is a novel type of human-computer interaction. It offers a new division of labor between the human and the computer, one that is more attuned to their respective strengths and weaknesses. We are good at higher-level skills such as reasoning, interpreting, evaluating, and judging. The computer can use the outputs of these processes to inform its own actions, supporting our goals with its precise, automated actions and calculations.
We call this neuroadaptive technology. Neuroadaptive technology uses real-time measures of neurophysiological activity within a closed loop to inform intelligent adaptation. Measures of, for instance, electrocortical or neurovascular brain activity are quantified to provide a dynamic representation of the user’s state with respect to implicit psychological activity related to cognition, emotion, and motivation. Based on the information thus collected, the system automatically adapts to the inferred needs, intentions, or preferences of its user.
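Schematically, the closed loop looks like this. The sketch below is purely illustrative: every function name and threshold is a hypothetical placeholder, not an interface from our study.

```python
import random

# Schematic of the neuroadaptive closed loop: measure, interpret, adapt,
# repeat. All names and numbers are illustrative placeholders.

def acquire_signal():
    """Stand-in for real-time neurophysiological measurement (one epoch)."""
    return [random.gauss(0, 1) for _ in range(64)]

def estimate_user_state(epoch):
    """Stand-in for decoding: map the measurement to a user-state estimate."""
    load = sum(abs(x) for x in epoch) / len(epoch)
    return "high workload" if load > 0.9 else "nominal"

def adapt(state):
    """Stand-in for adaptation: the system adjusts its own behavior."""
    if state == "high workload":
        print("Adapting: reducing information density.")

for _ in range(10):   # the loop closes: measurement informs adaptation
    adapt(estimate_user_state(acquire_signal()))
```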
The concept of neuroadaptive technology extends the idea of automated adaptation discussed in an earlier blog post.
We do not wish to overstate the significance of these initial results. The example of cursor control is quite limited, and the concept has yet to be translated into more complex environments. But we do consider this to be an inspiring demonstration of a new type of human-machine interaction, where human and machine intelligence cybernetically converge.
If we dare to envision even further, these concepts might provide a mechanism that allows us to transfer at least parts of our intelligence and our thinking directly to a machine, generating artificial intelligence in a new, exciting fashion.
The idea of neuroadaptive technology, along with its potential and its limits, will be discussed at a conference taking place in July 2017 in Berlin. If you are interested, you can find details on this website.
The article describing our research in detail has been published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS). It is titled “Towards Neuroadaptive Technology” in short, or, in full, “Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity”. It can be read free of charge.
Thorsten O. Zander
Laurens R. Krol