Advanced neurotechnology makes the mind the new frontier
The year is 2033. We’re in the living room of a typical family in Buenos Aires.
We see a girl sitting on the couch, doing her engineering homework. Measures of her comprehension are being computed from her brain activity and her eye movements, adapting the information that is presented to accelerate her learning.
Meanwhile, her sister and her dad are enjoying an interactive virtual reality experience together with friends from around the world. This is sort of a modern take on the classic choose-your-own-adventure concept, but here an AI-generated storyline is continually shaped based on the emotional state of the group.
Across the room, her mom, who is a sociologist, is reviewing an interactive report showing real-time updates on her research study. This is a collaboration with researchers worldwide aimed at better understanding relationships between everyday social interactions and brain activity and health in over 100,000 participants from around the globe.
In the background we can see a news update from the first human expeditions to Mars, where personalized brain health monitoring systems are being used to help assess and predict changes in astronauts’ cognitive health, helping to ensure the safety of long-duration deep space exploration.
Now, this narrative may sound like science fiction. However, these are all realistic applications of scientific research and development that is already happening today, with the potential to positively transform many aspects of our daily lives. The key technologies that will someday enable such a future are right now being increasingly integrated into our daily lives, into the objects that we interact with and the environments we live in, and ultimately into us.
Today, the mind is the new frontier. I believe that we are at a point in time where advances in neurotechnology, computing, and machine learning are converging on a singular point in history, beyond which lies a new era of discovery and exploration of who we are and what we can become.
An early love for exploration and discovery was instilled in me as a young child, growing up in over a dozen countries worldwide and exposed to many different cultures and languages, but my fascination with computing and technology began when I was just 9 years old.
I was living in Brazil and my father acquired a used 386 Toshiba, on which was installed the QBasic programming software. I taught myself to code. I fell in love with this new medium where all I needed was this computer and my mind to enter into this whole new world of discovery and creative possibility.
Fast forward about a decade and a few countries later, and I found myself at U.C. Berkeley, completing undergrad degrees in computer science and cognitive science. It was my interest in the intersection of these fields and neuroscience that brought me to the field of brain-computer interfacing, or BCI, developing computer algorithms to translate patterns of neural activity associated with specific cognitive states into meaningful instructions.
Around that time, as I was finishing up at Berkeley, I was fortunate to have the opportunity to work at Palo Alto Research Center. This is the legendary birthplace of so many key computing technologies that we use every day: the personal computer, the graphical user interface, Ethernet. And, importantly, ubiquitous computing, the foundational idea that underlies mobile computing and the Internet of Things.
The father of ubiquitous computing was the late Mark Weiser, who led the computer science lab there in the late ‘80s. Weiser articulated a powerful role for technology when he famously wrote in 1991, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”
He also said that the more you can do by intuition, the smarter you are; the computer should extend the unconscious, and technology should create calm.
Now, I resonated deeply with this view of technology’s ultimate purpose, especially as it applied to BCIs: to act as an invisible extension of ourselves, woven seamlessly into the fabric of our lives, calmly empowering us to be more capable and to connect and interact with the world and with each other in unprecedented new ways.
I envisioned a future in which advances in neural interface technology combined with distributed computing could enable anyone who wants to, to allow the devices and applications embedded in their environment to act as agents of their conscious and subconscious minds, a sort of cognitive halo extending our minds beyond the limits of our bodies.
My thinking on the topic was further influenced a few years later, as I was working on my PhD at U.C. San Diego’s Swartz Center for Computational Neuroscience. There, I was working with a group of fellow researchers on open source neuroscience software that was being used by thousands of researchers and labs around the world.
It became clear that a ubiquitous neurotech platform, like the one that I envisioned, could also act as an accelerator, enabling researchers and developers to realize their own visions for neurotechnology by easily building on state-of-the-art, validated pipelines for neurophysiological signal processing and BCI, without needing to reinvent the wheel or build their own solutions from the ground up.
It was from these visions that Intheon was founded, with the goal of empowering anyone, anytime, anywhere to access a lab’s worth of brain and body signal analysis and state decoding in the cloud through a simple interface: with just a few lines of code, one can integrate various measures of one’s brain state into any application or device. We call this platform NeuroScale.
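As a rough illustration of that “few lines of code” idea, here is a minimal sketch. The `NeuroScaleClient` class and `get_state()` method are hypothetical placeholder names, not the actual NeuroScale API, and the cloud service is simulated locally so the example runs without any network connection or credentials.

```python
# Hypothetical sketch only: NeuroScaleClient and get_state() are
# illustrative stand-ins for a cloud client that streams decoded
# brain states; values here are simulated, not decoded from EEG.

import random


class NeuroScaleClient:
    """Simulated stand-in for a cloud brain-state decoding client."""

    def __init__(self, seed: int = 0) -> None:
        self._rng = random.Random(seed)

    def get_state(self) -> dict:
        # A real client would return measures decoded from live
        # biosignal streams; here we fabricate plausible values in [0, 1].
        return {
            "relaxation": round(self._rng.uniform(0.0, 1.0), 2),
            "attention": round(self._rng.uniform(0.0, 1.0), 2),
        }


client = NeuroScaleClient(seed=42)
state = client.get_state()

# An application can now react to the user's decoded state,
# e.g. dim the lights when relaxation is high.
if state["relaxation"] > 0.5:
    print("dim the lights")
else:
    print("keep lights as-is")
```

The point is the integration pattern, not the specifics: a device or app polls (or subscribes to) decoded state measures and adapts its behavior accordingly.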
We believe that such empowerment can help catalyze growth and innovation across multiple domains and applications, ranging from improved human performance and wellbeing to new forms of communication and creative expression, to accelerated scientific research and discovery and new clinical solutions.
For example, in 2014 we co-developed with UCSF’s Adam Gazzaley The Glass Brain, the first VR experience where you could fly through a 3D model of your own brain, seeing its changing network activity in real time. This was all done using state-of-the-art EEG brain mapping.
Today, with just a few lines of code, a developer could access similar pipelines running on NeuroScale and use them to create their own mobile VR or AR experience designed, for instance, to reinforce specific brain functions through an adaptive reward mechanism. Or to create a brain-aware living space where devices connected through the Internet of Things respond dynamically to one’s neural state.
As we enhance and combine these technologies with machine intelligence, we envision powering new systems that are capable of augmenting our capabilities and increasing our productivity, health, and wellbeing.
For instance, we’re currently working with the Human Spaceflight Laboratory at the University of North Dakota, as well as with NTL Group, Cognionics, and researchers from NASA, on research testing the feasibility of using wearable EEG sensors combined with specially designed cognitive assessment protocols to better characterize and predict changes in crew members’ brain performance during simulated lunar or Martian missions.
If successful, technology like this could not only have useful applications here on Earth but could also help to increase the safety and feasibility of long-duration exploration of that other great frontier, the wild of deep space.
Meanwhile, back here on Earth, there’s still a great deal that we must learn about the brain and how it functions in the equally wild, complex, and unpredictable world outside the lab. For instance, a typical cognitive neuroscience study is carried out in a relatively constrained laboratory environment, using sample sizes ranging from tens to at most hundreds of individuals, mostly healthy college students.
But how do brain activity and brain function vary across broader groups of individuals? In different parts of the world? Across a wide range of applications, both in health as well as in dysfunction?
What if we could scale up our research to involve hundreds of thousands, or even millions of participants from around the globe? What if we could better understand how the brain learns across different environments and individuals?
This could enable us to design new kinds of personalized systems that are capable of accelerating learning or enabling individuals to overcome learning challenges.
What if we could enable researchers across disparate domains, for instance a sociologist or a geneticist, to easily test hypotheses at the intersection of neuroscience and their own fields of expertise? What new insights or discoveries might we achieve?
I’d like to invite on to the stage a very special guest, my wife Rachel. There she is. Everyone, meet Rachel. Rachel, meet everyone.
Now, as I’ve been speaking, miniature sensors worn in Rachel’s ears and on her chest are recording her brain and heart activity, and streaming this to the NeuroScale platform where in real time these are being translated into measures of her cardiac performance, relaxation, attention, and other states.
Rachel can grant permission to any internet-connected device or application to access these states in real time. For instance, there’s a web browser visualization which, if we were connected to the internet, would be running. Rachel can also do something like this.
Rachel: Alexa, ask NeuroScale for my average relaxation level over the last five minutes.
Let’s hold it up so that she can respond.
Alexa: Your average relaxation level over the last five minutes was 36%. Not bad, but maybe you want to try taking a little break.
Which is how I feel right about now, too.
Here we can see her activity streaming in real time through NeuroScale.
Rachel can also provide her data in anonymized form to an automated lab in the cloud, enabling researchers to access the results of automated analysis pipelines, to generate reports and test new hypotheses using this and other data. Thanks, Babe.
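The kind of query Rachel made to Alexa, an average relaxation level over the last five minutes, boils down to a rolling mean over timestamped samples. Here is a minimal local sketch of that computation; the function name, window handling, and sample values are illustrative assumptions, not how NeuroScale actually implements it.

```python
def windowed_average(samples, now, window_seconds=300):
    """Mean of (timestamp, value) samples from the last window_seconds."""
    recent = [value for t, value in samples if now - t < window_seconds]
    if not recent:
        return None  # no samples fell inside the window
    return sum(recent) / len(recent)


# Illustrative relaxation readings (0-100%), one per minute, newest first.
now = 1_000_000.0
samples = [(now - 60 * i, v) for i, v in enumerate([36, 40, 30, 38, 36, 90, 95])]

# Only the five most recent samples fall inside the 300-second window.
avg = windowed_average(samples, now)
print(f"Average relaxation over the last five minutes: {avg:.0f}%")
# → Average relaxation over the last five minutes: 36%
```

In a deployed system the samples would arrive as a continuous decoded-state stream, and a voice assistant skill would simply call a query like this on the user’s behalf.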
Over four decades ago, when Jacques Vidal first coined the term brain-computer interface, such a system would have occupied an entire room. Today it can be worn on our bodies and it can be held in our hands. In the coming decades, it will disappear altogether, woven invisibly into our bodies and into the world around us.
Researchers today are working on new kinds of miniaturized neural sensors that could someday be safely embedded throughout the brain, enabling much higher resolution neural recordings. Technology like this could help to enable personalized therapies tailored specifically to a patient’s brain or body.
As scientific research progresses and neurotechnology becomes ubiquitous, we will also have the opportunity to gain a deeper understanding of cognition and emotion. If we so choose, a future like the one described in that Buenos Aires household could become a reality.
Today we stand on the cusp of a paradigm shift. We don’t know what the future holds, and the path ahead is not without obstacles and risks. Many important issues that we face in other technological domains, for instance ethics, privacy, and data ownership, must all be addressed for the emerging neurotech domain.
As a global community, we must work together to engage in informed discussion and open, inclusive debate, to ensure we make wise choices that set us on the right path.
The entire history of human exploration has held in balance both risk and reward in the face of the unknown. The mind is the new frontier, and we are all explorers embarking on a journey that will take us beyond this frontier to discover what we can become.