The neural interface is blurring the lines between human minds and machines on a massive scale. In the past, if someone told me about an electronic device that could essentially “read my thoughts” and interact with my nervous system, I’d picture something like the “Borg” from Star Trek.
Now, the concept is rapidly moving out of the realm of sci-fi and into the real world in various ways. In particular, neural interfaces are revolutionizing the extended reality landscape, introducing new ways for humans to immerse themselves in unique experiences and interact with digital content.
Neural interfaces can build on the expanding potential of “spatial computing” in the XR space and make controllers (like the Meta Quest Pro controllers) a thing of the past. As companies like Meta continue to increase investment in these technologies, now seems like the perfect time to explore what neural interfaces are, how they work, and what they can do for XR.
What is a Neural Interface?
Neural interfaces are electronic devices that interact with the human nervous system. Placed either inside the brain (yikes) or on the outside of the body, these systems can record and stimulate neural activity. They might sound like a new concept – but they’ve been around longer than you’d think.
Cochlear implants and exoskeletons in the healthcare landscape, as well as TENS (transcutaneous electrical nerve stimulation) systems, all use elements of neural interface technology. Electromyography (EMG) sensors are another common type of neural interface device – and the one getting the most attention in the XR space today.
On a broad scale, neural interfaces fall into three categories:
- Invasive interfaces: Electrical devices that need to be surgically implanted into the brain. Neuralink’s brain-computer interface chip is one of the most commonly referenced examples. However, invasive neural interfaces are also present in the healthcare space, like Synchron’s “Stentrode,” which helps patients with severe paralysis.
- Semi-invasive interfaces: These systems place electrodes within the skull, but outside of the brain tissue. Again, these are most common in the healthcare world. They’re used for cochlear implants, or specific treatment strategies. For instance, the NeuroPace RNS System uses neural implants to help treat epilepsy.
- Non-invasive interfaces: Non-invasive interfaces (the type with the most potential in the XR space) use external sensors to detect nerve or brain signals. For example, Meta’s wristband for the prototype Orion AR glasses leverages EMG capabilities to detect muscle movements without the need for additional cameras and sensors.
How Do Neural Interfaces Work?
The functionality of a neural interface varies slightly depending on the “type” of interface in question. Most, however, work by capturing, processing, and responding to bioelectrical signals. The stages involved include (see the sketch after this list):
- Capturing bioelectrical signals: Electrodes and sensors are placed close to the brain or the muscles – wherever the system needs to draw information from. A recording array extracts the raw signals to inform the next stage.
- Signal processing: Once signals are captured, a decoding algorithm filters out excess noise and translates the raw data into meaningful insights and commands. Usually, this translation relies on machine learning algorithms.
- Executing actions: The interpreted commands are then used to complete an action, like controlling an output device (such as a VR headset or a robotic limb). They can also sometimes provide sensory feedback, such as haptic responses.
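To make those stages concrete, here’s a minimal Python sketch of the capture-process-execute loop, run against a synthetic EMG-like signal. Everything in it is illustrative: the 20–450 Hz band is a common choice for surface EMG, and a crude threshold rule stands in for the machine learning decoder a real device would use.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sample rate in Hz (a typical figure for surface EMG)

def bandpass(signal, low=20.0, high=450.0, fs=FS, order=4):
    """Stage 1: isolate the 20-450 Hz band where most surface-EMG energy lives."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def extract_features(window):
    """Stage 2a: reduce a window of samples to simple features for the decoder."""
    return {
        "rms": float(np.sqrt(np.mean(window ** 2))),  # muscle activation level
        "zero_crossings": int(np.sum(np.diff(np.sign(window)) != 0)),
    }

def decode(features, activation_threshold=0.1):
    """Stage 2b: a crude threshold rule standing in for a trained ML decoder."""
    return "grip" if features["rms"] > activation_threshold else "rest"

def execute(command):
    """Stage 3: hand the decoded command to the output device (stubbed here)."""
    print(f"sending '{command}' to the headset")

# One second of simulated raw signal: sensor noise, plus a burst of
# "muscle activity" between samples 400 and 600.
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 0.02, FS)
raw[400:600] += 0.5 * np.sin(2 * np.pi * 80 * np.arange(200) / FS)

filtered = bandpass(raw)
execute(decode(extract_features(filtered[400:600])))  # prints: sending 'grip' ...
execute(decode(extract_features(filtered[0:200])))    # prints: sending 'rest' ...
```

A production decoder would replace the threshold with a trained classifier and run continuously over sliding windows of the incoming stream, but the three-stage shape stays the same.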
The Evolution of Neural Interfaces in XR
Certain types of neural interfaces have been gaining attention in the extended reality landscape for some time now. Way back in 2019, Meta acquired a startup called CTRL-Labs, which was in the process of building an innovative EMG wristband.
This EMG wristband has since been developed further by Meta, which officially introduced it at Meta Connect 2024 as an accessory for its Orion smart glasses. Meta’s wristband (similar to other wristbands and EMG accessories being developed elsewhere) interprets the electrical signals the brain sends down the arm to control hand movements.
These bands don’t rely on cameras and sensors to enable hand tracking. Instead, they noninvasively interpret the tiny electrical motor-nerve signals that travel between your brain and hands.
It’s basically “mind reading for your wrists.” The signals that the band collects can be translated into highly precise and accurate commands. For instance, Meta’s wristband can detect when you’re trying to swipe through apps with your finger or grip an object in extended reality.
The idea is that devices like these will eliminate the need for traditional input devices like controllers, mice, and keyboards. With neural wristbands, you can control a smart device, navigate an interface, type, and more by moving your hands naturally (a rough sketch of that mapping follows below).
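As an illustration of how that replacement might look in software, here’s a hypothetical Python sketch that maps decoded gesture labels to the same neutral input events a controller or keyboard would produce. The gesture names and event fields are invented for the example; no vendor API is implied.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class InputEvent:
    """A neutral event the XR shell consumes, whatever the input source."""
    action: str
    payload: dict

# Invented bindings from decoded wrist gestures to the same events a
# controller or keyboard would have produced.
GESTURE_BINDINGS: dict[str, Callable[[], InputEvent]] = {
    "swipe_right": lambda: InputEvent("navigate", {"direction": "next_app"}),
    "pinch":       lambda: InputEvent("select", {}),
    "grip":        lambda: InputEvent("grab", {"mode": "hold"}),
}

def dispatch(gesture: str) -> Optional[InputEvent]:
    """Translate a decoded gesture label into an input event; ignore unknowns."""
    binding = GESTURE_BINDINGS.get(gesture)
    return binding() if binding else None

# Labels as they might arrive, one per window, from an EMG decoder.
for gesture in ["swipe_right", "rest", "pinch"]:
    event = dispatch(gesture)
    if event is not None:
        print(event)
```

Keeping the event layer neutral like this is what would let a wristband slot in beside, or instead of, existing peripherals without applications needing to know the difference.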
More than just making extended reality experiences more immersive, the same neural interface solutions could transform the accessibility of technology, allowing even people with limited mobility to interact with digital content in a more advanced, accurate way.
Beyond Meta: XR Companies Investing in Neural Tech
Thanks to all the hype around the prototype Orion glasses, Meta’s neural wristband is currently getting the most attention in the XR space. However, other innovative companies are getting involved, too.
OpenBCI introduced a prototype of a neuro-powered headset in 2023. The mixed-reality device, named Galea, combined spatial computing and neurotechnology to upgrade existing headsets, like the Varjo XR-3.
Like Meta’s wristband, this headset took advantage of various neural interface technologies, starting with EMG. Sensors around a face mask tracked the movements of facial muscles, translating them into actions and insights for the headset itself. Beyond this, however, the headset also featured electrodermal activity sensors, which tracked sweat and heat on a user’s skin, similar to the sensors in a Fitbit Sense smartwatch.
Plus, it included PPG (heart-rate sensing) technology, similar to what you’ll find in many fitness trackers and smartwatches. Together, these tools form a sensor array that gathers user information in real time and customizes the XR experience to specific needs (sketched below).
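Here’s a hypothetical Python sketch of what fusing those three channels into a single user-state estimate might look like. The field names, weights, and the 0–1 “arousal” score are all assumptions made for illustration; a real system like Galea would rely on calibrated models rather than a hand-tuned weighted average.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One synchronized reading from the headset's sensor array.
    Field names are illustrative; EMG and EDA are normalized to 0-1 upstream."""
    emg_activation: float   # facial-muscle activity
    eda_level: float        # electrodermal activity (sweat response)
    heart_rate_bpm: float   # from the PPG sensor

def estimate_arousal(frame: SensorFrame, resting_hr: float = 65.0) -> float:
    """Fuse the three channels into a crude 0-1 "arousal" score using a
    hand-tuned weighted average (a placeholder for a calibrated model)."""
    hr_component = min(max((frame.heart_rate_bpm - resting_hr) / 60.0, 0.0), 1.0)
    return 0.3 * frame.emg_activation + 0.3 * frame.eda_level + 0.4 * hr_component

calm = SensorFrame(emg_activation=0.1, eda_level=0.2, heart_rate_bpm=68)
intense = SensorFrame(emg_activation=0.7, eda_level=0.8, heart_rate_bpm=110)
print(f"calm: {estimate_arousal(calm):.2f}")        # ~0.11
print(f"intense: {estimate_arousal(intense):.2f}")  # 0.75
```

An experience could then read that single score each frame to dial intensity up or down, instead of juggling three raw sensor streams.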
Beyond OpenBCI, other companies have also begun introducing revolutionary concepts into the XR space over the years. Neurable, for instance, made news in 2017 for creating the world’s first brain-controlled virtual reality game. Now, the company is working on new devices and collaborating with the military on new training systems.
NextMind offered an open-source devkit that any developer could use to build mind-controlled applications; it made waves at CES 2020 alongside the team’s wearable “brain-sensing” device. Emotiv, a builder of EEG monitoring headsets for scientific research and personal use, even offers a variety of top-of-the-line headsets with neural interface capabilities, like the Emotiv Pro.
The Benefits of Neural Interface Tech for Extended Reality
On a broad level, the main benefit of neural interface technology is that it will make interacting with immersive experiences feel more natural.
Solutions like neural wristbands in the AR, VR, and MR space enable more seamless integration between the digital and physical worlds. They allow us to use our hands and fingers to experiment with applications, build products, and even create art with exceptional accuracy.
If the future of the metaverse lies in making immersive experiences feel as natural and organic as possible, neural interfaces could be the key to success.
People could create prototypes in XR with more control than ever before. More granular tracking of muscle movements and direct feedback could improve training initiatives. Healthcare professionals could even use neural interfaces to perform surgeries from a distance.
This tech could fundamentally transform how we interact with computers, closing the gap between people and machines. Even in the gaming landscape, neural interfaces are opening up new frontiers. For instance, Wisear’s neural interface headphones allow gamers to control specific in-game actions with facial movements.
Plus, with the ability to capture so much data about usage patterns, movements, stress levels, heart rates, and more, neural interfaces could power a new age of research. They could unlock deeper insights that help us build more effective XR solutions for things like physical rehabilitation or personalized learning and development.
The Problems with Neural Technology
As interest in neural interface technology continues to grow, concerns are emerging, too. Although companies like Meta promise that they’re not using this technology to “read minds,” privacy and safety issues must be considered.
If XR devices can capture more personal information about users, what does this mean for data protection? Many consumers are concerned that retailers and other companies may use neural interface systems to learn more about them than they want to reveal.
For instance, a company could use neural data to learn what types of customers are more likely to respond positively to specific products. This would allow them to tailor their marketing and sales strategies more effectively. However, the same company could share that sensitive data with third parties, creating new privacy risks.
Then, there’s also the risk that a neural interface could misinterpret data, leading to other potential dangers. For instance, if a neural interface were used to train an engineering employee on repairing a piece of machinery, a misread signal could prompt a headset to give the staff member the wrong instructions, putting them at risk.
Beyond that, there are technical challenges to consider, too. External neural interfaces (non-invasive options) aren’t as effective at detecting signals as their invasive alternatives. Most people wouldn’t want to have a chip embedded into their heads just to upgrade their XR experience, so it’s unlikely XR vendors will start looking at “invasive” options to fix this problem.
Even if they did, invasive interfaces have biological challenges to overcome. Researchers are still exploring ways to minimize tissue damage caused by embedded chips.
The Future of the Neural Interface in XR
Advanced neural interfaces for XR technologies are still in their early stages. However, many large companies, beyond Meta, are already conducting research. Apple, for instance, appears to be actively researching brain-computer interfaces, judging by patent filings that have surfaced.
Snap also acquired NextMind, a Paris-based neurotechnology company, in 2022. However, it hasn’t revealed whether it’s designing neural interfaces for its own smart glasses. On a broad scale, there seems to be a future for these kinds of “control” systems. For the most part, companies will likely focus on non-invasive options, like Meta’s neural wristband.
This could lead to many potential benefits in the XR space. For instance, imagine participating in an immersive meeting where you can manipulate 3D objects with accuracy and build prototypes without controllers or other traditional peripherals. Neural wristbands could make this a reality. Combined with haptic feedback solutions and AI, neural interfaces could pave the way for better training and development experiences.
A neural system could track your stress levels, heart rate, and minute muscle movements in real time and tailor a training experience to your specific needs, as in the rough sketch below.
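As a toy illustration of that idea, this Python sketch nudges a training scenario’s difficulty so a learner’s measured stress stays inside a target band. The proportional rule and thresholds are invented for the example, not any vendor’s algorithm.

```python
def adapt_difficulty(current: float, stress: float,
                     target: float = 0.5, band: float = 0.1,
                     step: float = 0.1) -> float:
    """Nudge difficulty (0-1) so measured stress (0-1) tracks a target band:
    back off when stress runs high, push harder when it runs low."""
    if stress > target + band:
        current -= step
    elif stress < target - band:
        current += step
    return min(max(current, 0.0), 1.0)

# Stress readings as they might arrive over a training session.
difficulty = 0.5
for stress in [0.2, 0.3, 0.7, 0.8, 0.5]:
    difficulty = adapt_difficulty(difficulty, stress)
    print(f"stress={stress:.1f} -> difficulty={difficulty:.2f}")
```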
Still, XR developers will have to navigate a rocky road. They’ll have to find the right balance between building user-friendly tech and overcoming common ethical, security, and privacy issues. Now that Meta has shown off its own neural wristband, however, we think many new companies will begin to follow suit. In the next few years, non-invasive neural interfaces could become the ultimate must-have accessory for XR users.