Guest Column | November 13, 2025

Non-Invasive Brain-Computer Interface Trial Balances Regulatory Requirements With Patient Needs

A conversation with Cognixion Corporation’s CTO Chris Ullrich


People with amyotrophic lateral sclerosis (ALS) and other conditions affecting verbal speech experience a variable, often progressive, loss of the ability to speak. Existing aids have helped improve their ability to communicate. Low-tech aids range from picture books to letter boards to simple gestures. Higher-tech aids include devices responsive to eye gaze, switches (a reliable, voluntary movement), and head or facial movement. Unfortunately, many of these aids are clunky, slow, or inaccurate. What’s more, in the more advanced stages of ALS, degrading muscle control can make even these simple movements impossible and thus render the aids useless.

Cognixion aims to overcome these limitations with the study of its Axon-R Nucleus bio-sensing hub, a non-invasive brain-computer interface integrated with an Apple Vision Pro augmented reality headset to enable communication through thought.

In this Q&A, Cognixion CTO Chris Ullrich introduces the device and walks us through its feasibility trial, highlighting the importance of patient input, caregiver quality of life, and regulatory support.

Clinical Leader: Cognixion’s Nucleus bio-sensing hub and Apple Vision Pro are configured together to enable seamless hands-free, voice-free communication. How does it work?

Chris Ullrich: We've developed a brain-computer interface (BCI) technology to enable communication for persons with motor speech impairments. There's a brain-sensing component that sits on the back of the user's head and senses EEG signals from the occipital cortex. And the front of the device has an augmented reality display.

In that display, we present to the user specific types of stimuli and associate those stimuli with things they might want to do or say. Imagine you have three or four blinking targets in your field of view, and beside each one is a little tag that says, “I want to say hello,” “I want to say goodbye,” and “I want to ask a question.”

In the human visual system, if I present a stimulus that has a specific frequency, then the V1 layer of your visual cortex will synchronize with that stimulus. If I flash a strobe in your eyes at 7 hertz, your visual cortex will respond at 7 hertz. But if there are multiple such stimuli at different frequencies, your prefrontal cortex decides which one you're actually synchronizing with. And that's a decision process you do through attention. There's no other physical motion required — no eye tracking, no muscle control. Individuals who have late-stage ALS may not be able to use their muscles reliably, so this attention-based mechanism allows them to interact with the computer and make choices using only their attention.
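
To make the frequency-tagging idea concrete, here is a minimal, hypothetical sketch of how an attended target could be identified from an occipital EEG channel: score the power at each target's flicker frequency (and its second harmonic) and pick the strongest. The function name, window length, and frequencies are illustrative assumptions, not Cognixion's actual decoding pipeline.

```python
import numpy as np

def detect_attended_target(eeg: np.ndarray, fs: float, target_freqs: list[float]) -> int:
    """Return the index of the flicker frequency with the strongest evoked response.

    eeg: one occipital EEG channel, shape (n_samples,)
    fs: sampling rate in Hz
    target_freqs: the flicker frequency assigned to each on-screen target
    """
    n = len(eeg)
    windowed = eeg * np.hanning(n)               # taper to reduce spectral leakage
    power = np.abs(np.fft.rfft(windowed)) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    scores = []
    for f in target_freqs:
        # Sum power at the stimulus frequency and its second harmonic,
        # since steady-state visual responses typically appear at both.
        score = sum(power[np.argmin(np.abs(freqs - h))] for h in (f, 2 * f))
        scores.append(score)
    return int(np.argmax(scores))

# Example with stand-in data: three targets at 7, 9, and 11 Hz, a 2-second window at 256 Hz.
# rng = np.random.default_rng(0)
# window = rng.standard_normal(512)
# print(detect_attended_target(window, 256.0, [7.0, 9.0, 11.0]))
```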

On top of that, we've added a generative AI layer. We interview each person at the beginning to understand their history, their family, their likes, their dislikes, their biography, and if they have any written materials. We use that information to generate an AI chatbot of that user. While the person listens to the conversation, the chatbot also listens. The generative AI component transitions the conversation from simple single words to fully formed phrases and sentences that sound very conversation-like. This allows the user to have more meaningful turn-taking conversations that go into more depth.
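
The generative layer Ullrich describes can be pictured as a prompt built from the user's intake profile and the live conversation, whose suggested replies then become the selectable targets. The sketch below is a hypothetical illustration of that flow only; the UserProfile fields, prompt wording, and build_reply_prompt function are assumptions, not Cognixion's implementation.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    biography: str            # distilled from the intake interview
    preferences: list[str]    # likes, dislikes, recurring topics

def build_reply_prompt(profile: UserProfile, recent_turns: list[str]) -> str:
    """Assemble a prompt asking a language model for personalized reply candidates.

    The returned suggestions would be rendered as the frequency-tagged targets
    the user selects among with attention, instead of spelling letter by letter.
    """
    context = "\n".join(recent_turns[-6:])   # keep only the last few conversational turns
    return (
        f"You are drafting replies on behalf of {profile.name}.\n"
        f"Background: {profile.biography}\n"
        f"Preferences: {', '.join(profile.preferences)}\n"
        f"Conversation so far:\n{context}\n"
        "Propose three short, natural replies in this person's voice."
    )

# Hypothetical usage:
# profile = UserProfile("Alex", "Retired teacher, ALS diagnosed 2021.", ["gardening", "jazz"])
# print(build_reply_prompt(profile, ["Visitor: How was your week?"]))
```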

So far, what do you understand about your device and its usability?

There are sort of two parts to it. One part is the way the protocol was structured. We're constantly iterating the software system as we move through the study to refine the usability. One of the key measures is the System Usability Scale (SUS), a questionnaire in which the items alternate between positively and negatively worded statements, and it gives a score from zero to 100 of how usable the user perceives the system to be. A score above 70 is considered to be top tier. We actually have a few participants in our study who have given a 71. So, we feel we're on the right track.
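
For reference, SUS scores like the ones Ullrich cites are conventionally computed from ten 1-to-5 Likert items with alternating positive and negative wording, scaled to a 0-100 range. The sketch below shows that generic scoring arithmetic; it is an illustration, not Cognixion's study instrument.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score (0-100) from ten 1-to-5 Likert responses.

    Odd-numbered items are positively worded (item score = response - 1);
    even-numbered items are negatively worded (item score = 5 - response).
    The summed item scores are multiplied by 2.5 to reach the 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 1 else (5 - r) for i, r in enumerate(responses, start=1))
    return total * 2.5

# Example: a fairly positive response pattern lands in the low 70s.
# print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 3]))  # 72.5
```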

The second one is the rate at which they can communicate. The challenge is that there are two pieces to it. One piece is the information transfer rate (ITR) between the user and the device. A formula gives you a measure of how many bits per minute the user exchanges with the interface. We have seen participants reach an ITR as high as 30, which is about 30 choices per minute and is extremely fast. That can then be translated into words per minute (WPM). But words per minute is kind of a weird measure because, for example, if my generative AI composes an essay for me, and I just say yes, that's one choice, and I get a hundred words or a thousand words per minute. WPM is a very commonly requested measure for us, but it actually doesn't mean that much in the context of generative AI. It's more about the ITR, the rate at which you can actually control the device.
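
The ITR Ullrich references is most commonly computed with the standard Wolpaw formula, which combines the number of targets, selection accuracy, and selection rate into bits per minute. The sketch below shows that generic formula with made-up numbers; it is illustrative and does not reflect Cognixion's reported data.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate in bits per minute, per the standard Wolpaw formula.

    n_targets: number of selectable targets shown to the user
    accuracy: probability the intended target is selected (0 < accuracy <= 1)
    selections_per_min: how many selections the user completes per minute
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits_per_selection = math.log2(n)   # perfect accuracy: log2(N) bits per choice
    else:
        bits_per_selection = (
            math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1))
        )
    return bits_per_selection * selections_per_min

# Example with made-up numbers: 4 targets, 90% accuracy, 20 selections per minute.
# print(round(wolpaw_itr(4, 0.90, 20), 1))  # about 27.5 bits per minute
```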

And so, what are regulatory bodies looking for?

It's a hot topic. There's an implantable BCI committee that includes companies such as Precision Neuroscience and Neuralink, the FDA, and researchers who've been working in the BCI space for many years. There's also a consortium called BrainGate that does implantable BCI research. A key debate topic across these groups is what the appropriate measure of efficacy is, and they don't really have a great answer.

In our Breakthrough submission to the FDA, we had this exact debate with the reviewer, and we kind of aligned on ITR, but they were definitely leaning toward WPM as being meaningful. The question is, meaningful in what context?

The trick is to make sure that it's a well-defined measure so you can use it as a meaningful demonstration of efficacy. When we're looking at the competition, which is a letter board or a Tobii Dynavox tablet, the published literature is on words per minute and turn-taking per minute. And so, we're looking to at least demonstrate equivalence with those existing measures, even though we believe that with generative AI you can do much better. From an FDA perspective, I don't know that that's going to be an easy measure to regulate around.

As for the outcome measures, what do patients and caregivers expect? How do you reconcile that with what the regulators want?

I don't think that they're necessarily inconsistent. Regulators are just looking for a scientific methodology that gives you a repeatable measure you can rely on to make decisions. When you're talking about communication, there are a bunch of subjective or soft components that have a big impact on the user experience and the user's perception of the system. And so, my team is primarily thinking about how someone who's communicating with a letter board experiences the very common situation of being left behind in the conversation.

Even with the Tobii Dynavox system, the user will compose a reply, and they'll speak it, but it will be three to four minutes after the point in the conversation where you would normally say such a thing. What you end up with is a lot of confusion about what the person's talking about, or a social dynamic that moves faster than they can keep up with, and so they feel left out. And you can measure that in terms of quality of life. Like, “I feel like now I can actually participate in my conversation with my wife or my husband, and that really makes me feel a lot better.” But we don't yet know over what time period that would happen or how precisely that would be measured by that kind of survey. So, we're still trying to find the right way to articulate that.

But subjectively, when you talk to the participants, they're saying this is much better. We're starting to look at agency enablement as well. It's actually not part of the current study, but in the study with the Apple device, we're starting to incorporate things like WhatsApp and YouTube control. So, beyond composing phrases, you can actually compose requests to have the agent do stuff on your behalf, like send a text message. It’s a similar kind of interface, but now it's giving them agency to be self-sufficient. But the measures around that are even more complicated. So, is it quality of life? Is that the only thing you can do, or is there some other measure? And that's something we still just don't know.

Speaking about the participants, tell me about the ideal patient population.

The patient has to have a caregiver who is confident enough to use this kind of bleeding-edge technology without being intimidated by it. The caregiver is instrumental, because our goal here is to understand to what degree the caregiver can be replaced by this type of device. Ideally, we get to the place where the caregiver's job is just to put it on and take it off. But there are some usability issues in setting it up, getting it configured properly, and making sure that they're comfortable; all this kind of stuff is what we're working through with the study.

And we’re also interviewing the caregivers to understand what the pain points are. We actually have a pivotal plan wherein the caregiver's quality of life is a secondary endpoint.

And when we screen candidates, we're looking specifically for participants who have that level of support and who are okay if, for instance, there's a problem with the power connector or the USB cable.

In addition, we're looking for participants who feel comfortable spending hours per week in this kind of device, perhaps initially struggling to get it to work, and who have that commitment to using a prototype device. And that's hard to articulate in a screener in the IRB proposal. So, we're having one- to two-hour interviews with them at the beginning to make sure that all those kinds of squishy, non-precise aspects are well understood by both parties.

What, then, does the future look like for a larger feasibility trial or a pivotal trial for efficacy?

Our goal is to move as quickly as we can into pivotal, but not unnecessarily quickly, probably in the first half of next year. One of the things we hope to learn from the usability trial is the effect size of these key measures, which will inform the study population size for our randomized trial. That’s going to be the biggest thing that drives the pivotal cost and time.

When you're recruiting mid- to late-stage ALS, it's actually quite hard to find people. Maybe 25,000 to 30,000 people in the U.S. at any one time have this condition, and it is highly heterogeneous in terms of presentation, so it can be challenging to find a sufficient number of participants. When we first started the trial we're doing right now, we thought we were going to restrict the recruiting to the greater Los Angeles area, but we quickly had to expand that to almost the entire Western U.S. to find enough participants.

For a pivotal, if we're talking about 30 to 40 participants, we're probably looking at recruiting across the entire U.S. And the challenge with ALS is that people drop out for a lot of different reasons because their health can change rapidly.

How does that impact site and/or PI selection?

We have a really good relationship with the ALS Association, which is the biggest advocacy group in the U.S. They actually invested in Cognixion in 2025. They have a network of about 200 centers of excellence in the U.S. and part of our relationship is for them to provide access to those sites.

Of all possible recruiting networks, I don’t think I can imagine a better one. We've been able to do the feasibility study ourselves, but when we get to the pivotal, we're going to use that network to expand. And there are sites in the U.S. where there's a greater concentration of care centers focused on ALS, such as Georgia, Florida, the Bay Area, and New York. There are a few places where we can try to centralize these things as much as possible.

We often hear about the importance of having a relationship with the patients and their advocates. How does Cognixion see things?

That's actually one of the things about Cognixion that gives us competitive positioning. We established the Brainiac Council about six years ago. It consists of about 200 people with lived experience of motor speech impairment — not just the patients, but also their caregivers and healthcare professionals. So, it's not just ALS, but cerebral palsy, Parkinson's, stroke, and many other conditions that cause motor speech impairment. This community helps inform our product strategy and our ability to understand the actual problems that those users are having, so we can focus on finding solutions.

About The Expert:

Chris Ullrich is the CTO at Cognixion Corp. Mr. Ullrich holds an M.Sc. and a B.Math in Applied Mathematics. He is a prolific researcher and inventor with more than 100 issued patent families (more than 500 individual patents) that span VR/AR, mobility, automotive, and human-machine interaction. These inventions are embodied in technologies licensed into more than 3B consumer and professional devices, representing more than $300M in licensing revenue from customers including Apple, Samsung, Google, Meta, Medtronic, and Stryker. Over his 25-year career, Mr. Ullrich has specialized in algorithmic innovation for human-machine interaction. He has successfully led early-stage, full-stack hardware/software/UX research projects from conception to productization. Prior to his current role, Mr. Ullrich was CTO at Immersion Corp., where he led a team of 30 researchers and engineers and managed a litigation-grade portfolio of more than 3,000 patent assets covering the mobile, gaming, medical, and automotive markets.