It's Never a One-Sided Problem
A few tales as old as time plague the Brain-Computer Interface (BCI) world:
"Invasive BCIs are the only way to get accurate information about the brain; eventually people will come around to using them" vs. "Non-invasive is the path to building BCIs that people will actually want to use; engineering innovation will make that path feasible"
As someone who's building a non-invasive BCI control system, I'm sure you won't have to think too hard about which camp I fall into.
Within each camp there's often further discourse on where innovation will come from. One group argues that data from your brain (specifically, data collected non-invasively) is noisy, and that no amount of computational improvement will change this: the data will only become useful once the hardware improves. On the other side is the belief that computation alone will solve the noise problem, as long as the hardware is of standard quality.
Despite the polarizing stances, there seems to be an unvocalized, and perhaps subliminal, expectation that the technology the other party is advocating for will eventually get "good enough" for further advancements in your own field to cause a breakthrough. Those advocating that the revolution comes from computation alone seem to hold an underlying assumption that signal acquisition devices will improve slowly and are therefore not worth working on, and vice versa.
In my mind this brings up two interesting points: A) You always need a foundation to build on, and B) Building in immature industries requires solving multiple technical problems.
Let me give you a non-technical example to explain what I mean.
Imagine a painter and his canvas. Whenever he wishes to, he picks up his paintbrush and paints away with the paints he has, creating the vision in his (or his customer's) mind. As you reduce the number of paints available, the painter needs to get more creative to produce the same message, mixing new colors to fill the palette. But with no paint, canvas, or paintbrush, the painter can't create at all. The art is made possible by the foundation of tools and paints available. A GPT would be useless without language data to learn from or computers to train on. And when there's a massive jump or improvement in one sphere, it opens the door for another. When AlexNet hit the deep learning sphere, it famously showed what was possible when you trained models on GPUs and kickstarted a massive deep learning revolution.
AlexNet showed what a painter could do with a canvas, a brush, and a variety of paints. The limit then became the artist's expression, his computational innovation. But what happens when the artist travels to a different country, is asked to paint a sunset, and finds that of the red, yellow, and blue needed to build his palette, the red is missing? Without foundational paints in his toolkit, his ability to express and innovate from his thoughts is limited.
This is what we've found developing an electroencephalography-based real-time control system for upper-limb prosthetics. We put a point of emphasis on being as creative as possible, letting the artist paint beautiful seascapes with as few paints as possible, which drives computational innovation. But we also ensure that the three primary colors the artist's palette needs travel with him no matter where in the world he goes: that his hardware is reliable.
When a technology's practical application is still immature, it sometimes takes tackling the problem from both sides, innovating on both the hardware and the algorithms, to deliver for the customers you care so much about.
At the end of the day, we all want more beautiful paintings. What you have to keep in mind is that it's rarely a single solution that makes the change.