7 min read

Introducing - Olive Intelligence

Improving therapies for speech-language disorders.

I recently introduced Olive Intelligence to the world on my personal 𝕏 account. I'm sharing it here for easier access for the curious people of the future.

Today, I'm excited to announce Olive Intelligence!

Olive is my master plan to drastically improve care for the millions of people whose ability to communicate is deeply impaired by speech-language disorders (a related family of communicative neurological impairments).
Olive Intelligence intro image
Communication is the cornerstone of society. But 80M people stutter. 40M have post-stroke aphasia. 10M have Parkinson's-induced dysarthria. And many more have apraxia, speech sound disorders (SSD), and other fluency and voice disorders.

Life becomes miserable and hard. You feel helpless. Powerless. Jailed by your own brain.
There's no cure yet! These are complex neurological problems. These conditions often require long-term rehabilitation and have very high variance - no two cases are alike.

The band-aid "solution" we have is traditional (human) speech-language pathology (SLP). Better than nothing!
Speech pathology illustration
But it rarely produces the desired outcomes. It is fundamentally inadequate.

Can't afford 100 x $150/hr sessions? Timely AND personalized care not available? No post-session care? Couldn't find that rare effective SLP? Sucks for you, hard!

Countless other reasons.
Got super lucky? Chances are you're still stuck at a minimum-viable-bearable outcome.

But why is that? Why is the current care provision so inadequate?

More importantly, what will it take to solve it?
I have a stutter. I had a very severe case until recently. Speaking a sentence made me gasp for breath. I hated being alive.

I went through therapy for years. It was a horribly clunky process. Inefficient by design. Outdated. Luddite, complacent practitioners. It just sucked!
By pure luck, I briefly found that one rare Michael Jordan-tier SLP who completely changed my life (thanks, Camille!), even though it took two more years of hard work. I can finally speak now. Finally, glad to be alive!

But the majority never meet their Michael Jordan™ pathologist.
The problem is so grave that I couldn't bear to leave it alone. My uncle has post-stroke aphasia.

I tried looking for ways to solve the core problems in this field by optimizing the tech stack around human SLPs. But its very structure heavily restricts serious improvement.
Got aphasia after a stroke? Optimal outcomes require ~100 sessions that year with a good SLP. A big challenge! You need tons of post-clinic exercising, yet the best digital aid currently available is so bad it should be a civilizational shame.

Same with stuttering. Same with Parkinson's.
To 10x-improve something, however, you need to question every deep-seated assumption of that problem:

Why so many sessions? Can't this rehab period be optimized more, given how time-critical recovery is with every passing day?
Now, the goal at @oliveintel is to drastically improve both access to speech-language pathology and what therapy can achieve in the first place.

We will do it as a single company, with one focused mission, by fixing what is possibly the gravest issue in the field:
Think of the problem broadly in terms of read-process-write in computing:

A therapist:
- reads biomarkers.
- processes/simulates possible internal states.
- writes some therapeutic payload to the brain to prompt potential rewiring.

Currently, this entire chain (feedback loop) is broken.
READ: A human therapist naturally has very serious limitations in both the throughput of the READ and the resolution, types, and volume of biomarkers they can READ. Biomarkers are the therapist's sensor inputs; without them, they are blind to what's happening with the patient.
PROCESS: Every WRITE given to the patient works towards a therapeutic intervention. But to do that, a therapist needs to simulate how that WRITE would manifest in the brain, in light of longitudinal changes in biomarkers.

99% of human pathologists devastatingly fall flat here.
WRITE: The brain is highly chemical, stochastic, and nonlinear. Any successful WRITE that drives rewiring (neuroplasticity) is difficult. The less adaptive the PROCESS, the less precise the WRITE, and the lower the chances of the therapeutic payload sticking.
This feedback loop is incredibly tight. Given the nature of the brain, the loop needs an enormous number of cycles to be optimally effective.

A human therapist is ill-suited for this. The PROCESS stack is critical, and it's a battle that data- and compute-constrained entities will lose.
The loop has to be able to regularly refine the agent's PROCESS model given the READ-WRITE pairs. This, abstractly, is key to improving therapeutics. More than read-writes, this is the biggest broken link in the human therapist chain.
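Abstractly, the READ-PROCESS-WRITE loop above can be sketched in code. Everything here is illustrative (hypothetical names, toy types), just to make the framing concrete, not an actual architecture:

```python
from dataclasses import dataclass, field
from typing import Callable

# Minimal sketch of the READ-PROCESS-WRITE feedback loop.
# All names and types here are illustrative, not a real system.

@dataclass
class TherapyLoop:
    read: Callable[[], dict]               # sense biomarkers (e.g. disfluency rate)
    process: Callable[[dict, list], dict]  # simulate patient state, pick intervention
    write: Callable[[dict], None]          # deliver the therapeutic payload
    history: list = field(default_factory=list)  # longitudinal READ-WRITE pairs

    def cycle(self) -> dict:
        biomarkers = self.read()
        intervention = self.process(biomarkers, self.history)
        self.write(intervention)
        # Accumulated READ-WRITE pairs are what lets you refine PROCESS over time.
        self.history.append((biomarkers, intervention))
        return intervention
```

The key property is the last line: every cycle leaves behind a READ-WRITE pair, and refining PROCESS against that growing history is exactly the broken link described above.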

We will fix this!
This therapist doesn't have to take a standard conversation-only chatbot form. But that is the fastest to start with.

Over time, we plan on testing with neurostimulation technologies (AI-in-the-loop) and improved comp neuro models plugged into better READ data (EEG, etc.).
The prospect of future pharmacological and neurostimulation interventions augmented by an AI-in-the-loop therapist is exciting!

Without all this, we will forever be constrained by the massive limitations of the human therapist model.

These conditions are too important to be a Luddite!
So... what are we doing, then?

First, we're working on creating an AI SLP app to jumpstart the above loop. The tech (and data) to enable most of this at a valuable-enough level, frankly, doesn't exist. So we're building it from scratch.
For fluency disorders (stuttering), understanding disfluencies (a biomarker) in the voice and their types is critical, since therapy is personalized on them.

We now have a stuttering disfluency event detection/classification model that competes with or beats public SOTA models.
Understanding a patient's breathing (another biomarker) is important for many speech-language disorders. We've started work on transcribing and understanding breaths from mic audio.

Several acoustic vocal markers give insight into a person's speech. We're working on turning signal data into insight.
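To make "signal data -> insight" concrete, here is a toy NumPy sketch computing a few generic acoustic markers from raw mono audio. The features, frame sizes, and thresholds are illustrative, not the clinical markers or tuned values an actual pipeline would use:

```python
import numpy as np

# Sketch: a few simple acoustic markers from a raw mono signal.
# Frame sizes and the silence threshold are illustrative, not tuned values.

def acoustic_markers(signal, sr=16000, frame_len=400, hop=160):
    # Framewise view of the signal: 25 ms windows, 10 ms hop at 16 kHz.
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame_len)[::hop]
    rms = np.sqrt(np.mean(frames ** 2, axis=1))  # loudness per frame
    # Zero-crossing rate: fraction of sample-to-sample sign changes (noisiness).
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    silence = rms < 0.1 * rms.max()              # crude pause detector
    return {
        "mean_rms": float(rms.mean()),
        "mean_zcr": float(zcr.mean()),
        "pause_ratio": float(silence.mean()),    # fraction of frames judged silent
    }
```

Even these crude numbers hint at the idea: pause ratio and energy trajectories are the kind of longitudinal signals a READ layer can track across sessions.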
In short, we're currently building for the READ part of the problem (diagnostics) and working on augmenting that with our AI SLP.

The tools and workflows to enable this loop will need to be created. We will do it. We love tech! We love neuroscience! Will be a fun adventure! 🚢☀️
*That's not to say improved diagnostics, better longitudinal assessments, and better therapeutic protocols aren't as important as finding better interventions. We will vastly improve them too.*

We will improve every part of the stack to enable improved therapeutic outcomes.

The original tweet thread is here.

We see a future where these communication disorders go from being manageable to as close to treatable as possible (given reasonable constraints). And we’re working on improving therapeutics themselves to get us there, starting with conversational therapy.