Beyond the pilot: why most healthcare AI fails (and how to make it stick)

There has been no shortage of AI talk in healthcare over the past few years. What has been harder to find is a useful conversation about where AI actually fits, what makes it fail, and what it takes to make it stick. That is why Throw AI at the Wall and See What Sticks, a session at ViVE 2026, stood out. It was less about “where can we add AI?” and more about “what problem deserves it, and what has to be true for it to work?”
If you are currently evaluating an AI initiative and feeling the pressure to “have an AI story”, but you aren't yet sure where it will actually move the needle, you are not alone.
In healthcare, AI stops being useful the moment it becomes a generic layer dropped on top of a broken workflow. This article is for the teams trying to figure out if a tool will reduce real operational burden or just create more noise.
We believe the standards discussed at the ViVE 2026 panel offer a roadmap for anyone looking to build something that actually sticks.
Where the conversation is finally getting sharper
One of the clearest takeaways from the talk was that the healthcare AI conversation is starting to mature. The discussion kept coming back to intentionality: what is actually improving care or operations, what is only generating noise, and what needs to be in place before an AI initiative can scale.
In other words, AI often breaks down long before it reaches the stage where model quality is the only thing that matters. That is an important correction to the way these projects are often discussed: it is easier to say a tool underperformed than to admit the workflow was never redesigned, the staff was never brought along, and the team never got clear on why the tool was there in the first place.
Strategy first: defining the "why" before the "how"
Another strong point from the panel was how often organizations started with the technology instead of the need. One speaker described the common request they heard from healthcare leaders in the early generative AI wave: “help us implement generative AI.” Their response was the right one: “what problem are we trying to solve?”
That may sound obvious, but it still cuts against a lot of current AI planning. In healthcare especially, the pressure to “have an AI story” can distort prioritization.
The panel pushed back on that. AI is not a fix for everything. It is better suited to some problems than others, and part of the work is being honest about the difference. They pointed to areas like chronic disease management, where the issue is often not a lack of knowledge or protocols, but the inability to scale knowledge into action. That is a much better fit for AI than vague ambitions to make an organization “more innovative.”
Where healthcare AI is already delivering value
The panel did not argue that healthcare AI is mostly smoke. In fact, it highlighted examples where real value is already showing up.
Ambient documentation came up as one of the more credible current use cases. But even there, the interesting part was how much more serious organizations have become about measuring impact. The discussion moved beyond simple productivity claims and into a broader view of ROI that included clinician burnout, retention, quality metrics, and downstream operational effects.
One health system example described improvement in both hard and soft outcomes, including productivity, coding-related metrics, and a reported reduction in burnout symptoms over the course of the trial. That broader definition of ROI matters.
In healthcare, a project can look “efficient” on paper while making clinical work worse. The panel was clear that this is not enough. If you save clinician time, the next question is what happens with that time: is the result better care, better working conditions, better retention, or just one more dashboard claiming success?
The use cases that feel more durable
One example that stood out involved diabetes management at Montefiore Health System. The challenge was familiar and operational: there was a broad middle group of patients who were neither low risk nor the highest risk, and the opportunity was to improve proactive care for them.
The work described in the talk involved pulling together clinical records, social needs data, and CGM data, which today often sit across separate systems and portals. From there, the goal was to support risk stratification, prediction, and decision support for both clinicians and patients.
What made that example compelling was that it showed AI being used where healthcare often struggles most: making fragmented information more usable and helping teams act sooner. It also reflected something important about implementation. The work involved clinical stakeholders, IT, and population health teams, which is usually what it takes for a care-facing use case to have a real shot at sticking.
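To make the idea concrete, here is a minimal sketch of what stratifying a merged patient record into low, middle, and high tiers could look like. Everything in it is an assumption for illustration: the PatientSnapshot fields, the thresholds, and the rules are hypothetical and are not the panel's or Montefiore's actual model, which would rely on richer data and validated prediction rather than hand-written cutoffs.

```python
from dataclasses import dataclass

# Hypothetical merged patient record: field names and thresholds are
# illustrative only, not an actual clinical model.
@dataclass
class PatientSnapshot:
    patient_id: str
    last_a1c: float                 # most recent HbA1c (%)
    cgm_time_in_range: float        # fraction of CGM readings 70-180 mg/dL
    missed_visits_12mo: int         # no-shows in the last year
    has_social_needs_flag: bool     # e.g. food or housing insecurity screen

def stratify(p: PatientSnapshot) -> str:
    """Place a patient into low / middle / high tiers for proactive outreach."""
    if p.last_a1c >= 9.0 or p.cgm_time_in_range < 0.5:
        return "high"               # already on a high-risk pathway
    if p.last_a1c < 7.0 and p.cgm_time_in_range > 0.7 and not p.has_social_needs_flag:
        return "low"                # stable, routine follow-up is enough
    return "middle"                 # the broad group worth proactive care

# The "middle" tier is the group the panel described: neither low risk nor
# highest risk, and the one that benefits most from proactive outreach.
cohort = [
    PatientSnapshot("a1", 8.2, 0.62, 2, True),
    PatientSnapshot("a2", 6.4, 0.81, 0, False),
]
outreach_list = [p.patient_id for p in cohort if stratify(p) == "middle"]
print(outreach_list)  # -> ['a1']
```

Even in a sketch this simple, the hard part is visible: the fields only exist once clinical records, social needs data, and CGM data have been pulled out of their separate systems and joined per patient, which is exactly the infrastructure work the example highlighted.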
Data infrastructure and governance are not side topics
The panel also spent a lot of time on something that tends to be treated as boring compared to models: information infrastructure. Data quality, data coverage, resilience, security, interoperability, governance, and post-deployment monitoring all came up as prerequisites for scale. That emphasis was useful because it shifts the conversation from “can we pilot this?” to “can we run this responsibly?”
One speaker put it bluntly: many organizations have gotten better at controlling what comes into their environment, but still do not have a strong grip on how those AI tools perform once they are in production:
- Are they safe?
- Are they effective?
- Are they drifting?
- Are they biased?
- Are they delivering the ROI they promised?
- Are they compliant?
Those are not theoretical governance questions. They are operational ones. This is probably where the panel felt most mature: it treated post-deployment monitoring as part of what makes real adoption possible.
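As one concrete illustration of the “are they drifting?” question, here is a minimal sketch of a population stability index (PSI) check on a model's production scores. The data, thresholds, and helper function are hypothetical; it assumes you keep the score distribution from go-live and compare it against what the model is producing now, and a real monitoring program would also track safety, bias, and outcome metrics alongside drift.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """A common drift heuristic: compare the score distribution captured at
    deployment time (baseline) with the scores seen in production (current)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Small floor avoids division by zero / log(0) in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Commonly cited rule of thumb: PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)        # illustrative scores at go-live
production_scores = rng.beta(2.6, 5, size=5_000)    # illustrative scores this month
psi = population_stability_index(baseline_scores, production_scores)
print(f"PSI: {psi:.3f}")  # a value above ~0.25 would trigger a review
```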
Workflow redesign still matters more than the feature
One of the more memorable comparisons in the session was to the EHR era. The point was that healthcare often digitized existing workflows without doing enough to improve how the work itself was structured. That history matters here. AI will have the same problem if it is placed into workflows that already create unnecessary burden, fragmentation, or confusion.
That observation lands because so much of healthcare software has followed that path. Speed alone does not make a workflow better. Trust grows when the tool fits the environment, reduces friction people actually feel, and supports decisions in a way that feels reliable. The more meaningful opportunity is to connect broken steps, ease coordination, and remove low-value work across the care journey.
From experimentation to ownership
By the end of the session, the strongest signal was one of focus: the organizations most likely to see durable value from AI are those that choose narrower problems, take implementation seriously, and remain honest about the outcomes they want to improve. The conversation has clearly shifted from novelty for its own sake toward real operating value.
At Vinta, our perspective echoes this shift. We agree that AI is a "point of no return" in healthcare, but its effectiveness depends on the same fundamentals the panel discussed:
- Solving targeted problems: AI must be applied to specific needs, like scaling chronic disease protocols, rather than acting as a generic layer on top of a broken system.
- Bringing people to the table: Success requires a "big tent" of stakeholders, from clinical leaders to IT and frontline staff, to ensure the tool fits the culture.
- Workforce education and buy-in: We must educate the workforce not just on how to use the tool, but on the "why" behind it to ensure long-term adoption.
- Journey-first engineering: True progress comes from evaluating the entire patient and provider journey. Sometimes the most significant improvements come from using design thinking to fix a clunky workflow or reduce friction, and those improvements deliver immense value even before a single line of AI code is written.
Ultimately, the surrounding system (the data infrastructure, the governance, and the human workflow) decides whether an AI tool becomes truly useful. At Vinta, we help healthcare products turn these complex AI ideas into working software that doesn't just sit on a shelf but becomes a permanent, valuable part of the care journey.
