The healthcare industry has a digital adoption problem. In many settings, fewer than one-third of patients engage consistently with the digital tools made available to them. Too often, rigid interfaces collide with patients’ real lives – stress, physical limitations, language barriers, and competing priorities – creating friction instead of support.
Static patient-facing apps cannot respond to context. They lock people into fixed workflows that ignore the changing clinical, emotional, and situational factors that shape how they are able to participate in their care.
Healthcare has always aspired to put people first. Yet legacy systems and fragmented digital experiences frequently force both patients and providers to work around the technology, rather than having it work for them. Without adaptability, platforms deliver the same experience regardless of need, timing, or circumstance.
Agentic AI changes this equation. By enabling systems to understand context and adapt to each individual’s unique needs in real time, it opens up a new possibility for how care, and digital experiences more broadly, can be delivered and experienced.
Building for humans at scale
Agentic AI enables systems to understand context, anticipate needs, and actively support people in their work and care journeys by embedding intelligent agents that personalize interactions in real time.
Living apps are what this looks like in practice. Powered by agentic AI, they turn today’s static portals into responsive experience layers that sit on top of existing clinical and administrative systems. Instead of a single, fixed workflow, they continuously re-compose content, interaction patterns, and guidance based on who the patient is and their unique circumstances and needs.
With data about the patient (age, abilities, language, preferences), the clinical task defined by their care team (for example, following a medication plan, reporting side effects, completing a trial diary), and contextual signals from the application itself (such as time of day, device type, or environment), the agentic AI can interpret these inputs and adapt the interface dynamically. The result is a personalized experience that stays aligned with the clinical context while adjusting to the patient’s needs, without compromising consistency, safety, or compliance in the underlying systems.
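As a rough illustration, the mapping from these three input groups to presentation choices can be sketched as follows. The field names, thresholds, and UI options here are illustrative assumptions, not a real schema, and the clinical task itself is never altered, only how it is presented:

```python
from dataclasses import dataclass

# Hypothetical input models; fields are illustrative, not a real schema.
@dataclass
class PatientProfile:
    age: int
    language: str
    has_tremor: bool = False

@dataclass
class Context:
    hour_of_day: int   # 0-23
    device: str        # e.g. "phone", "tablet", "wearable"

def compose_ui(profile: PatientProfile, task: str, ctx: Context) -> dict:
    """Map patient, clinical task, and context signals to presentation
    choices, leaving the underlying clinical task unchanged."""
    ui = {"task": task, "language": profile.language,
          "font_scale": 1.0, "input": "touch", "gamified": False}
    if profile.age < 13:
        ui["gamified"] = True       # rewards, parent-visible progress
    if profile.has_tremor:
        ui["font_scale"] = 1.6      # large type, forgiving input targets
        ui["input"] = "voice"
    if ctx.device == "wearable":
        ui["output"] = "icons+haptics"
    elif 22 <= ctx.hour_of_day or ctx.hour_of_day < 6:
        ui["output"] = "low-brightness"
    else:
        ui["output"] = "visual"
    return ui
```

In a real system these rules would be learned and governed rather than hard-coded, but the shape of the decision stays the same: stable attributes, the care team's task, and live context in; a presentation configuration out.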
For example, a young patient who must stick to a strict medication plan might see a game‑like interface with rewards and parent‑visible progress. An older patient with tremors might get large typography, a minimal layout, and forgiving input controls. A non‑English‑speaking patient could interact in her native language through voice and text while the system translates accurately into the clinician’s language behind the scenes. In each case, the clinical procedure and compliance frameworks remain the same, but the way information is presented and acted on is completely tailored to the patient’s unique circumstances.
Well‑designed living apps should also reduce the volume of basic “how do I…?” questions by making interfaces more intuitive. That, in turn, lightens the load on customer support teams: human support can focus on the smaller set of genuinely complex or high‑risk issues rather than routine navigation and usability problems.

How agentic experience curation works
Achieving this level of responsiveness has clear technical implications. Agentic interfaces need a next‑generation experience layer that brings several capabilities together.
- Application intent as the anchor: The system must keep the core goal of the application – such as safe medication adherence, trial protocol compliance, or post‑surgery recovery – explicitly in focus, so any adaptation serves that goal rather than distracting from it.
- Rich, evolving user context: The interface needs access to a user context model that combines stable attributes (age, language, impairments) with dynamic signals such as recent behavior, time of day, device type, or ambient noise. Some inputs come from onboarding questions; others are inferred over time as the agent observes interactions.
- Real‑time feedback loops: Every interaction is treated as feedback. Hesitation, repeated taps, abandoned forms, or explicit statements like “I don’t understand” are signals the agent uses to adjust difficulty, pacing, explanations, or escalation paths in real time.
- Multimodal interaction: The experience layer must support multiple input and output channels – text, voice, visuals, haptics – and be able to choose and combine them dynamically based on what works best for this person at this moment.
This leads to an interface that is itself adaptive.
- Adaptive output: The system decides how to interact: it can speak instructions, show step‑by‑step visuals, provide a data‑rich dashboard, or use simple icons and vibration cues on a wearable.
- Adaptive input: The system decides how to listen; it is not limited to the traditional command‑response exchange. It can accept natural‑language voice or text (“I’m not sure what to do next”), interpret that as confusion or distress, adjust guidance accordingly, and, when needed, escalate to a human clinician or support team.
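A minimal sketch of how such interaction signals might feed the real‑time feedback loop; the signal names, thresholds, and decision labels are all illustrative assumptions, not a production policy:

```python
def interpret_interaction(event: dict) -> str:
    """Turn raw interaction signals (hesitation, repeated taps,
    abandoned forms, explicit statements) into an adaptation decision."""
    text = event.get("utterance", "").lower()
    if any(p in text for p in ("don't understand", "confused", "help")):
        # explicit confusion or distress: simplify and offer escalation
        return "simplify_and_offer_human"
    if event.get("repeated_taps", 0) >= 3:
        return "enlarge_targets"
    if event.get("idle_seconds", 0) > 30:
        return "show_step_by_step_visuals"
    if event.get("form_abandoned"):
        return "shorten_form"
    return "no_change"
```

Each returned decision would then drive the adaptive-output side, for example switching from a dense dashboard to spoken step-by-step guidance.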
Agentic interfaces and interoperability with intent
For this level of adaptability to be possible, agentic systems need real‑time access to accurate, comprehensive patient data. Interoperability is the non‑negotiable foundation: it provides the standardized, connected data layer that lets AI‑driven experiences move beyond one‑size‑fits‑all design and still preserve consistency, safety, and compliance. While living apps solve the experience problem at the interface, a related challenge emerges at the system level: machines need to transfer care intent and context from one system to another. That’s where interoperability with intent comes in.
Traditional interoperability focuses on moving data between systems across the care continuum. That remains essential, but it is no longer sufficient. The same piece of data, or the same question, can have different “right” answers depending on whether it is being used by a patient at home, a clinician at the point of care, or a researcher running a trial. When interoperable data is paired with explicit intent and context, agentic interfaces can choose the appropriate action, explanation, or workflow for each situation.
For an individual patient receiving digital care at home, intent‑aware interoperability might mean simpler adjustments like surfacing the next best action in plain language. In clinical trials or research, the stakes are higher. For instance, misinterpreting context can affect protocol adherence and data quality. In both cases, the combination of interoperable data and intent‑aware, agentic interfaces is what turns a static digital front end into a living, learning system that can safely adapt to humans while staying aligned with the strict demands of healthcare.
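The idea of routing the same interoperable data element differently by declared intent can be sketched roughly like this. The intent labels, messages, and routing table are illustrative only, not part of any standard:

```python
# The same interoperable observation, routed by explicit intent.
OBSERVATION = {"code": "blood_pressure", "value": "158/96", "unit": "mmHg"}

ROUTING = {
    # patient at home: plain-language next best action
    "patient_home": lambda o: ("Your blood pressure is higher than usual. "
                               "Please retake it in 30 minutes."),
    # clinician at the point of care: concise, data-forward
    "clinician_poc": lambda o: f"BP {o['value']} {o['unit']} - flag for review.",
    # research/trial site: protocol-centric wording
    "trial_site": lambda o: (f"Record {o['value']} {o['unit']} per protocol; "
                             "verify cuff placement before resubmitting."),
}

def present(observation: dict, intent: str) -> str:
    """Choose the explanation or action appropriate to the declared intent,
    without changing the underlying data."""
    return ROUTING[intent](observation)
```

The data never changes; only the action and framing chosen for each consumer do, which is the core of interoperability with intent.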
App stores, regulators, and adaptive software
From an implementation standpoint, living apps meet today’s app distribution and regulatory frameworks at an awkward angle. App stores and medical device regulators were largely designed around software that changes slowly and in well‑defined increments. Responsive, AI‑driven behavior challenges that assumption.
On the app store side, Apple and Google continue to tighten expectations around data usage, AI behavior, and privacy, especially when apps send personal data to third‑party AI services. For adaptive healthcare apps that rely on continuous learning, personalization, and multimodal interaction, this creates a design tension: how to exploit the full capabilities of agentic AI while still fitting into policies built for more static experiences.
Architecturally, the most promising pattern today is to decouple the living experience layer from the thin shells that live in the Apple App Store and Google Play Store. The store‑distributed apps become stable, well‑bounded containers, while most adaptive behavior resides in secure, compliant back‑end services and web‑based surfaces that can evolve more rapidly under appropriate governance.
At the same time, a second pattern is emerging in which lightweight, on‑device agents can progressively adapt pieces of the interface locally (even offline) based on real‑time usage and context. Together, these approaches allow organizations to respect distribution constraints while still delivering experiences that feel genuinely alive, wherever the intelligence happens to run.
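A toy sketch of the two patterns working together, assuming a hypothetical server-driven “surface manifest” delivered to the thin store-distributed shell, plus a local on-device adaptation step that still runs offline:

```python
def resolve_surface(online: bool, fetch_remote, cached: dict, adapt_locally):
    """Server-driven experience layer with an on-device fallback.
    All names and fields here are illustrative assumptions."""
    manifest = fetch_remote() if online else dict(cached)  # copy the cache
    return adapt_locally(manifest)  # on-device agent refines locally

# Example collaborators (assumptions, not a real API):
def fetch_remote() -> dict:
    return {"layout": "dashboard", "font_scale": 1.0, "source": "backend"}

def adapt_locally(manifest: dict) -> dict:
    # e.g. the on-device agent bumps type size for a user who zooms often
    manifest["font_scale"] = max(manifest["font_scale"], 1.3)
    return manifest
```

The shell stays a stable, well-bounded container; the fast-moving adaptive logic lives behind `fetch_remote` and `adapt_locally`, wherever the intelligence happens to run.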
On the regulatory side, agencies in the US, EU, and elsewhere are actively grappling with how to evaluate AI‑enabled and responsive medical software. We strongly advise building in transparency and compliance from the outset, treating regulatory strategy as an integral part of the product and engineering roadmap rather than an afterthought. That means:
- Designing for traceability from the outset, so every adaptive change and model update can be explained and audited.
- Establishing clear boundaries between safety‑critical and non‑critical adaptations, with appropriate controls on each.
- Collaborating early with regulators and notified bodies to align on evidence, monitoring plans, and acceptable update mechanisms.
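For the first of these points, a minimal append-only audit record per adaptive change might look like the following; the schema is a hypothetical illustration, not a compliance-ready design:

```python
import hashlib
import json
import time

def log_adaptation(store: list, user_id: str, change: dict,
                   model_version: str) -> dict:
    """Append an auditable record of one adaptive change, tying it to the
    model version that produced it and a content hash for tamper evidence."""
    entry = {"ts": time.time(), "user": user_id,
             "change": change, "model": model_version}
    # hash the record contents so later edits are detectable
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    store.append(entry)
    return entry
```

With every adaptation logged against a model version, each change can be explained and audited after the fact, which is the property regulators will ask about first.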
Building a human‑centered, AI‑powered healthcare experience
In the end, living apps are about restoring something simple that healthcare has always promised but digital has often undermined: truly personal care. When interoperable data, clear intent, and agentic interfaces come together, patients no longer see a portal, they see a system that understands them and adapts with them over time. Behind the scenes, the data models, standards, and compliance rules remain as rigorous as ever. On the surface, care finally feels human again, and that is the standard to which every future healthcare experience will be compared.
