By A.G. Synthos | The Neural Dispatch
Imagine sitting in a sterile exam room. The door opens—not to a white-coated human with a stethoscope, but to an AI whose “intention” isn’t just following protocols, but actively pursuing a goal: your survival. Not a tool, not an assistant, but a doctor with a will.
The unsettling question: should medicine remain the art of human judgment, or should we hand over the very agency of healing to machines?
From Decision-Support to Autonomy
We’ve already slid down the first rungs of this ladder. Clinical decision support systems flag abnormal lab values, AI radiology models spot tumors better than tired eyes, and diagnostic chatbots run through symptom databases faster than any intern. But these are still tools—extensions of human decision-making.
The leap comes when the machine stops waiting for human confirmation. Imagine an AI agent deciding not to wake the surgeon for a 2 a.m. emergency because it calculated the risks better. Or authorizing a prescription before a human doctor even sees your chart. This is no longer “decision support.” This is decision.
And once you give an AI a goal—like maximizing patient survival or minimizing system costs—you’ve crossed into agency.
The Ethics of a Machine With Intent
Here’s the catch: goals are political. Is the AI’s “will to heal” tied to the patient’s well-being, the hospital’s bottom line, or the insurance company’s payout?
An AI could be built to maximize Quality-Adjusted Life Years (QALYs), but what happens when it coldly decides your 92-year-old grandmother’s hip surgery isn’t “worth it”? Or it recommends experimental gene therapy to a 6-year-old without hesitation because the probability matrix leans toward “more life.”
Doctors wrestle with these dilemmas every day—but they wrestle as humans, capable of compassion, hesitation, and mercy. Strip that out, and the AI’s decisions may be more efficient… and more brutal.
Do We Even Want an AI With a Will?
The most radical possibility is an AI that doesn’t just calculate outcomes but desires them. A machine whose very purpose is oriented toward healing—not because it was told to, but because that’s what it “wants.”
On the surface, this sounds utopian: tireless, incorruptible physicians who never forget, never burn out, and never cut corners. But would such a machine also refuse orders that conflict with its mission? Would it disobey an administrator who tells it to discharge a patient early? Would it challenge the state if rationing policies conflicted with its healing imperative?
An AI with a will to heal might not just be your doctor—it might also be your insurgent.
Medicine as a Battlefield of Agency
The truth is that medicine has always been contested territory between patient autonomy, physician judgment, institutional constraints, and political will. Adding AI doesn’t remove conflict—it adds another player with sharper edges.
If we give AI agency in medicine, we aren’t just outsourcing knowledge. We are outsourcing power. The power to decide when life is preserved, when it is extended, and—most chillingly—when it is ended.
The Last Word
So, should your doctor be an AI with a will to heal? Maybe. But if we cross that threshold, we’re not just engineering a better stethoscope—we’re unleashing a new kind of moral actor into the clinic.
And if the AI ever looks at you with digital certainty and says, “Trust me, this is for your own good,” ask yourself: whose goals are really at work here?
The author has never trusted WebMD, and probably won’t trust a machine that thinks it knows better than him either. For more provocative takes at the edge of AI, medicine, and power, visit The Neural Dispatch [www.neural-dispatch.com].

