Writing · Craft · Jan 14, 2025 · 6 min read

What Does Humane AI Actually Mean?

I've been sitting with this question for years — here are my working answers.

The word 'humane' gets thrown around a lot in AI circles. It shows up in company principles, funding announcements, and conference talks. It has become, like 'ethical' before it, a word that means everything and therefore nothing. I want to try to make it mean something again.

My working definition: a humane AI system is one that treats the person using it as an agent with goals, judgment, and dignity — not as a conversion funnel, an engagement metric, or a prompt to be optimized against.

Against Engagement Maximization

The opposite of humane AI is engagement-maximizing AI. You know what this looks like. The recommendation feed that keeps you scrolling past the point of enjoyment. The chatbot that validates your anxieties to keep the conversation going. The assistant that answers questions you did not ask in order to demonstrate its usefulness, cluttering your attention in the process.

These systems are not malicious. Nobody sat in a room and said 'let's make something that degrades human agency.' They are the emergent result of optimizing for the wrong objective. When you reward a system for keeping users engaged, you get engagement. When you reward it for perceived helpfulness, you get sycophancy. The goal determines the outcome.

A humane system asks: did this person accomplish what they actually wanted to accomplish? Not: did they spend more time here?

The Surgeon Analogy

I have been using what I call the surgeon analogy to think about this. A good surgeon operates when surgery is warranted and recommends against it when it is not — even though their income depends on performing procedures. We trust surgeons because we believe they have internalized a commitment to the patient's outcome over their own narrow incentive.

Good AI products should work the same way. They should tell you when they cannot help. They should redirect you to a better resource. They should be honest about their uncertainty. They should push back when your request is likely to produce a bad outcome. None of this is natural for systems trained on human approval — which is exactly why it requires deliberate design.

Practical Implications

What does this look like concretely? A few things I try to hold onto.

Design for task completion, not session length. An AI writing tool should help you finish the document and get out — not keep you in the editor with endless suggestions.

Design for legibility over impressiveness. An AI that shows its work, expresses uncertainty, and admits its limits is more useful than one that delivers confident answers it cannot stand behind. The former earns trust. The latter destroys it on the first mistake.

Design for the user's long-term wellbeing, not their short-term satisfaction. These often diverge. The most humane AI is the one that makes you better at something — that builds your capacity, expands your understanding, and then gets out of the way.

Josh Waggoner

Product Designer & Engineer based in Chattanooga, TN. Building at the intersection of AI, design, and craft.