Identity Is Governed, Not Authenticated
A response to "What If Digital Security Was About Care, Not Control?"
Samara McIlroy recently asked a question that sounds simple but isn’t: what if cybersecurity were built around care rather than control?
She wasn’t just complaining about fingerprint scanners. She was pointing at something structural — that the current security model was built on a particular set of assumptions about people and trust, and those assumptions quietly exclude a lot of people while failing to protect anyone particularly well.
I think she’s right. And I think I know why.
The current model has a hidden premise.
Every password, every biometric, every “layer of protection” rests on the same foundational idea: identity is something you prove. You possess a secret (a password), a physical trait (a fingerprint), or a device (your phone). You present it. The system accepts or rejects you. Transaction complete.
This makes the user responsible for maintaining the proof. Lose the password, get locked out. Die, and your executor is locked out forever. Change — through transition, through trauma, through witness protection, through simply growing into a different person — and the system may not recognize you at all.
The model isn’t broken. It’s doing exactly what it was designed to do. The problem is what it was designed to do.
The question Samara is really asking.
Her “care-based” framing points at something deeper than aesthetics or inclusivity, though it’s both of those things too. She’s asking: what would security look like if it started from the observation that identity is relational?
You don’t exist in a vacuum. You are known — partially, imperfectly, from different angles — by many people across many contexts over time. Your friend from college knows things about you your coworkers don’t. Your doctor knows things your friends don’t. None of them has the complete picture. But together, their overlapping and sometimes contradictory knowledge of you is remarkably hard to fake.
That’s not a philosophical nicety. It’s a security primitive.
Imagine someone loses their phone, moves cities, and changes their name within a year.
In a credential system, identity collapses: passwords fail, biometrics mismatch, recovery channels disappear. The system concludes uncertainty means denial.
In a relational system, uncertainty is expected. Old signals weaken gradually while new ones accumulate. Identity doesn’t reset — it transitions.
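That transition can be sketched in a few lines of code. This is a toy illustration, not the formal framework: the half-life, the signal values, and the independent-combination rule are all assumptions made up for the example.

```python
import math

def signal_weight(strength, age_days, half_life_days=365.0):
    """Old signals weaken gradually: exponential decay by age.

    `half_life_days` is an illustrative parameter, not a value
    the framework prescribes.
    """
    return strength * 0.5 ** (age_days / half_life_days)

def identity_confidence(signals):
    """Combine signals as if independent: 1 - prod(1 - weight).

    Each signal is a (strength in [0, 1], age_days) pair.
    """
    miss = 1.0
    for strength, age in signals:
        miss *= 1.0 - signal_weight(strength, age)
    return 1.0 - miss

# A year after the phone loss, the move, and the name change:
old = [(0.9, 400), (0.8, 500)]   # pre-move relationships, decayed but alive
new = [(0.6, 30), (0.5, 10)]     # new city, new name, still accumulating
print(round(identity_confidence(old + new), 3))
```

The point of the sketch is the shape, not the numbers: the old signals have lost weight but still contribute, so confidence transitions smoothly instead of resetting to zero the way a failed password does.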
What a care-based system actually requires.
If identity is relational rather than credential-based, a few things follow necessarily:
Identity can’t be binary. No observer has complete information. Every observation is partial, noisy, and contextual. This means certainty is impossible — and any system that demands certainty is lying to itself. Identity becomes a probability that rises over time as evidence accumulates, not a gate that’s either open or closed.
Contradiction is normal, not exceptional. Your friend misremembers a date. Your GPS drifts. Someone gets your haircut wrong. These aren’t failures of the system — they’re the expected texture of imperfect observation. A well-designed system absorbs noise rather than treating every discrepancy as an alarm.
Change has to be expressible. People change jobs, cities, names, gender presentations, social circles, religions. A security model that can only recognize the you of five years ago isn’t protecting you — it’s imprisoning you. Legitimate transformation needs a pathway that doesn’t require you to either stay frozen or start over as a stranger.
You have to be a participant, not an object. A system that determines your identity entirely from external observation, without your input, isn’t care-based — it’s surveillance. You should be able to contribute evidence, dispute signals you know are wrong, and control which parts of your life are allowed to inform which decisions.
The system itself has to be accountable. Any mechanism powerful enough to determine identity is powerful enough to be corrupted. A care-based system isn’t just designed for users — it applies the same standards to itself. Evaluators have track records. Decisions are contestable. No component is beyond review.
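The first two requirements have a standard probabilistic reading: each observation performs a Bayesian update on P(same person), and a contradictory observation lowers the posterior without zeroing it. The likelihood numbers below are invented for illustration only.

```python
def bayes_update(prior, likelihood_if_same, likelihood_if_not):
    """One Bayesian update of P(same person) given an observation.

    The two likelihoods are illustrative, not calibrated values
    from any real system.
    """
    num = prior * likelihood_if_same
    return num / (num + (1.0 - prior) * likelihood_if_not)

p = 0.5                          # start uncertain, not denied
p = bayes_update(p, 0.9, 0.2)    # a friend recognizes the person
p = bayes_update(p, 0.85, 0.3)   # medical records line up
p = bayes_update(p, 0.3, 0.5)    # someone misremembers a date
p = bayes_update(p, 0.9, 0.1)    # device history matches
print(round(p, 3))
```

The third update is the interesting one: the misremembered date pulls the probability down, then further evidence pulls it back up. Contradiction is absorbed as noise, exactly as the second requirement demands, rather than tripping an alarm.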
Where these assumptions lead.
When you follow these requirements to their logical end, you don’t arrive at a better authentication system. You arrive at a governance architecture — something that looks a lot like constitutional democracy applied to identity.
Separation of powers. Due process. The right to appeal. Legitimacy that comes from continued voluntary participation rather than formal authority. The rule that no actor is beyond the process it administers.
This isn’t a coincidence. Constitutional systems and care-based security are both solutions to the same underlying problem: how do you make consequential determinations about people in a way that’s accurate, contestable, recoverable from error, and resistant to capture — without requiring perfect information or perfectly trustworthy actors?
The answer, it turns out, is the same in both cases. You don’t authenticate identity. You govern it.
So does this satisfy the ask?
I think it does — in principle. A system built on these foundations would center relationships over credentials, distribute responsibility across networks rather than dumping it on individuals, handle life transitions gracefully, and remain accountable to the people it serves.
The harder question is implementation. The formal version of this framework exists — it’s a Bayesian inference system over a constraint graph with evaluator plurality, contradiction grammar, authorized discontinuity pathways, and reflexive accountability built in. The implementation is complex. The principle is not.
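One small piece of that complexity can be made concrete: why evaluator plurality resists capture. If you aggregate evaluator confidences with a track-record-weighted median instead of a mean, one compromised evaluator can’t drag the decision far. This is a sketch under invented numbers, not the formal framework itself.

```python
from statistics import mean

def aggregate(confidences, track_records):
    """Weighted median of evaluator confidences, weighted by track record.

    All numbers used with this are illustrative assumptions.
    """
    pairs = sorted(zip(confidences, track_records))
    half = sum(track_records) / 2.0
    acc = 0.0
    for value, weight in pairs:
        acc += weight
        if acc >= half:
            return value
    return pairs[-1][0]

# Three evaluators roughly agree; one (perhaps captured) reports near zero.
confs   = [0.82, 0.79, 0.85, 0.05]
records = [0.90, 0.80, 0.70, 0.60]
print(aggregate(confs, records))   # stays near the honest cluster
print(mean(confs))                 # a plain mean is dragged down
```

A median is only one of many capture-resistant aggregators, but it shows the design principle: no single component, however confident, gets to decide alone.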
Security shouldn’t ask: can you prove who you are?
It should ask: does the story your life tells still hold together?
If identity is governed rather than authenticated, a system must answer new questions:
How do independent observers emerge without central authority?
How does evidence decay without erasing continuity?
How can disagreement strengthen identity instead of breaking it?
Those are engineering problems — not philosophical ones.
If identity is governed, then authentication was never the solution — it was only the first approximation.
The real question is what a governed identity system actually looks like when it runs.
That’s a question relationships are actually equipped to answer.
This piece grew out of a long conversation about whether care-based security was technically feasible. Spoiler: it is, but the destination was surprising. A more formal treatment — including the mathematical framework — is in progress.



