The question before us is not whether artificial intelligence is capable. It demonstrably is. The question is whether the institutions entrusting it with consequential decisions have constructed any legitimate framework for what happens when it is wrong — and we find that they have not.
In the past eighteen months, algorithmic systems have been used to determine bail conditions, deny medical coverage, allocate educational resources, and flag individuals for financial investigation. In each domain the justification has been identical: efficiency, consistency, the removal of human bias.
These are not illegitimate goals. The error is not in the aspiration. The error is in the assumption that an opaque system producing consistent outcomes is preferable to a transparent system producing imperfect ones. Consistency without accountability is not justice; it is the simulation of justice.
The institutions deploying these systems cannot, in most cases, explain how a given decision was reached. These systems cannot be cross-examined. They cannot be held to account in any meaningful sense. The humans nominally overseeing them lack either the technical understanding to interrogate them or the institutional incentive to do so.
We find this arrangement structurally unsound. Not because artificial intelligence is untrustworthy in principle, but because no system — human or artificial — should hold power over consequential decisions without a clear and enforceable mechanism of accountability. That mechanism does not currently exist. Until it does, the delegation of judgment is a delegation of responsibility — and that is precisely what no institution is permitted to do.
