Human-led governance for AI-supported decisions

Keeping legitimate human authority present where AI-supported processes move into consequence

Warrentor helps keep legitimate human authority structurally present where AI-supported analysis moves into real-world consequence.

Jurisdiction-neutral · Technology-agnostic · Product-agnostic

Built for high-consequence environments across sectors, vendors, models, and legal regimes.

Most AI governance is presented as a digital, model, or system-layer problem. That matters. But it is not the whole problem.

Who held identified and qualified human authority at the exact point where an AI-supported process became consequential?

That is the gap Warrentor addresses.

High-profile failures across government, law, and enforcement keep showing the same underlying pattern: system-led processes moving into consequence without meaningful human authority, review, or accountability.

UK Post Office Horizon

System-led outputs hardening into consequence while meaningful correction fails.

Dutch Childcare Benefits

Administrative systems driving severe harm beyond meaningful human oversight.

Australia’s Robodebt

Formal process coexisting with deep accountability failure.

Western Australian AI Seatbelt Cameras

Surface compliance still leaving practical gaps once automation reaches penalty and review.

Many readers first understand this as an “offline” issue. That instinct is useful. Warrentor is concerned with the human-led governance layer that must remain present, whether online or offline, when AI-supported processes move into action, penalty, refusal, approval, escalation, or other real-world consequence.

This is not about resisting AI. It is about ensuring that AI remains decision support, not decision substitution.

Warrentor is being developed as a licensable governance offering supported by publications, companion material, training, implementation guidance, bridge documents, and benchmarking tools.

Visual mark
Warrentor hand symbol

Human-led AI

A strong visual reminder that AI may assist, but legitimate authority must remain human where consequence begins.

Pilot implementations

Warrentor is currently open to carefully selected pilot implementation discussions with suitable organisations operating in high-consequence decision environments.

Make contact

About

Warrentor is focused on one of the most difficult questions emerging in AI governance: how legitimate human authority remains structurally present when AI-supported processes move from analysis into consequence.

What Warrentor is

The work is built around a governance approach designed for high-consequence environments, where accountability, decision control, and authority cannot be left to assumption, convenience, or system output.

Warrentor is jurisdiction-neutral, technology-agnostic, and product-agnostic. It is designed to operate alongside existing laws, standards, platforms, vendors, models, and sector-specific controls.

This is not a software pitch in the ordinary sense. Nor is it another generic responsible AI framework. Warrentor addresses the human-led governance layer required where real-world outcomes begin.

Commercial direction

The work is being developed as a licensable governance offering, supported by companion material, training, implementation guidance, bridge documents, and benchmarking tools.

The goal is practical deployment in environments where decision integrity, review, and meaningful human authority matter.

Publications and supporting material

  • The AI Non-Delegation Doctrine
  • SSRN publications supporting the doctrine and its scholarly positioning
  • A detailed Companion Book explaining the doctrine and its practical application
  • Bridge documents to the EU AI Act and the US HHS / OMB M-25-21 compliance architectures
  • A Capstone Training Suite for applied understanding and deployment
  • Implementation guidance and supporting material

Additional development

  • The ADBC benchmarking tool, currently under development
  • Commercial development through licensing, training, and pilot implementation

The Problem

Many organisations now recognise the need for AI governance. Most responses focus on model controls, transparency, policy, oversight, documentation, auditability, or compliance settings. Those things matter. But they often leave a more difficult problem unresolved.

The harder questions

When an AI-supported process moves into approval, refusal, escalation, penalty, recommendation, ranking, or other real-world consequence, it is not enough to ask whether the system performed as designed.

  • Who actually held authority at that point?
  • Was that authority identified?
  • Was that person qualified?
  • Could they intervene in a meaningful way?
  • Or had authority already drifted into process, policy, automation, or system output?

Why the gap is missed

AI governance is usually framed as a problem of systems, models, and compliance artefacts. Less attention is given to what happens when consequential authority becomes structurally hollow while still appearing to remain human.

Different systems, different jurisdictions, same recurring weakness: consequence arrives, but meaningful human authority is thin, absent, nominal, or too late.

Verification and disclosure are important. They do not by themselves guarantee that legitimate human authority remains present at the exact point where AI-assisted work becomes consequential.

The Solution

Warrentor addresses the governance layer that must remain intact when AI-supported analysis moves into real-world consequence.

Purpose

To ensure that AI remains decision support rather than decision substitution.

What Warrentor preserves

  • identified and qualified human authority
  • decision control at the point of consequence
  • accountability that remains real under pressure
  • governance that continues to function when AI is embedded into workflow

What this means in practice

Warrentor makes explicit:

  • where the consequence boundary actually sits
  • who must hold authority at that boundary
  • how that authority is preserved and evidenced
  • how escalation, review, refusal, and intervention remain meaningful
  • how apparent compliance can still fail if authority has already dissolved
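The checklist above can be made concrete as a record captured at the consequence boundary. The sketch below is purely illustrative and is not part of Warrentor itself: the class, field names, and example values (`AuthorityRecord`, `could_intervene`, the case officer, and so on) are all hypothetical, intended only to show what "preserved and evidenced" human authority might look like as data.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuthorityRecord:
    """Hypothetical record of human authority at the point an
    AI-supported process becomes consequential."""
    decision_id: str
    authority_holder: str   # identified human decision-maker
    qualification: str      # basis of their authority
    ai_recommendation: str  # what the system proposed
    human_action: str       # accept / reject / escalate / amend
    could_intervene: bool   # was meaningful intervention possible?
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_substantive(self) -> bool:
        # Authority is only meaningful if the human could intervene
        # and did not simply pass through the system output.
        return self.could_intervene and self.human_action != "auto-accepted"

record = AuthorityRecord(
    decision_id="BEN-2024-0001",
    authority_holder="J. Citizen, Senior Case Officer",
    qualification="Delegated decision authority under agency policy",
    ai_recommendation="refuse benefit",
    human_action="escalate",
    could_intervene=True,
)
print(record.is_substantive())  # True: intervention was possible and exercised
```

The point of the sketch is the negative case: a record where `could_intervene` is false, or where every action is an automatic pass-through, evidences exactly the hollow authority the four case studies above describe.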

Designed to work alongside existing systems

Warrentor is designed to operate alongside existing laws, standards, organisational controls, and technical systems. It is not dependent on one vendor, one jurisdiction, one model type, or one software stack.

Intended environments

Warrentor is intended for use in high-consequence environments, including government, defence, legal, financial, infrastructure, utilities, healthcare, and other settings where decision integrity matters.

This is not about slowing AI down for its own sake. It is about keeping human authority structurally present where consequence begins.

Contact

For enquiries regarding governance, collaboration, speaking, pilot implementation, or commercial discussions, please make contact.

Direct contact

Name: Frank Schouten

Business: Warrentor

Phone: +61 (0)427 383 548

Email: frank@warrentor.com

Website: www.warrentor.com

Pilot implementations

Warrentor is open to carefully selected pilot implementation discussions in suitable high-consequence environments.

The focus is on practical deployment where decision integrity, accountability, and meaningful human authority matter.