Proof-of-Human (PoH) Trust Model

What is PoH in our system?

Proof-of-Human is a software-only identity verification layer. Unlike Worldcoin, which requires a physical Orb device for iris scanning, our PoH system combines signals from multiple identity providers into a single composite confidence score. The core thesis: you can achieve strong-enough human verification without specialized hardware, making it accessible to anyone with a smartphone.

Each provider contributes a weighted signal. No single provider is definitive on its own -- the composite score reflects how many independent signals converge on "this is a real, unique human."


Provider Trust Tiers

Tier         | Provider               | Weight | Confidence Range | Notes
T1 (Strong)  | Apple App Attest       | 0.70   | 0.65-0.80        | Hardware-rooted device attestation; hard to spoof at scale
T1 (Strong)  | NFC Passport           | 0.65   | 0.60-0.75        | Government-issued document via NFC chip; strong but requires physical passport
T2 (Medium)  | Google Play Integrity  | 0.40   | 0.30-0.50        | Device integrity check; weaker than Apple's equivalent
T2 (Medium)  | World ID (as input)    | 0.45   | 0.35-0.55        | Worldcoin's own proof consumed as one signal among many
T3 (Weak)    | Social Graph           | 0.15   | 0.10-0.25        | Network analysis of social connections; easy to fabricate
T3 (Weak)    | Email/Phone            | 0.10   | 0.05-0.15        | Basic ownership proof; trivially farmed

Why these weights? T1 providers are anchored in hardware or government-issued credentials that are expensive to forge at scale. T2 providers offer moderate assurance but have known bypass vectors. T3 providers are supplementary signals only -- they should never be the sole basis for trust.
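The document does not pin down the exact rule for combining provider weights into the composite score. One plausible sketch, assuming a noisy-OR combination (independent signals each reduce residual doubt, so confidence rises without any single provider reaching 1.0); the provider key names here are illustrative, not a real API:

```python
# Provider weights from the tier table above.
PROVIDER_WEIGHTS = {
    "apple_app_attest": 0.70,       # T1
    "nfc_passport": 0.65,           # T1
    "google_play_integrity": 0.40,  # T2
    "world_id": 0.45,               # T2
    "social_graph": 0.15,           # T3
    "email_phone": 0.10,            # T3
}

def composite_poh_score(verified_providers):
    """Noisy-OR combination: 1 - product of (1 - weight_i).

    Assumption: this combination rule is a sketch, not the system's
    confirmed formula.
    """
    remaining_doubt = 1.0
    for provider in verified_providers:
        remaining_doubt *= 1.0 - PROVIDER_WEIGHTS[provider]
    return 1.0 - remaining_doubt

# Two T1 signals: 1 - (0.30 * 0.35) = 0.895, consistent with the ~0.90
# composite for App Attest + NFC Passport in the example scenario.
score = composite_poh_score(["apple_app_attest", "nfc_passport"])
```

Under this rule, stacking several weak T3 signals still converges slowly (email + social gives only 1 - 0.90 x 0.85 ≈ 0.235), which matches the intent that T3 providers never suffice on their own.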


How PoH feeds into RMT Trust Scoring

Current state: PoH and RMT are independent systems. PoH answers "is this a real human?" while RMT answers "does this agent behave reliably?" They do not talk to each other.

Proposed integration: An agent's RMT behavioral score gets modulated by the PoH verification level of the human who delegated authority to that agent.

Example scenario:
- Agent has an RMT behavioral score of 0.85 (earned through consistent, reliable actions)
- The human who delegated to that agent has a PoH composite score of 0.90 (T1 verified via Apple App Attest + NFC Passport)
- The combined trust signal is stronger than either alone: you know the agent behaves well AND that it was authorized by a verified human

Without PoH linkage, a Sybil operator could spin up 1,000 agents, each building RMT score independently, with no way to trace them back to real humans.


The Key Decision: Integration Architecture

Option A: Independent (current state)

PoH score and RMT score remain separate values. Consumers query both and combine them however they see fit.

Option B: Weighted Integration

final_trust = (rmt_weight x rmt_score) + (poh_weight x poh_score)

Suggested split: 70% RMT behavioral, 30% PoH identity.
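Applied to the example scenario's numbers (RMT 0.85, PoH 0.90), the suggested split works out as a one-liner; the function name here is illustrative:

```python
def weighted_trust(rmt_score, poh_score, rmt_weight=0.7, poh_weight=0.3):
    """Option B: linear blend of behavioral and identity scores."""
    return rmt_weight * rmt_score + poh_weight * poh_score

# 0.7 * 0.85 + 0.3 * 0.90 = 0.595 + 0.27 = 0.865
blended = weighted_trust(0.85, 0.90)
```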

Option C: PoH as Gate (Recommended)

PoH verification sets a ceiling on achievable RMT trust. The behavioral score operates freely within that ceiling.

Verification Level                        | Max Trust Score
Unverified (no PoH)                       | Capped at 0.50
T3 verified (Social/Email only)           | Capped at 0.70
T2 verified (Play Integrity / World ID)   | Capped at 0.85
T1 verified (App Attest / NFC Passport)   | Uncapped (full RMT score applies)
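The gate reduces to a min() over the tier's ceiling. A minimal sketch, with tier keys chosen here for illustration:

```python
# Caps from the tier table above; T1 is uncapped, modeled as 1.0.
TIER_CAPS = {
    "unverified": 0.50,
    "t3": 0.70,
    "t2": 0.85,
    "t1": 1.00,
}

def gated_trust(rmt_score, poh_tier):
    """Option C: the behavioral score operates freely below the tier ceiling."""
    return min(rmt_score, TIER_CAPS[poh_tier])

# An agent with RMT 0.85 delegated by a T2-verified human keeps 0.85;
# the same agent with no PoH delegation is capped at 0.50.
```

Note the asymmetry with Option B: PoH never raises the score, it only limits how much earned behavioral trust counts.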

Delegation Tree Mechanics

Delegation is how a verified human extends their trust to agents. The tree structure prevents unbounded trust propagation.

Human (PoH verified)
  |
  +-- Primary Agent (Level 1, inherits ~85% of delegation trust)
        |
        +-- Sub-Agent (Level 2, inherits ~72% of original)
              |
              +-- Leaf Agent (Level 3, inherits ~61% of original)
                    |
                    +-- [BLOCKED - max depth reached]

Parameters:
- Max depth: 4 levels (Human -> Agent -> Sub-Agent -> Leaf)
- Trust decay per level: ~15% reduction (multiplicative, so 0.85^n)
- Permission scoping: 32-bit bitmask. Each delegation level can only narrow permissions, never widen.
- Revocation: Eager propagation. When a human revokes an agent, all sub-agents in that branch are immediately invalidated.
- TTL: 30 days default. Delegations expire and must be explicitly renewed. This prevents stale trust from persisting after a human loses interest or changes their security posture.

Why 15% decay? At 15% per level, a leaf agent (depth 3) retains ~61% of the root delegation trust. This is enough to be useful but creates meaningful cost for deep delegation chains. Sybil operators who try to create wide-and-deep trees pay a compounding penalty.
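The tree mechanics above can be sketched in a few lines. Everything here is illustrative (class name, fields, a 4-bit example mask standing in for the 32-bit one), not the real implementation; it shows the 0.85^n decay, permission narrowing via bitwise AND, the depth limit, and eager revocation:

```python
MAX_DEPTH = 4   # Human -> Agent -> Sub-Agent -> Leaf
DECAY = 0.85    # ~15% trust reduction per level

class Delegation:
    """Hypothetical node in a delegation tree."""

    def __init__(self, requested_permissions, parent=None):
        if parent is not None and parent.depth + 1 >= MAX_DEPTH:
            raise ValueError("max delegation depth reached")
        self.parent = parent
        self.children = []
        self.revoked = False
        self.depth = 0 if parent is None else parent.depth + 1
        if parent is None:
            # Root (human): mask clamped to 32 bits.
            self.permissions = requested_permissions & 0xFFFFFFFF
        else:
            # A child can only narrow its parent's permissions, never widen.
            self.permissions = requested_permissions & parent.permissions
            parent.children.append(self)

    def trust_factor(self):
        """Fraction of root delegation trust retained at this depth."""
        return DECAY ** self.depth

    def revoke(self):
        """Eager propagation: invalidate this entire branch immediately."""
        self.revoked = True
        for child in self.children:
            child.revoke()

human = Delegation(0b1111)                # depth 0, full example mask
agent = Delegation(0b0111, parent=human)  # depth 1
sub = Delegation(0b0011, parent=agent)    # depth 2
leaf = Delegation(0b1111, parent=sub)     # depth 3; asked wide, gets 0b0011
# leaf.trust_factor() == 0.85 ** 3, i.e. ~61% of root trust
```

Attempting a fifth level (a child of `leaf`) raises, matching the [BLOCKED] branch in the diagram, and `human.revoke()` invalidates agent, sub, and leaf in one pass.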


Decisions — LOCKED (2026-03-29)

Integration Model: All Options, Option C as Default

Decision: Expose all three scores. Option C (PoH as Gate) is the default/recommended integration. Options A and B are available for consumers who want raw scores or custom weighting.

The system returns:
- rmt_score — raw behavioral trust (Option A)
- poh_score — raw human verification confidence (Option A)
- gated_trust — RMT score capped by PoH tier (Option C, default)

Consumers who want Option B can compute (0.7 * rmt_score) + (0.3 * poh_score) client-side.

Tier Caps (Option C)

Verification Level                              | Max Trust Score | Use Case
No PoH (unverified)                             | Capped at 0.50  | Autonomous bots, testing, open-source agents
T3 verified (Social/Email)                      | Capped at 0.70  | Lightweight verification
T2 verified (Google Play / World ID)            | Capped at 0.85  | Medium trust
T1 verified (Apple App Attest / NFC Passport)   | Uncapped        | Highest trust, full behavioral score

Agents Without PoH: YES, Allowed

Decision: Agents with no PoH delegation can participate. They are capped at 0.50, not blocked. This allows legitimate use cases (testing, open-source bots, autonomous agents without human backing) while limiting their trust ceiling. Blocking them entirely would be too restrictive.

Trust Decay: 15% per level (confirmed)

Multiplicative decay: trust at delegation depth n is scaled by 0.85^n. A leaf agent at depth 3 retains ~61% of root trust.