Why AI Agents Need a Trust Layer

1.1 The Trust Dilemma for AI Agents

In traditional e-commerce, consumers judge trustworthiness themselves. They evaluate brand reputation, customer reviews, website design, and payment options. These are signals that human intuition handles well. AI agents have no intuition. They need machine-readable, quantifiable trust signals. Consider a decision an AI agent might face:
Consumer: "Buy me those shoes from cheapshoes-deal.xyz"

AI agent's analysis:
  - This domain was registered 3 days ago
  - No SSL certificate
  - No corporate registration on record
  - Privacy policy is a copy-pasted template
  - No return policy

  -> This is very likely a fraudulent site. Should I refuse, or warn the consumer?

If an AI agent recommends a fraudulent merchant, the consumer loses money — and loses trust in the AI. For AI platforms, this is an existential threat. AI agents therefore need an independent, automated, and tamper-resistant trust assessment system.
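
The decision in the example above can be sketched as a toy rule-based filter. The signal names, thresholds, and three-way outcome below are illustrative assumptions for this chapter, not part of any OTR specification:

```python
from dataclasses import dataclass

@dataclass
class MerchantSignals:
    """Hypothetical bundle of the signals from the example above."""
    domain_age_days: int
    has_valid_ssl: bool
    has_corporate_registration: bool
    has_return_policy: bool

def assess(signals: MerchantSignals) -> str:
    """Count red flags and map them to a coarse decision.

    Thresholds are arbitrary choices made for illustration.
    """
    red_flags = 0
    if signals.domain_age_days < 30:          # very recently registered
        red_flags += 1
    if not signals.has_valid_ssl:
        red_flags += 1
    if not signals.has_corporate_registration:
        red_flags += 1
    if not signals.has_return_policy:
        red_flags += 1

    if red_flags >= 3:
        return "refuse"
    if red_flags >= 1:
        return "warn"
    return "proceed"

# The cheapshoes-deal.xyz scenario: 3-day-old domain, no SSL,
# no corporate registration, no return policy.
print(assess(MerchantSignals(3, False, False, False)))  # refuse
```

Even this toy version makes the real problem visible: each check needs reliable, machine-readable input, which is exactly what the rest of this chapter argues is missing today.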

1.2 Why Existing Trust Mechanisms Fall Short

| Mechanism | What It Does | Why It Is Not Enough |
| --- | --- | --- |
| SSL Certificates | Prove that communication is encrypted | Fraudulent sites can obtain free SSL certificates. SSL only proves “the connection is secure,” not “the merchant is trustworthy” |
| Google Safe Browsing | Detects malicious websites (phishing, malware) | Only flags “bad” sites — does not evaluate “how good” a site is. A legitimate but low-quality merchant will not be flagged |
| BBB / Trustpilot | Human reviews and complaints | Data is not machine-readable. Reviews can be manipulated. Coverage is limited. AI agents cannot call these services directly |
| Domain Age | How long the domain has been registered | Old domains are not necessarily trustworthy. New domains are not necessarily untrustworthy. A single signal is insufficient |
| PCI DSS | Payment security compliance | Only covers the payment layer. Does not assess product quality, policy transparency, or corporate identity |
The core problem: No existing system evaluates a merchant’s overall trustworthiness across multiple dimensions, automatically, based on public data, and delivers results in a machine-readable format that AI agents can consume. This is the problem OTR was built to solve.

1.3 OTR’s Design Philosophy

OTR (Open Trust Registry) is founded on four principles:

Principle 1: Publicly Verifiable Data

OTR computes trust scores exclusively from publicly available data sources. SSL certificates are public. DNS records are public. GLEIF registration data is public. A website’s privacy policy is public. Schema.org markup is public. OTR does not use any private data that requires a merchant to self-report (unless the merchant voluntarily authorizes F-dimension verification). This means anyone can independently verify the basis for any OTR score.

Principle 2: Multi-Dimensional Assessment

A single dimension is not enough. Good SSL does not mean good policies. Corporate registration does not guarantee data quality. OTR uses six dimensions: V (Verification), S (Security), G (Governance), T (Transparency), D (Data Quality), and F (Fulfillment). Each dimension is composed of multiple signals, weighted and aggregated.
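
A minimal sketch of this kind of weighted aggregation, assuming illustrative weights and 0–100 dimension scores (the real OTR weights, signal composition, and scales may differ):

```python
# Illustrative weights over the six dimensions; OTR's actual
# weights are defined by the protocol, not by this example.
WEIGHTS = {"V": 0.25, "S": 0.20, "G": 0.15, "T": 0.15, "D": 0.15, "F": 0.10}

def aggregate(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores (each assumed 0-100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

scores = {"V": 90, "S": 80, "G": 70, "T": 85, "D": 75, "F": 60}
print(aggregate(scores))
```

The structure matters more than the numbers: a weak dimension (here F) drags the total down but cannot be hidden by a strong one, which is why a single-signal mechanism like domain age is insufficient.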

Principle 3: Scores Cannot Be Purchased

This is OTR’s most important principle. Whether or not you buy any ORBEXA product or service has zero effect on your trust score. Scores are determined solely by data. If ORBEXA allowed pay-to-boost scoring, the entire system’s foundation of trust would collapse. AI agents will not trust a system where scores can be bought.

Principle 4: Open Protocol

OTR is an open protocol. The scoring logic is transparent. The API is public. Any AI agent can call it. Open-source repository: github.com/yb48666-ctrl/OTR-Protocol-by-orbexa
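
As a rough illustration of what machine-readable consumption looks like, the sketch below parses a hypothetical response body. The field names and shape are assumptions made for this example, not the actual OTR API schema (consult the repository for the real one):

```python
import json

def parse_otr_response(body: str) -> tuple[str, int]:
    """Extract (merchant, overall score) from a JSON response body.

    The "merchant"/"score"/"dimensions" fields are a hypothetical
    shape, not the documented OTR schema.
    """
    data = json.loads(body)
    return data["merchant"], data["score"]

# Hypothetical response an AI agent might receive.
raw = ('{"merchant": "example.com", "score": 82, '
       '"dimensions": {"V": 90, "S": 80, "G": 70, '
       '"T": 85, "D": 75, "F": 60}}')

merchant, score = parse_otr_response(raw)
print(merchant, score)  # example.com 82
```

The contrast with the BBB/Trustpilot row in the table above is deliberate: a structured response like this is something an agent can act on in one call, with no human interpretation step.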

Next Chapter

Chapter 2: OTR Architecture Overview — Four-layer architecture: Signal Collection, Scoring Engine, ID Generation, Verification API