How this works
Goblin House is not asking you to trust it. It is asking you to verify it. This page explains, in three layers of depth, exactly how that verification works.
If you're new here
This site tracks people and companies whose public statements don't match their public records. Every claim on this site has a primary source URL — usually a court filing, an SEC document, a sustainability report, an annual letter, or a government database. Nothing is anonymous. Nothing is rumored. If we can't show you the receipt, we don't publish the claim.
What makes the platform unusual is that the receipt itself is independently checkable. Every source we cite is hashed when we read it; the hash is published; and any reader, journalist, or court can verify that the source we quoted today still says what we claimed it says.
What's different from "investigative journalism" or "fact-check sites"
→ Self-disclosure only, where it matters
When we say a company is missing its emissions target, we cite that company's own annual sustainability report — not a critic's analysis of it. When we say a CEO contradicted themselves, we cite both statements from sources they themselves chose to publish. This is the strongest possible evidentiary stance: there is no third party we're asking you to trust, because there is no third party in the loop. The contradictions are between the subject and their own past self. See Promise vs. Footprint for the cleanest expression of this.
→ Independent verification by a second AI
The platform uses AI to extract facts from sources at scale — that's how it covers 5,000+ entities. But AI hallucinates. The worst failure mode is a fabricated fact paired with a real-looking source URL — a failure the model that wrote it cannot detect. So every extracted fact is independently re-checked by a different model with different training data. The verifier reads the source the original model saw and answers one question: does the source actually contain text that supports this fact? Any disagreement flags the fact for human review and reduces its display weight. This pattern (DeepSeek extracts, Gemini verifies) catches the failure mode that defeats most AI-assisted journalism.
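The flag-and-downweight logic can be sketched as plain code. This is an illustrative reduction, not the platform's implementation: the field names, the status labels, and the display weights are assumptions, and the verifier's actual judgment (a second LLM reading the source) is stood in for here by a boolean verdict.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    claim: str            # the extracted claim
    source_text: str      # full text of the cited source
    extractor_quote: str  # span the extractor says supports the claim

def resolve(fact: Fact, verifier_says_supported: bool) -> dict:
    """Combine the extractor's output with an independent verifier's verdict.
    Weights and status labels are illustrative assumptions."""
    quote_present = fact.extractor_quote in fact.source_text
    if quote_present and verifier_says_supported:
        return {"status": "verified", "display_weight": 1.0}
    # Any disagreement — fabricated quote or verifier dissent — flags the
    # fact for human review at reduced display weight.
    return {"status": "flagged", "display_weight": 0.25}
```

The key design property is that agreement is required from two checks with uncorrelated failure modes: a mechanical quote-presence test and a semantic verdict from a model that did not produce the extraction.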
→ Chain of custody for every source
When the platform reads a web page or document, the URL and full text are hashed with SHA-256 and the hash is timestamped. The hash is public. Anyone, at any later date, can compare the current page against the platform's hash and prove whether the source has been edited since we read it. Every fact in the database carries the hash of the source it came from. This is the audit trail that traditional journalism does not have: when a story is later disputed, the question "did the source really say that on the day you cited it" has a forensic answer rather than a he-said-she-said.
→ Daily corpus attestation
Every day, all source hashes from that day are combined into a single root hash (a Merkle root — the same construction Bitcoin uses to bind transactions to blocks). That root hash is published at /attestation and timestamped. Two consequences: (1) we cannot retroactively change history without it being detectable, and (2) external mirrors can verify their copy of the corpus matches ours by comparing one number. The system is engineered to survive a hostile takedown — not just legally, but cryptographically.
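The daily fold into a single root works by repeatedly pairing and hashing. A sketch of the construction, assuming single SHA-256 over concatenated children and Bitcoin's convention of duplicating an odd last node (Bitcoin itself uses double SHA-256; the platform's exact parameters aren't specified here):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes: list) -> bytes:
    """Fold a day's source hashes into one root: pair adjacent nodes,
    hash each pair, repeat until one hash remains."""
    if not leaf_hashes:
        raise ValueError("no leaves")
    level = leaf_hashes
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last node
            level = level + [level[-1]]
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any single source hash changes the root, which is why publishing one number per day is enough to make retroactive edits to the whole corpus detectable.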
For journalists, researchers, and skeptics
If you're considering citing this platform, every claim is auditable end-to-end:
- Each fact links to its primary source. Click through and read it yourself. We don't ask you to take our word.
- Each source has a content hash. Find it via the source's receipt page. The hash proves the page hasn't been altered since we read it. If the page is later edited, the hash mismatch is the proof.
- Each fact has a verification badge. Look for ✓ verified (an independent LLM confirmed the source supports the fact) or ⚠ flagged (the verifier disagreed and a human is reviewing). Unverified facts are labeled.
- The daily corpus root is public. See /attestation. The root binds every source the platform held that day into a single fingerprint.
- External mirrors are welcome. The Federation API exposes the corpus + attestation so anyone (Internet Archive, university libraries, researchers) can run an independent copy.
Deeper documentation:
- Promise vs. Footprint methodology — baseline assumptions (2024 US suburban household) + the conversion math
- Corpus attestation — Merkle root construction, signing, and audit procedure
- Federation API — for mirrors and downstream researchers
- Platform health — live status of the verification pipeline, ingestion, deploy version
What we don't do
We don't do original reporting. No interviews. No FOIA requests. No on-the-ground work. The platform's contribution is at the synthesis and verification layer: aggregating what's already in public records, mapping the connections between them, and surfacing the contradictions that get buried because no individual journalist has the time to read every annual report.
We don't use anonymous sources. We don't publish anything we can't link to a primary record. We don't accept advertiser money or platform-affected-party donations. The goblin-parody framing is satire on top of source-anchored facts — the receipts are always real, even when the names are mangled.
Corrections
If a claim on this site is wrong, the verifier's job is to catch it before publication. If an error gets past the verifier anyway, it goes into the public corrections log. Every correction is timestamped and the original fact is preserved alongside it — both the error and its repair stay on the public record.
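The preserve-the-error property amounts to an append-only log. A minimal sketch, with illustrative field names (not the platform's schema):

```python
from datetime import datetime, timezone

def record_correction(log: list, fact_id: str,
                      original: str, corrected: str) -> None:
    """Append a correction without overwriting anything: the erroneous
    claim stays on the record next to its repair."""
    log.append({
        "fact_id": fact_id,
        "original": original,       # preserved, never deleted
        "corrected": corrected,
        "corrected_at": datetime.now(timezone.utc).isoformat(),
    })
```

Because entries are only ever appended, the same daily attestation that binds sources also makes silently rewriting a past correction detectable.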
Built so the receipts survive the lawsuit, the takedown, and the news cycle.