About Ensanid

Restoring trust in human knowledge is the defining challenge of the AI era. Ensanid is a foundational infrastructure built to meet that challenge by providing a universal standard for verifiable human accountability.


The Crisis of Trust

The integrity of the global research ecosystem is under unprecedented threat from a sophisticated shadow economy built on identity fraud. This is not a theoretical problem; it is a clear and present danger corrupting the scholarly record and eroding public trust in science. The evidence includes:

  • Industrial-Scale Paper Mills: Investigations by publishers and academic watchdogs have uncovered thousands of fraudulent papers attributed to "synthetic researchers." These operations fabricate data and sell authorship slots to individuals seeking credentials, creating a vast body of demonstrably fraudulent research.
  • Peer-Review Rings: Fraudulent actors create networks of fake expert identities to manipulate the peer-review process, either guaranteeing publication for their own low-quality work or sabotaging competitors. This compromises the core quality-control mechanism of scholarly communication.
  • AI-Generated Fraud: The advent of powerful generative AI has armed malicious actors with the ability to produce plausible but entirely fabricated text, data, and even researcher personas at a scale and speed that legacy verification systems cannot handle.
  • Reputational Hijacking: Established researchers increasingly face the risk of having their identities impersonated to lend false credibility to fraudulent grant applications, commercial products, or disinformation campaigns.

This crisis creates unacceptable risks for all parties: institutions waste funding on fraudulent R&D, insurers cannot accurately price liability for research misconduct, and the public loses faith in the scientific enterprise.

Ensanid is the Universal Standard for Accountability

Ensanid addresses this crisis by re-establishing a clear, enforceable, and privacy-preserving link between digital content and a verified human researcher. It is not a social network or a profile system; it is a stateless, cryptographic utility designed to serve as a universal layer of trust.

It works by providing a single, unambiguous signal of human accountability that satisfies the core concerns of every stakeholder:

  • For Institutions & Funders: Ensanid offers an auditable, cryptographic assurance that resources are being directed to legitimate human researchers, drastically reducing exposure to fraud and reputational damage.
  • For Publishers & Platforms: It provides a simple, low-friction method to verify human authorship at the point of submission, protecting the integrity of their publications and platforms from synthetic content and paper mills.
  • For Researchers: It empowers them to prove their humanity and take accountability for their work, including AI-assisted content, without exposing their personal identity or credentials and without being subjected to invasive surveillance.
  • For AI Companies: It provides a vital framework for the ethical and responsible deployment of their technologies in sensitive fields, ensuring a human expert remains legally and editorially accountable for all outputs.

The Cutoff Year for ORCID Records

Eligibility for an Ensanid passport is currently anchored to a verifiable scholarly record of five or more publications authored before 1 January 2020. This is not an arbitrary date; it is an evidence-based decision designed to establish a foundational network of trust rooted in a pre-generative AI baseline.

The logic is simple: the widespread public release and subsequent explosion of powerful large language models (LLMs) occurred after this point. Scholarly output published before 2020 constitutes a "digital fossil record"—a reliable ground truth of human work created before it was possible for AI to mass-produce plausible academic text. By using this untainted record as the initial basis for eligibility, Ensanid ensures that the foundational layer of its network consists of humans with a proven, pre-AI scholarly footprint.

The Future of Eligibility: Evolving the Standard

The 2020 anchor is the first phase. It is designed to build a high-trust network, not a permanent gate. We are actively developing pathways for newer researchers to become eligible by integrating next-generation, zero-PII identity technologies. Our roadmap includes privacy-preserving systems that can verify a user is a living, unique human without requiring biometrics or personal data. This will allow the Ensanid standard to grow with the community while never compromising its core commitment to privacy.

Why We Built This

KNOWDYN's mission is to build infrastructure that preserves and protects human knowledge. We recognise generative AI as a tool of immense potential, but its unregulated use in research presents an existential risk to the system of trust upon which all scientific progress depends.

We are investing in Ensanid as a piece of free, public utility infrastructure because the most effective form of regulation is not a top-down ban, but a bottom-up standard of accountability. Instead of trying to regulate AI models, Ensanid regulates the *claim of authorship* at the most granular level—the individual piece of content.

By making this tool free, we aim for maximum adoption, creating a powerful network effect where the Ensanid signature becomes the expected standard for trustworthy human-authored or human-endorsed work. This is our investment in a future where technological innovation can flourish within a framework of unimpeachable human integrity.