
AI Makes Mistakes. Here's What We Do About It.

Every AI system built on large language models can and will produce inaccuracies. That includes ours. Authenticity and trust start with honesty about what the technology can and cannot do. That is why we make this conversation a first-class part of what we offer.

The Reality of AI-Generated Content

Large language models (LLMs) do not retrieve facts from a database. They generate responses by predicting what comes next, informed by their training and the sources they're given. This means they sometimes produce text that sounds authoritative but is factually wrong. The industry calls this “hallucination.” It is a known property of the technology, not a bug that will be patched away in the next release.

Most AI companies downplay this. They lead with capability and leave accuracy as an afterthought. We think that approach is wrong, especially when the content involves history, education, and the scholarship of real institutions. If a museum puts its name behind an Echo, that Echo needs to earn the trust placed in it. And earning trust means being honest about where the risks are and what we do about them.

Our approach is not to promise perfection. It is to build layered systems that catch mistakes, reduce their frequency over time, and keep a human expert in the loop at every stage.

Layers of Defense

No single mechanism eliminates AI error. Instead, we use multiple reinforcing layers, each catching what the others miss.

Research Grounding

Every Echo draws from a curated knowledge base of primary sources, scholarly articles, collection records, and expert-authored FAQ entries. The AI is constrained to this foundation. It is not browsing the internet or simply improvising from general training data. When an Echo speaks, it is drawing from the same research a scholar would cite.

Why this matters

This does not eliminate error, but it dramatically narrows the space where errors can occur. The AI generates from vetted scholarship, not from the open web.
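To make this concrete, here is a minimal sketch of what retrieval-grounded generation can look like. The names, data structures, and prompt wording are illustrative assumptions, not our production code; the point is that the model only sees passages an expert has already vetted, and is asked to decline rather than improvise when those passages run out.

    # Illustrative sketch of retrieval-grounded generation.
    # Names, structure, and prompt wording are assumptions for this example,
    # not production code.
    from dataclasses import dataclass

    @dataclass
    class Source:
        title: str    # e.g. a collection record or scholarly article
        excerpt: str  # a vetted passage the expert has approved

    def build_grounded_prompt(question: str, sources: list[Source]) -> str:
        """Constrain generation to expert-curated sources instead of open-ended recall."""
        cited = "\n\n".join(f"[{i + 1}] {s.title}\n{s.excerpt}" for i, s in enumerate(sources))
        return (
            "Answer using ONLY the sources below. If the sources do not cover "
            "the question, say so rather than guessing.\n\n"
            f"Sources:\n{cited}\n\nQuestion: {question}"
        )

    # The retrieval step (not shown) selects the passages from the Echo's curated
    # knowledge base that are most relevant to the visitor's question.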

Moderation Rules

Scholars and curators define explicit guardrails for every Echo: topics to avoid, sensitive subjects to handle carefully, factual boundaries the Echo must not cross. These rules act as hard constraints on what the AI can say, independent of what it might otherwise generate.

Why this matters

Moderation rules are not suggestions. They are enforced boundaries that prevent entire categories of inaccurate or inappropriate responses before they reach the visitor or learner.
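As a rough illustration, moderation can be thought of as a check that runs after the model drafts a response and before anything reaches the visitor or learner. The rule fields and the simple phrase matching below are assumptions for the example; real guardrails are richer than this, but the principle of a hard, enforced boundary is the same.

    # Illustrative sketch of moderation rules enforced as hard constraints.
    # Rule fields and the phrase-matching check are assumptions for this example.
    from dataclasses import dataclass

    @dataclass
    class ModerationRule:
        name: str
        blocked_phrases: list[str]  # claims or topics the Echo must not assert
        redirect_message: str       # what the visitor sees instead

    def enforce(rules: list[ModerationRule], draft_response: str) -> str:
        """Check a drafted response against expert-defined rules before it is shown."""
        lowered = draft_response.lower()
        for rule in rules:
            if any(phrase.lower() in lowered for phrase in rule.blocked_phrases):
                # The rule wins: the draft is discarded, not merely annotated.
                return rule.redirect_message
        return draft_response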

Multi-Echo Cross-Checking

In Assemblies, multiple Echoes engage in structured discussion. Each one is grounded in different sources and perspectives. When one Echo makes a claim, the others can challenge it from their own knowledge base. This creates a natural fact-checking dynamic that does not exist in single-voice AI experiences.

Why this matters

A Structured Debate or Panel Discussion is not just a learning format. It is an accuracy mechanism. Competing perspectives surface contradictions that a single voice might let pass unchallenged.
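A simplified sketch of one turn in an Assembly is below. The Echo interface and the challenge step are assumptions for illustration, not the actual implementation; the point is that every claim is exposed to other Echoes grounded in different sources, and their counterpoints become part of the discussion.

    # Illustrative sketch of cross-checking in an Assembly.
    # The Echo interface and the challenge step are assumptions for this example.
    class Echo:
        def __init__(self, name: str, knowledge_base: list[str]):
            self.name = name
            self.knowledge_base = knowledge_base

        def respond(self, prompt: str) -> str:
            # Grounded generation, as sketched earlier; stubbed here.
            return f"(grounded answer from {self.name})"

        def challenge(self, claim: str) -> str | None:
            # Compare the claim against this Echo's own sources; stubbed here.
            return None

    def panel_turn(speaker: Echo, panel: list[Echo], prompt: str) -> list[str]:
        """One turn of a structured debate: a claim, then challenges from the other Echoes."""
        transcript = [f"{speaker.name}: {speaker.respond(prompt)}"]
        claim = transcript[0]
        for other in panel:
            if other is speaker:
                continue
            counterpoint = other.challenge(claim)
            if counterpoint:
                transcript.append(f"{other.name}: {counterpoint}")
        return transcript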

Expert Oversight via Insights

Every conversation generates data. The Insights dashboard surfaces common questions, moderation events, flagged topics, and knowledge base accuracy metrics. Scholars see what visitors and learners are asking, where the Echo is confident, and where it struggles.

Why this matters

This is not passive analytics. It is an active monitoring system that tells experts exactly where to focus their attention and what to fix next.
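In rough terms, the dashboard rolls per-conversation signals up into patterns an expert can act on. The event fields and the summary below are assumptions for the example, not the actual Insights schema.

    # Illustrative sketch of rolling conversation events into dashboard signals.
    # Event fields are assumptions, not the actual Insights schema.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class ConversationEvent:
        question: str
        moderation_triggered: bool
        sources_matched: int  # how many knowledge-base passages matched the question

    def summarize(events: list[ConversationEvent]) -> dict:
        """Surface the patterns an expert should look at first."""
        return {
            "common_questions": Counter(e.question for e in events).most_common(10),
            "moderation_events": sum(e.moderation_triggered for e in events),
            # Questions with no source coverage point at gaps in the knowledge base.
            "likely_gaps": [e.question for e in events if e.sources_matched == 0],
        }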

When Mistakes Get Through

Layers of defense reduce errors. They do not eliminate them. Some inaccuracies will reach visitors and learners. What matters is what happens next.

Every conversation an Echo has feeds back into the system. Questions that reveal gaps in the knowledge base, responses that trigger moderation flags, patterns that suggest the Echo is uncertain or inconsistent: all of this surfaces in the expert's Insights dashboard. The expert reviews, adds new sources or FAQ entries, tightens moderation rules, and the Echo improves.

This is not a one-time fix. It is a continuous cycle. The more conversations an Echo has, the more the expert learns about where it needs work. The system gets better over time, guided by the people who know the history best.

The refinement cycle

1. Conversations happen. Visitors and learners ask questions the expert did not anticipate. Some responses are accurate. Some are not.

2. Insights surface patterns. The system identifies common questions, knowledge gaps, flagged responses, and areas of low confidence.

3. Experts refine. New sources, FAQ entries, and moderation rules are added. The expert addresses exactly the gaps the data revealed.

4. The Echo improves. The next conversation is better than the last. Over weeks and months, accuracy compounds as the knowledge base deepens.

5. The cycle continues. New questions surface new gaps. The expert refines again. The Echo is never finished. It improves under expert guidance.
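One pass of this cycle can be sketched roughly as follows. The KnowledgeBase structure and its methods are assumptions for illustration; the essential point is that refinements are driven by the gaps the data revealed, and anything still uncovered rolls into the next cycle.

    # Illustrative sketch of one pass of the refinement cycle.
    # The KnowledgeBase type and its fields are assumptions for this example.
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeBase:
        faq: dict[str, str] = field(default_factory=dict)  # question -> expert-written answer
        sources: list[str] = field(default_factory=list)   # vetted references

        def covers(self, question: str) -> bool:
            return question in self.faq

    def refine(kb: KnowledgeBase, likely_gaps: list[str], expert_answers: dict[str, str]) -> list[str]:
        """Apply the expert's new FAQ entries and report which gaps remain open."""
        for question, answer in expert_answers.items():
            kb.faq[question] = answer
        # Whatever is still uncovered rolls forward into the next cycle.
        return [q for q in likely_gaps if not kb.covers(q)]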

Why We Built It This Way

Most AI products treat accuracy as a technical problem to be solved in the model layer. Train a better model, get fewer errors. That approach has limits. Models improve, but hallucination does not go to zero. It may never go to zero.

We treat accuracy as a system design problem. The model is one layer. The curated knowledge base is another. Moderation rules are another. Multi-Echo discussion is another. Human oversight is another. The feedback loop that connects all of these together is what makes the difference.

The expert behind an Echo is not a passive administrator. They are the scholar whose research grounds the Echo, whose judgment defines its boundaries, and whose ongoing attention makes it better over time. The AI provides the voice. The expertise comes from the people who know the history.

Defense in Depth

No single layer is responsible for accuracy. Each layer catches what the others miss. This is the same principle used in aviation safety, medical protocols, and any system where failure has real consequences.

Human Expertise in the Loop

The expert is not reviewing AI output after the fact. They are shaping the system from the start: the knowledge base, the persona, the guardrails, and the ongoing refinement. The AI works within the boundaries they define.

What This Means for You

For Educators

You are right to ask whether AI-generated content belongs in your classroom. The answer is: it depends on the safeguards. Every Echo in the Lending Library is backed by an institution, grounded in curated research, and continuously refined by the scholar who built it. You can see which institution created each Echo, what sources it draws from, and how it handles topics outside its expertise. If an Echo says something wrong in your classroom, that feedback reaches the expert and makes the Echo better for the next teacher who uses it.

For Institutions

Your reputation is on the line. We understand that. An Echo carries your institution's name, and every response it gives reflects on your scholarship. That is why you control every layer: the knowledge base, the persona, the moderation rules, and the ongoing refinement. The Insights dashboard shows you exactly what your Echo is saying, what questions it faces, and where it needs attention. You are not handing your expertise to an AI and hoping for the best. You are guiding a system that gets better under your direction.

Ask Us the Hard Questions

Trust is not a footnote. It is part of what we build. The institutions and educators we serve deserve a direct conversation about accuracy, and we think that conversation should stand alongside the product itself. If you have questions about how we handle specific scenarios or what happens when something goes wrong, we want to hear them.