
How to use AI in EdTech without losing trust
Most EdTech companies treat their use of AI like a trade secret. They quietly use large language models to generate content, avoid mentioning it publicly, and hope no one asks too many questions. The instinct is understandable: there is a bias against AI-generated educational content, and for good reason. But secrecy is the wrong response to legitimate concerns.
Some teams are starting to take the opposite approach: they publish the full editorial process, including where and how AI is used, on a dedicated page for anyone to read. The entire set of editorial standards is public. This should be normal. What follows is a framework EdTech teams can adopt to use AI responsibly and transparently.
The problem of concealing the use of AI in EdTech
The concerns about AI in educational content are straightforward. Large language models hallucinate: they produce text that sounds authoritative while getting facts wrong. They fabricate quotes. They present disputed claims as settled facts. These are not small problems in education, where the whole point is to convey accurate information.
But the solution is not to avoid AI entirely. AI genuinely helps with content creation: it speeds up drafting, makes it easier to structure complex topics, and lets smaller teams produce material at a pace that would normally require more people. The solution is to use AI responsibly and honestly.
When EdTech companies hide their AI usage, two problems arise. First, they lose the opportunity to demonstrate that they have safeguards in place. Second, they undermine trust with learners and educators when the content eventually turns out to be AI-assisted. And people do find out when AI-generated content isn’t properly reviewed: strange phrasing, an overconfident tone on sensitive topics, a quote that doesn’t exist.
Publishing editorial standards is partly a trust-building exercise. But more than that, it is a forcing function: once a team publicly commits to a process, it has to actually follow it.
A 4-step editing process for AI-assisted content in EdTech
A robust editing process for AI-assisted educational content typically takes 2-4 hours per piece. Each step works as follows.
Step 1: Research the topic
Before drafting anything, the team must identify the subject, define the scope, and gather primary and secondary sources. For articles about historical events, that means official records, contemporary accounts, and reputable scholarship (not just a quick read of Wikipedia). Primary sources should be prioritized. Key claims should be cross-referenced with at least two independent sources. For specialized topics, expert reviews are essential. This step must be performed entirely by humans. AI should not choose topics or evaluate sources.
Step 2: AI-assisted drafting
This is where AI enters the workflow. A large language model helps structure and draft the content based on the research gathered in Step 1. AI can move a team from a pile of notes and sources to a coherent narrative structure faster than writing from scratch.
The key is to never treat AI as a source of facts. It is a writing tool, not a research tool, and the distinction matters. A team shouldn’t ask an AI, “What happened during the California Gold Rush?” and publish the answer. Instead, feed it verified information and ask it to organize that information into a readable form.
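To make the distinction concrete, here is a minimal sketch of what a drafting call might look like, assuming the OpenAI Python SDK; the model name, prompt wording, and the verified_notes variable are all illustrative, not a prescribed setup.

```python
# Sketch: AI as a writing tool, not a research tool.
# Assumes the OpenAI Python SDK; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

# Facts gathered and verified by humans in Step 1.
verified_notes = """
- Gold discovered at Sutter's Mill, Coloma, California, on January 24, 1848.
- Migration surged after President Polk confirmed the find in December 1848.
- Roughly 300,000 people came to California during the rush.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model your team has vetted
    messages=[
        {
            "role": "system",
            "content": (
                "You are a drafting assistant. Use ONLY the facts provided "
                "by the user. Do not add dates, figures, names, or quotes "
                "of your own. If something is missing, insert a [TODO] marker."
            ),
        },
        {
            "role": "user",
            "content": "Organize these verified notes into a short, readable "
                       "draft for adult learners:\n" + verified_notes,
        },
    ],
)

draft = response.choices[0].message.content  # still goes through Steps 3 and 4
```

Constraining the model to supplied facts does not remove the need for Step 3; it only reduces how much the fact-checkers have to untangle.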
Step 3: Manual fact-checking
All claims in the draft must be verified against reliable sources. This is the step that separates responsible AI-assisted content from irresponsible AI-generated content.
Teams must cross-check dates, names, and statistics against reliable references. Quotations should be verified against the original text. Scientific claims must be checked against peer-reviewed research. Reviewers should check for logical consistency and flag anything that sounds overconfident or oversimplified. Every error the AI introduced gets corrected against the sources.
This step catches more problems than most people expect. AI models are especially prone to subtle errors: a date off by a year, a quote attributed to the wrong person, two similar but separate events conflated. These are exactly the kinds of mistakes that undermine educational credibility, and exactly the kinds that slip through without human review.
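One lightweight way to make this step auditable is a claims ledger: every checkable claim in the draft gets an entry with its sources and verification status. A minimal sketch follows; the field names, status values, and example sources are assumptions, not an established format.

```python
# Sketch: a claims ledger for Step 3 fact-checking.
# Field names and status values are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                         # claim as it appears in the draft
    sources: list[str] = field(default_factory=list)  # two independent sources for key claims
    status: str = "unchecked"                         # unchecked | verified | corrected | removed
    note: str = ""                                    # what was wrong and how it was fixed

ledger = [
    Claim(
        text="Gold was discovered at Sutter's Mill on January 24, 1848.",
        sources=["Library of Congress exhibit", "Bancroft, History of California"],
        status="verified",
    ),
    Claim(
        text="Quote attributed to Sam Brannan.",
        status="unchecked",  # must be checked against the original text
    ),
]

# Nothing ships while anything is still unchecked.
unchecked = [c.text for c in ledger if c.status == "unchecked"]
if unchecked:
    print("Blocked from publication; still unchecked:", unchecked)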
Step 4: Editorial review
The final step is a thorough editorial review for clarity, tone, and readability. Does the piece teach what it is supposed to teach? Is the narrative engaging? Is it pitched at the right level for the target audience? Would a curious adult who finishes it feel they actually learned something?
A senior team member should own this final review. Having leaders directly involved in content quality sends a signal internally and externally that accuracy is non-negotiable.
Why every EdTech team needs a no-fake-quote policy
One commitment deserves special attention because it addresses what may be the most dangerous failure mode of AI in education: citation fabrication. Large language models routinely generate references that don’t exist. They cite books that were never written, attribute findings to studies that were never conducted, and reference articles with plausible titles that are entirely fictional. In an educational context, this is a serious breach of integrity.
Teams must commit to never publishing AI-generated references or citations without verifying that the sources exist and support the claims. Every citation must be checked by a human before publication. This sounds obvious, but in practice it is rare: many platforms that use AI to generate educational content either have no equivalent policy or do not publish it.
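Part of this check can be automated: whether a source exists at all is machine-checkable, even though whether it supports the claim is not. As a hedged sketch, a pre-screen like the following can flag fabricated DOIs before a human reads the sources; the example DOIs are illustrative, and “exists” here means only that the identifier resolves.

```python
# Pre-screen AI-suggested citations: confirm each DOI at least resolves.
# Necessary but not sufficient: a human must still read the source and
# confirm it actually supports the claim being cited.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if https://doi.org/<doi> redirects to a real record."""
    try:
        resp = requests.head(
            f"https://doi.org/{doi}", allow_redirects=True, timeout=timeout
        )
        # Some publishers reject HEAD requests, so treat failures as
        # "re-check by hand", not as proof of fabrication.
        return resp.status_code < 400
    except requests.RequestException:
        return False

citations = ["10.1038/s41586-020-2649-2", "10.9999/definitely-not-real"]
for doi in citations:
    verdict = "resolves" if doi_resolves(doi) else "FAILED -- check by hand"
    print(f"{doi}: {verdict}")
```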
Why publishing editorial standards works
Teams that have adopted open editorial standards consistently report several benefits.
It raises the bar.
Once the process is public, cutting corners stops feeling like an option. There is no internal debate about whether to skip fact-checking on “simple” topics: the published standards are the bare minimum, and everyone involved in content production knows them.
It builds trust with your audience.
Learners who care about accuracy also care about transparency. In a market crowded with AI-generated content of questionable quality, a visible editorial process is a real differentiator.
It starts conversations.
Once a team publishes its process, other founders and content creators reach out to compare approaches. The more companies make their processes public, the more accountable the industry becomes.
It forces honest self-evaluation.
No process is perfect, and AI-assisted drafting carries risks that purely human writing does not. Publishing standards creates accountability: when errors are found, they must be corrected publicly, with timestamps, under a visible correction policy. That kind of accountability is uncomfortable, but necessary.
A framework for other EdTech teams
Teams building educational content with AI assistance should commit to the following.
Publish the process.
Rather than vague statements about “using AI responsibly,” be specific about where AI is used, where humans intervene, and what safeguards are in place. Vague assurances are worthless; specific commitments can be evaluated and held to account.
Separate drafting from fact-checking.
AI is good at the former and unreliable at the latter. Treat them as separate steps with separate criteria, and never let the speed of AI drafting shrink the time allotted for human verification.
Check every citation.
This is non-negotiable for educational content. If a quote or reference was surfaced or generated by AI, confirm that the source exists and supports the claim. If you cannot confirm it, delete it.
Have a correction policy.
You will make mistakes, and how you handle them matters more than whether you make them. Promise a specific turnaround for corrections, mark corrections visibly, and give readers a way to report errors (a minimal log format is sketched after this list).
Leave final judgment to humans.
AI should accelerate your process, not replace your judgment. The moment you remove human oversight from the creation of educational content is the moment you stop being an education company and become a content production company.
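As one possible shape for the correction-policy machinery mentioned above, here is a minimal sketch of an append-only, timestamped corrections log; the field names and the JSON-lines file convention are assumptions, not an established standard.

```python
# Sketch: an append-only, timestamped public corrections log.
# Field names and the JSON-lines format are illustrative assumptions.
import json
from datetime import datetime, timezone

correction = {
    "article": "california-gold-rush",         # slug of the corrected piece
    "reported": "2025-01-10T14:03:00+00:00",   # when a reader flagged it
    "corrected": datetime.now(timezone.utc).isoformat(),
    "before": "Gold was discovered in 1849.",
    "after": "Gold was discovered on January 24, 1848.",
    "visible_note": "Corrected the discovery date; thanks to a reader report.",
}

with open("corrections.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(correction) + "\n")     # one entry per line, never edited
```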
The big picture
The EdTech industry is at a crossroads. AI makes it possible to create educational content at unprecedented scale and speed, and that is genuinely exciting. But scale without quality control is not a contribution to education; it is a contribution to misinformation.
Companies that show their work will earn long-term trust. What matters is not just polished output but the process behind it: where AI helps, where humans intervene, and what standards govern the whole operation. If publishing a framework like this encourages a few more teams to open up their processes, the educational content ecosystem will be better for it.
