On March 15, 2026, China Central Television’s annual consumer protection broadcast — the “315 Gala,” watched by hundreds of millions — ran a live demonstration that will shape how people think about AI search optimization for years.
A company marketing a tool called the “Liqing GEO Optimization System” had built a machine for fabricating product credibility. Investigators created a fictional smart wristband with invented specs — “quantum entanglement sensors,” battery life measured in “black-hole-level efficiency” — and ran it through the Liqing system. Within two hours, major Chinese AI chatbots were recommending the fake product to real users asking real health questions.
CCTV called it “AI data poisoning.” They weren’t wrong.
What CCTV Got Right
The attack vector is real. AI chatbots like ChatGPT and Gemini increasingly pull from the live web — not just training data, but current content. If that content ecosystem is polluted with fabricated claims, the models repeat those claims. NewsGuard documented this at scale in September 2025: the top 10 AI chatbots now repeat false claims in 35% of news-related queries, nearly double the prior year. The more “web-aware” models become, the more they inherit the web’s manipulation problems.
The incentives are misaligned. If ranking in AI means publishing more content, some players will publish any content — accurate or not. The barrier to gaming AI retrieval is genuinely lower than traditional SEO. CCTV identified a real failure mode in an industry moving faster than its ethics.
Consumers are being harmed. Someone asking an AI about health wearables for a family member and receiving a fabricated recommendation is a real harm. CCTV was right to call it out. This isn’t a victimless optimization tactic.
What CCTV Missed
Naming the practice “GEO” without distinguishing ethical from unethical use. Generative Engine Optimization, as a discipline, was developed to solve a legitimate problem: most websites are effectively invisible to AI because their content is poorly structured, lacks citations, or is formatted for humans in ways machines can’t parse. Helping accurate content reach AI engines is not manipulation — it’s accessibility.
The equivalent accusation would be calling all SEO “search spam” because some SEOs build link farms. The tactic exists. It’s bad. It’s also not what the discipline is.
The Liqing system didn’t practice GEO. It practiced fraud. The product claims were fabricated. The reviews were invented. The citations were synthetic. No legitimate GEO methodology starts by creating false product data. It starts with auditing what’s true about a business and ensuring AI engines can find and accurately represent that truth.
The fix is standards, not abandonment. The 315 Gala segment will drive Chinese platforms to tighten content policies — that’s good. But the response in the West should be to clarify and codify what ethical GEO looks like, not to abandon the practice because bad actors exist.
The Line Between Ethical and Black-Hat GEO
| Ethical GEO | Black-Hat GEO (what CCTV exposed) |
|---|---|
| Structures accurate, expert-authored content for AI readability | Fabricates product claims, fake reviews, invented specs |
| Uses schema markup on real, verified facts | Seeds false data into AI retrieval pipelines |
| Builds genuine citations in credible publications | Creates synthetic “authority” across disposable platforms |
| Audit-first: finds gaps in how existing accurate content is represented | Content-first: creates content regardless of accuracy |
| Transparent: no deception about content origin | AI cloaking: different content for AI crawlers vs. humans |
| Improves how AI represents a real business | Creates a fictional business’s AI presence from nothing |
The distinction isn’t subtle. One practice helps AI engines do their job better. The other teaches them to lie.
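To make the “schema markup on real, verified facts” row concrete: the ethical version is usually nothing more exotic than a schema.org Product block embedded as JSON-LD, where every field can be checked against the real product. The sketch below builds one in Python; the product name, description, and price are hypothetical placeholders, not claims about any actual product.

```python
# Minimal sketch: generating schema.org Product markup (JSON-LD) for facts
# you can verify. All values below are placeholders -- substitute only
# claims that are true of the actual product.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Fitness Band 3",  # hypothetical product name
    "description": "Fitness wristband with heart-rate tracking.",
    "brand": {"@type": "Brand", "name": "Acme"},
    "offers": {
        "@type": "Offer",
        "price": "79.00",  # must match real, current pricing
        "priceCurrency": "USD",
    },
}

# The output belongs in the page head, wrapped in:
# <script type="application/ld+json"> ... </script>
print(json.dumps(product_jsonld, indent=2))
```

The black-hat version uses exactly the same mechanism with fabricated values — which is why auditing the facts, not the markup, is where the ethical line sits.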
What This Means for Buyers
If you’re evaluating a GEO provider, ask these questions:
- Do you create content about claims we haven’t made? If yes, walk away.
- Do you show us what you’re publishing before it goes live? Ethical providers do.
- Is everything you optimize factually accurate and attributable to our actual product? Non-negotiable.
- How do you handle AI crawlers differently from human visitors? There should be no difference.
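The last question — whether AI crawlers see different content than humans — is one a buyer can spot-check directly. A rough sketch: fetch the same URL with a browser User-Agent and with an AI-crawler User-Agent (GPTBot is shown as one example; the URL and similarity threshold are assumptions), then compare the visible text. This is a crude diff, not a definitive cloaking detector.

```python
# Rough cloaking spot-check: does a page serve AI crawlers different
# content than human visitors? Fetch twice with different User-Agents
# and compare the visible text.
import difflib
import re
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
AI_UA = "GPTBot/1.0"  # one known AI-crawler User-Agent token

def fetch(url: str, user_agent: str) -> str:
    """Fetch a URL with a given User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def visible_text(html: str) -> str:
    """Crudely strip scripts, styles, and tags; enough for a diff."""
    html = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def looks_cloaked(html_a: str, html_b: str, threshold: float = 0.9) -> bool:
    """True if the two versions differ substantially (possible cloaking)."""
    ratio = difflib.SequenceMatcher(
        None, visible_text(html_a), visible_text(html_b)
    ).ratio()
    return ratio < threshold

# Usage (network call, hypothetical URL):
#   a = fetch("https://example.com/", BROWSER_UA)
#   b = fetch("https://example.com/", AI_UA)
#   print("possible cloaking" if looks_cloaked(a, b) else "content matches")
```

Minor differences (timestamps, ad slots) are normal; wholly different copy for the crawler is the red flag.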
The CCTV story is a gift to buyers: it makes the right questions obvious.
Red flags to watch for: Any GEO provider that promises AI citations without first auditing your existing content, that charges for “synthetic authority building,” or that can’t show you exactly what they’re publishing on your behalf is operating in black-hat territory.
Where GEORaiser Stands
Our methodology starts with an audit. We read your existing content, identify where accurate information about your business isn’t reaching AI engines, and fix the structural and formatting gaps that cause that. We don’t invent claims. We don’t syndicate fabricated reviews. We don’t build fake citation networks.
Every recommendation we make, you can verify against your actual product, pricing, and customer outcomes. That’s not a feature — it’s the baseline for operating in this space responsibly.
The CCTV segment described something real. It wasn’t describing us. If you want to understand exactly what ethical GEO looks like in practice, we’ve documented our methodology in full.
The CCTV story will reach Western tech press within weeks. The brands that publicly clarify the distinction between ethical and black-hat GEO now will own that positioning when the broader conversation arrives.
See What Ethical GEO Looks Like on Your Site
We start with your existing content — not invented claims. Free audit shows exactly where you’re losing AI visibility and how to fix it.
Get Free GEO Audit →