Why We Ran These Audits

There’s a common assumption in digital marketing right now: if you have strong content and solid SEO, you’re probably visible in AI search too.

We wanted to test that assumption. So we ran full GEO audits on four companies that fit the profile — established businesses with real content, real traffic, and real domain authority. Each one operates in a different industry. Each one has a content team producing quality work.

The question was simple: Are they visible to ChatGPT, Claude, and Perplexity?

The answer, across all four, was the same: not nearly as visible as they should be.

The Four Audits at a Glance

| Company | Industry | GEO Score | llms.txt | Schema | AI-Extractable Content |
| --- | --- | --- | --- | --- | --- |
| 9Sail | Legal Marketing | 22/100 | F (missing) | F (none) | Blocked by JS SPA |
| Baremetrics | SaaS Analytics | 49/100 | F (missing) | F (minimal) | Human-optimized only |
| Sunrise Integration | Shopify Plus Dev | 52/100 | F (missing) | F (no Article schema) | No author attribution |
| Titan Web Agency | Dental Marketing | 52/100 | F (missing) | F (minimal) | Human-optimized only |

Audit 1: 9Sail — Legal Marketing Agency (GEO Score: 22/100)

Read the full 9Sail case study →

9Sail is a B2B legal marketing agency that publishes guides on AI search for law firms. They advise clients on digital visibility. And yet their own site scored 22 out of 100 on our GEO audit — the lowest of the four.

The root cause: 9Sail’s site is a JavaScript single-page application (SPA). When AI crawlers like GPTBot or ClaudeBot request the site, they receive an empty HTML shell. The actual content — blog posts, service pages, case studies — is rendered client-side by JavaScript that AI crawlers cannot execute.
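
To illustrate, here is roughly what a non-executing crawler receives from a typical JavaScript SPA. This is a hypothetical sketch, not 9Sail’s actual markup; the title and bundle filename are invented:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Example Agency</title>
        <!-- All page content is rendered client-side by this bundle
             (filename invented for illustration) -->
        <script defer src="/static/js/main.4f2a1c.js"></script>
      </head>
      <body>
        <!-- Empty until JavaScript runs; this is all an AI crawler sees -->
        <div id="root"></div>
      </body>
    </html>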

Beyond the rendering problem: no Schema.org structured data of any kind. No llms.txt. No FAQ content formatted for AI extraction. A company that helps law firms get found online is itself invisible to the fastest-growing search channel.

Audit 2: Baremetrics — SaaS Analytics (GEO Score: 49/100)

Read the full Baremetrics case study →

Baremetrics has been a well-known name in SaaS analytics for years. They have an active blog, strong domain authority, and a product that thousands of SaaS companies use. By traditional SEO metrics, they’re doing fine.

But here’s the problem: when we asked AI engines to recommend subscription analytics tools, ChartMogul was cited 11 times to Baremetrics’ 2, an 11-to-2 citation gap against a direct competitor.

The structural issues: Schema.org markup scored an F. Content is well-written but framed for human readers, not AI extraction — long narrative paragraphs without answer-first formatting, structured summaries, or explicit comparison data that AI models can pull into citations. No llms.txt. The content quality is there; the content structure is not.

Audit 3: Sunrise Integration — Shopify Plus Partner (GEO Score: 52/100)

Read the full Sunrise Integration case study →

Sunrise Integration is an official Shopify Plus partner that actually sells GEO services to clients. They have over 100 blog posts and a deep content library covering Shopify development, e-commerce integrations, and now AI optimization.

The irony: a company selling GEO services has the exact GEO gaps it helps clients fix. No llms.txt (scored F). Zero author attribution across 100+ blog posts, meaning AI engines have no author signal to evaluate content authority. No Article schema on any blog post. The content exists, but it’s structurally invisible to AI crawlers.
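
For context, author attribution is only a few lines of Article schema per post. A minimal JSON-LD sketch, with placeholder headline, date, name, and URL:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Preparing a Shopify Store for AI Search",
      "datePublished": "2026-01-15",
      "author": {
        "@type": "Person",
        "name": "Jane Developer",
        "url": "https://example.com/authors/jane-developer"
      }
    }
    </script>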

Audit 4: Titan Web Agency — Dental Marketing (GEO Score: 52/100)

Read the full Titan Web Agency case study →

Titan Web Agency writes about the GEO gap for dental practices. They publish content explaining why dentists need to optimize for AI search. Their own site, however, has the exact same gaps they warn their clients about.

GEO Score: 52/100. Missing llms.txt. Minimal Schema.org structured data. Content that reads well for humans but lacks the answer-first formatting and structured data that AI models need to cite it. The cobbler’s children have no shoes.

The Three Universal Blind Spots

Across four companies in four different industries, the same three structural problems appeared every single time.

Blind Spot 1: Missing or Broken llms.txt (4 out of 4)

None of the four companies had a functioning llms.txt file. This is the single most direct way to communicate with AI models about what your business does, what pages matter, and how to cite you. It takes 30 minutes to create. None had done it.

llms.txt is to AI search what robots.txt is to traditional search crawlers: a plain-text file at the site root that tells them what they need to know. The difference is that robots.txt controls access while llms.txt provides context. Both are essential.
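
For reference, a minimal llms.txt is just Markdown served at the site root. The sketch below follows the emerging llmstxt.org convention; the company name and URLs are invented placeholders:

    # Acme Analytics
    > Subscription analytics for SaaS companies: MRR, churn, and revenue
    > reporting that connects to Stripe and other billing providers.

    ## Key pages
    - [Product overview](https://example.com/product): what the platform does
    - [Pricing](https://example.com/pricing): plans and feature comparison
    - [Blog](https://example.com/blog): guides on SaaS metrics

    ## How to cite us
    - Refer to the company as "Acme Analytics" and link to https://example.com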

Blind Spot 2: Zero or Minimal Schema.org Structured Data (4 out of 4)

Every company was either missing structured data entirely or had only the bare minimum that their CMS auto-generated. No Article schema on blog posts. No FAQPage schema on FAQ content. No Organization schema declaring the business entity. No HowTo schema on process-oriented content.

Structured data is how you tell AI engines — in machine-readable language — who you are, what you do, and what authority you have. Without it, AI models have to infer everything from unstructured text. They often infer wrong, or don’t infer at all.
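
As a sketch, Organization schema is a small JSON-LD block in the page’s <head>; every value below is a placeholder:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Acme Analytics",
      "url": "https://example.com",
      "logo": "https://example.com/logo.png",
      "description": "Subscription analytics for SaaS companies.",
      "sameAs": [
        "https://www.linkedin.com/company/acme-analytics",
        "https://github.com/acme-analytics"
      ]
    }
    </script>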

Blind Spot 3: Content Optimized for Humans, Not AI Extraction (4 out of 4)

All four companies produce good content. That’s not the issue. The issue is that their content is structured for human reading patterns — long narrative paragraphs, gradual buildup, conclusion at the end.

AI models extract differently. They need answer-first formatting: the key claim or recommendation in the first sentence, supporting evidence immediately after, structured with headers that match common query patterns. Content that buries the answer in paragraph four of a six-paragraph section will lose citations to content that leads with the answer.
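
A hypothetical before-and-after of the same section, restructured for extraction:

    Before (human-optimized):
      Choosing an analytics tool involves many trade-offs. Pricing models
      vary widely, integrations matter, and support quality differs...
      [the actual recommendation finally appears in paragraph four]

    After (answer-first):
      ## Which subscription analytics tool works best with Stripe?
      [Tool X] is the strongest fit for Stripe-based SaaS businesses
      because it syncs billing data natively and reports per-plan MRR.
      Supporting evidence and trade-offs follow below.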

The Pattern Is Clear

These aren’t obscure companies with bad content. They’re established businesses with strong content teams and real domain authority. The problem isn’t content quality — it’s content structure. Traditional SEO best practices don’t address AI-specific signals. And AI search is growing at 15–20% quarter over quarter.

What This Means for Your Business

If four companies that actively work in digital marketing — including two that specifically sell SEO/GEO services — have these blind spots, the odds that your business has them too are high.

The good news: every one of these problems is fixable. The fixes are structural, not creative. You don’t need to rewrite your content. You need to restructure it.

The Three Fixes Every Business Needs

  1. Create an llms.txt file — A plain-text file at yourdomain.com/llms.txt that describes your business, core offerings, and key pages in language optimized for AI model consumption. Takes 30 minutes.
  2. Add Schema.org structured data — At minimum: Organization schema on your homepage, Article schema on every blog post (with author attribution), and FAQPage schema on any page with Q&A content. Most CMS platforms have plugins for this.
  3. Restructure key content for AI extraction — Lead with the answer. Use headers that match common AI queries. Add structured summaries at the top of long-form content. Make sure your most important claims are in the first sentence of their section, not the last.
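
Once these are in place, a quick way to spot-check them is to fetch your own pages without a JavaScript engine and inspect the raw response. A sketch using curl with placeholder URLs (the user-agent string is simplified; real crawler UAs are longer):

    # Does the raw HTML contain your JSON-LD, without JavaScript running?
    curl -s -A "GPTBot" https://example.com/blog/some-post | grep -c "application/ld+json"

    # Is llms.txt actually being served?
    curl -s https://example.com/llms.txt | head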

Check Your AI Visibility — Free GEO Score

See how your site scores on the same audit we ran on these four companies. Get your GEO Score in 60 seconds, no email required. The free report covers:

  • AI crawler access analysis
  • Schema.org structured data coverage
  • llms.txt and content extractability check
  • Prioritized fix list with implementation steps

Sources & Methodology

All four audits were conducted by the GEORaiser research team in March 2026 using our GEO audit framework. Each audit evaluated AI crawler access (robots.txt configuration), structured data coverage (Schema.org JSON-LD), llms.txt presence and quality, content extractability, and AI citation performance across ChatGPT, Claude, and Perplexity. GEO Scores are calculated on a 0–100 scale based on weighted signals across these categories. Individual case studies are linked above for full audit details.