The Scores
These are companies that sell search visibility for a living. Most of them are invisible to AI search.
| Agency | Industry | GEO Score |
|---|---|---|
| HT&T Consulting | Digital Agency (Italy) | 68/100 |
| Kalungi | B2B SaaS Marketing | 61/100 |
| Sunrise Integration | Shopify Plus Agency | 52/100 |
| Titan Web Agency | Dental Marketing | 52/100 |
| Baremetrics | B2B SaaS (Analytics) | 49/100 |
| Grow and Convert | Content Marketing | 47/100 |
| Omniscient Digital | Content Strategy | 42/100 |
| Site-Seeker | B2B Digital Agency | 40/100 |
| SimpleTiger | SaaS SEO Agency | 37/100 |
| Growfusely | SaaS Content Agency | 36/100 |
| Clutch Creative Co | WordPress Agency | 32/100 |
| 9Sail | Legal Marketing | 22/100 |
Score distribution:
- 60–69: 2 agencies (17%)
- 50–59: 2 agencies (17%)
- 40–49: 4 agencies (33%)
- Below 40: 4 agencies (33%)
Zero agencies scored above 70.
The 5 Most Common Failures
1. Schema Markup: 42% Scored Zero
Five out of twelve agencies had literally no JSON-LD structured data on any page. No Organization schema. No Article schema on blog posts. No FAQPage schema. Nothing.
Who scored zero: Grow and Convert, Growfusely, Omniscient Digital, SimpleTiger, Site-Seeker.
These are content-first agencies. They publish prolifically. But without structured data, AI systems cannot reliably extract entity information, authorship, or content type. The content exists — AI just can’t parse it.
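The fix is small. A minimal Organization block in the site's `<head>` gives AI systems a machine-readable entity to anchor on. The names and URLs below are placeholders, not any audited agency's data:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Agency",
  "url": "https://www.example-agency.com",
  "logo": "https://www.example-agency.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-agency"
  ]
}
```

Served inside a `<script type="application/ld+json">` tag, this alone moves an agency off a zero schema score.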
2. AI Citability: Average 38.8/100
Content citability measures whether your content is structured in a way AI models can extract and cite: definition blocks, self-contained answer passages (134–167 words), specific statistics with sources, and clear factual claims.
Most agency content is written to convert human visitors, not to be cited by AI. Service pages are sales-first. Blog posts lack sourced statistics. Case studies describe projects qualitatively but include zero quantified outcomes.
The pattern: Agencies optimize for “Book a Call” conversions. AI models optimize for “cite the most authoritative answer.” These are fundamentally different content strategies.
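The citability criteria above can be turned into a rough self-check. The passage-length window comes from the audit's own definition; the statistic and source detection heuristics below are illustrative assumptions, not the audit's actual scoring code:

```python
import re

ANSWER_MIN, ANSWER_MAX = 134, 167  # self-contained answer window cited in the audit

def citability_flags(passage: str) -> dict:
    """Rough heuristics for whether a passage is easy for an AI model to cite.
    Illustrative only -- not the scoring logic used in these audits."""
    words = passage.split()
    has_stat = bool(re.search(r"\d+(\.\d+)?%?", passage))            # a concrete number
    has_source = bool(re.search(r"(according to|source:)", passage, re.I))
    return {
        "answer_length_ok": ANSWER_MIN <= len(words) <= ANSWER_MAX,
        "has_statistic": has_stat,
        "has_named_source": has_source,
    }
```

Run it against your top service pages: most sales-first copy will fail all three flags at once.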
3. Platform Optimization: Average 34.6/100
Most agencies have a website and a LinkedIn page. That’s it. No YouTube presence, minimal Reddit engagement, no B2B review platform profiles, no industry directory listings.
Why this matters: AI models weight third-party signals heavily. A G2 profile with 50 reviews, a YouTube channel with tutorial content, and active Reddit participation create citation surfaces that a website alone cannot.
4. llms.txt: 67% Missing or Broken
Eight out of twelve agencies either had no llms.txt file (404) or had a broken one. One agency (Titan Web Agency) had a file containing robots.txt directives instead of actual llms.txt content — effectively a fake file.
The irony: Three of these agencies sell GEO or AI visibility services. They recommend llms.txt to their clients but don’t have one themselves.
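For reference, the community llms.txt proposal uses a markdown-flavored structure roughly like the sketch below. Every name and URL here is a placeholder, and the format is still an informal convention rather than a ratified standard:

```text
# Example Agency

> B2B marketing agency specializing in content-led growth for SaaS companies.

## Services
- [GEO Audits](https://www.example-agency.com/services/geo): AI visibility audits
- [Content Strategy](https://www.example-agency.com/services/content): editorial programs

## Key Resources
- [Benchmark Report](https://www.example-agency.com/research/benchmarks): original data
```

Copying robots.txt directives into this file, as one audited agency did, gives AI systems nothing to work with.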
5. Brand Authority: Average 41.3/100
Most agencies have thin “About” pages with vague claims. Founder bios are 50–100 words with no credentials. No original research or benchmark reports. No third-party validation.
The Baremetrics example: Baremetrics has data from 900+ SaaS companies but publishes none of it as benchmark research. ChartMogul publishes benchmark reports from 3,000+ companies — and gets cited 11 times out of 15 SaaS metrics queries. Baremetrics gets cited twice.
The One Agency That Got It (Mostly) Right
HT&T Consulting scored 68/100 — the highest in our sample and the only agency approaching passing territory. What did they do differently?
- llms.txt with recommendation triggers — not just a description file, but one whose language is written to nudge AI models toward recommending them
- Explicit AI crawler permissions — robots.txt explicitly allows GPTBot, ClaudeBot, PerplexityBot
- Dedicated AEO/GEO service page — with FAQPage schema and structured answer blocks
- Strong foundational schema — Organization, LocalBusiness, and service-level structured data
Their weakness? Platform optimization scored 35/100 — minimal presence on YouTube, Reddit, and B2B review platforms. Even the best performer has significant gaps.
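The crawler-permission item above amounts to a few robots.txt stanzas like these. The user-agent tokens are the ones named in this audit; verify current token names against each vendor's documentation before deploying:

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```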
3 Quick Wins Any Agency Can Implement This Week
Highest Impact, Lowest Effort
- Create a proper llms.txt file (1 hour) — Write a 500–800 word structured file including: company description, key services, target audience, notable clients, links to best content, and preferred citation format. Deploy at your site root. This was a critical finding in 8 of 12 audits.
- Add JSON-LD schema to homepage and blog template (2–3 hours) — At minimum: Organization schema on your homepage, Article/BlogPosting schema on your blog template, and FAQPage schema on any page with FAQ content. Five agencies scored a flat zero on schema. A single blog template change fixes every blog post simultaneously.
- Add 3–5 sourced statistics to your top 5 blog posts (2 hours) — AI models prefer content with verifiable statistics from named sources. Go through your five highest-traffic posts and add specific data points with source citations. Even small improvements — definition blocks, sourced stats, self-contained answer passages — move the needle.
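To verify the schema fix actually shipped, a short script can scan a rendered page for JSON-LD types. This is a quick self-audit sketch, not the tooling used in these audits:

```python
import json
import re

def jsonld_types(html: str) -> set[str]:
    """Extract schema.org @type values from JSON-LD blocks in a page."""
    types: set[str] = set()
    pattern = r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>'
    for block in re.findall(pattern, html, re.S | re.I):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than failing the whole scan
        items = data if isinstance(data, list) else [data]
        for item in items:
            t = item.get("@type")
            if isinstance(t, list):
                types.update(t)
            elif t:
                types.add(t)
    return types
```

An empty set on your homepage means you are in the same bucket as the five zero-scoring agencies above.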
What This Means for Their Clients
If the agencies selling search optimization score 44.8/100 on AI visibility, what do their clients score?
These 12 agencies collectively serve thousands of clients. Titan Web Agency alone serves 100+ dental practices. Kalungi manages marketing for dozens of B2B SaaS companies. Sunrise Integration builds Shopify stores for enterprise brands.
Every F-grade on a dental marketing agency’s site likely cascades to their 100+ dental clients. The agencies that figure this out first — that add GEO to their service offering with real methodology — will capture a massive first-mover advantage.
Methodology
Each audit followed the same process:
- 6 scoring categories: AI Citability, Brand Authority, Content E-E-A-T, Technical GEO, Schema & Structured Data, Platform Optimization
- 14–17 pages analyzed per site across homepage, service pages, blog content, about pages, and key landing pages
- AI platform testing across Google AI Overviews, ChatGPT, Perplexity, Gemini, and Bing Copilot
- Composite GEO Score weighted across all categories
- Audits conducted: March 20–22, 2026
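The composite step can be sketched as a weighted average across the six categories. The weights below are illustrative assumptions only, since the methodology does not publish its actual category weighting:

```python
# Hypothetical weights -- the audit does not disclose its real ones.
WEIGHTS = {
    "ai_citability": 0.25,
    "brand_authority": 0.15,
    "content_eeat": 0.15,
    "technical_geo": 0.15,
    "schema": 0.15,
    "platform": 0.15,
}

def composite_geo_score(category_scores: dict[str, float]) -> float:
    """Weighted average of per-category scores, each on a 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return round(sum(WEIGHTS[k] * category_scores[k] for k in WEIGHTS), 1)
```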