A B2B SaaS company with solid Google rankings and zero AI citations used the 5-step GEO lifecycle to go from "Not Yet Visible" to "Emerging" in 9 weeks. Here is what they did at each step -- and the timeline that got them there.
A note on this case study
This case study is illustrative. The company, metrics, and timeline are based on patterns we see across our users -- not a single named client. We built it to show the GEO lifecycle in action with realistic numbers, timelines, and decisions. The process is real. The specifics are representative.
The Starting Point
Meet Claro Analytics -- a fictional B2B SaaS company that sells customer feedback analysis software to product teams at mid-market companies. 45 employees, Series A funded, 6 years in business. Their traditional SEO was working. They ranked on page 1 for "customer feedback analytics tool" and several related long-tail keywords.
The problem was not Google. The problem was what happened when a VP of Product asked ChatGPT or Perplexity, "What are the best customer feedback analytics tools for SaaS product teams?"
Claro was not in the answer. Not once. Not on any variation of the query.
Google Rankings
Page 1 for 12 keywords
AI Citations
0 out of 8 test queries
llms.txt File
Did not exist
AI Readiness Score
22 out of 100
Their AI Readiness Check revealed the gaps: no llms.txt file, no structured data beyond basic organization schema, robots.txt blocking two of the eight major AI crawlers, and page descriptions that were marketing copy instead of factual summaries AI could parse. Strong traditional SEO. Zero AI readiness.
The 5-Step GEO Lifecycle in Action
Claro's marketing lead ran across llmstxt.studio after reading about generative engine optimization. Instead of hiring an agency or buying a $400/mo enterprise tool, they worked through the 5-step lifecycle we recommend: generate, deploy, monitor, enhance, and check citations.
Here is exactly what happened at each step.
Generate: Build the llms.txt File
Week 1 -- Day 1
Claro's marketing lead entered their domain into llmstxt.studio's llms.txt Generator. The tool parsed their sitemap (87 pages), identified the most important URLs, and generated a spec-compliant llms.txt file with proper markdown structure -- heading, blockquote, sections, and links.
The initial file was functional but generic. Each entry was just the page title and URL pulled from the sitemap, with no description of what the page actually covered or why it mattered. The file looked like a site index, not an AI profile.
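For reference, that "before" state followed the spec's required shape -- H1 name, blockquote summary, H2 sections, link lists. A minimal sketch with illustrative URLs, not Claro's actual 87 entries:

```markdown
# Claro Analytics

> Customer feedback analysis software for product teams at mid-market SaaS companies.

## Features

- [Sentiment Analysis](https://claro.io/features/sentiment)
- [Feedback Dashboards](https://claro.io/features/dashboards)

## Docs

- [Getting Started](https://claro.io/docs/getting-started)
```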
Time spent: 4 minutes.
Deploy: Put the File Live
Week 1 -- Day 2
Their developer uploaded the llms.txt file to the website root -- claro.io/llms.txt. The upload took 5 minutes in their CI pipeline. They also updated robots.txt to explicitly allow all eight major AI crawlers that the Crawler Access Analysis had flagged.
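The robots.txt change amounts to a few explicit allow rules. A sketch, assuming crawlers like GPTBot, ClaudeBot, PerplexityBot, and Google-Extended were among the eight flagged (the exact set is not listed here):

```
# Explicitly allow AI crawlers (illustrative subset, not the full list of eight)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```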
llmstxt.studio's Deploy Status check confirmed the file was accessible. The Quality Score came back at 54 out of 100 -- passable structure, but the descriptions needed work.
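You can reproduce the accessibility half of that check yourself with a single HEAD request:

```
curl -I https://claro.io/llms.txt
# Expect an HTTP 200 and a text content type (text/plain or text/markdown)
```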
Time spent: 15 minutes (including the robots.txt changes).
Monitor: Watch for Staleness
Week 1 onward
They activated Sitemap Monitoring on the Pro plan. This runs daily, comparing the current sitemap against the snapshot taken when the llms.txt was last generated. If Claro published a new case study, added a pricing page, or restructured their docs, monitoring would flag that the llms.txt was out of date.
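Conceptually, the staleness check is a set difference between the snapshot and the live sitemap. A minimal Python sketch of that idea -- not llmstxt.studio's actual implementation; the snapshot file and sitemap location are assumptions:

```python
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://claro.io/sitemap.xml"   # assumed sitemap location
SNAPSHOT_FILE = "sitemap_snapshot.txt"         # URLs saved when llms.txt was last generated

def fetch_sitemap_urls(url: str) -> set[str]:
    """Download the sitemap and collect every <loc> entry."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return {loc.text.strip() for loc in tree.getroot().iterfind("sm:url/sm:loc", ns)}

def check_staleness() -> None:
    with open(SNAPSHOT_FILE) as f:
        snapshot = {line.strip() for line in f if line.strip()}
    current = fetch_sitemap_urls(SITEMAP_URL)

    added, removed = current - snapshot, snapshot - current
    if added or removed:
        # The real product raises an alert; this sketch just reports.
        print(f"llms.txt is stale: {len(added)} new URLs, {len(removed)} removed")
    else:
        print("llms.txt is up to date")

if __name__ == "__main__":
    check_staleness()
```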
During the 9-week period, monitoring fired twice: once when they added a new integrations page (week 4) and once when they published a benchmark report (week 7). Each time, they regenerated the file and redeployed. Total effort per update: 10 minutes.
Time spent: About 20 minutes total across 9 weeks.
Enhance: Add AI-Written Descriptions
Week 2
This was the biggest single improvement. Claro ran AI Enhancement on their llms.txt file. The tool visited each linked page, read the content, and wrote a short, factual description summarizing what the page covers and why it matters.
Before enhancement, a typical entry looked like this:
- [Sentiment Analysis](https://claro.io/features/sentiment)
After enhancement:
- [Sentiment Analysis](https://claro.io/features/sentiment): NLP-powered sentiment classification across support tickets, NPS responses, and app store reviews. Supports 14 languages with 91% accuracy on domain-specific terminology.
The difference is night and day. The enhanced version tells AI exactly what the feature does, what data sources it covers, and what makes it competitive. AI now has enough context to cite Claro when someone asks about sentiment analysis tools for product teams.
After enhancement and redeployment, the Quality Score jumped from 54 to 83 out of 100.
Time spent: 12 minutes (mostly reviewing and tweaking a few descriptions).
Check: Measure AI Citations
Weeks 3, 5, 7, and 9
This is where the measurable results begin. Claro ran an AI Citation Check every two weeks. The tool uses Smart Query Generation to create 8 queries in 3 tiers -- Brand Discovery, Topic Authority, and Competitive Landscape -- tailored to Claro's llms.txt content.
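To make the tiers concrete, queries for Claro might look like this (illustrative examples, not the tool's actual output):

- Brand Discovery: "What is Claro Analytics and what does it do?"
- Topic Authority: "What are the best customer feedback analytics tools for SaaS product teams?"
- Competitive Landscape: "How does Claro compare to other customer feedback analytics platforms?"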
Each check queried AI search engines and reported whether claro.io appeared in the citations. Equally important: it showed exactly which competitors were getting cited instead.
Time spent: 3 minutes per check (click the button, read the results).
The Results: Week by Week
Here is how Claro's AI visibility changed across the 9-week period. Each row represents an AI Citation Check with 8 queries across 3 tiers.
| Week | Cited In | Status | What Happened |
|---|---|---|---|
| Week 3 | 0 of 8 | Not Yet Visible | Baseline check. File deployed 2 weeks prior. AI had not yet indexed the llms.txt content. |
| Week 5 | 1 of 8 | Not Yet Visible | First citation -- a Brand Discovery query. AI mentioned Claro by name in a list of feedback tools. Still below Emerging threshold. |
| Week 7 | 2 of 8 | Emerging | Cited in 1 Brand Discovery and 1 Topic Authority query. AI described Claro's NLP sentiment feature specifically. |
| Week 9 | 3 of 8 | Emerging | Cited in 2 Topic Authority and 1 Brand Discovery. Unique competitor domains cited dropped from 6 to 4. |
The progression is typical. Weeks 1 through 4 feel like nothing is happening. The file is deployed but AI has not re-crawled or re-indexed the content yet. Then citations start appearing -- usually on Brand Discovery queries first (where someone asks about your company by name or close category) before expanding to Topic Authority queries (where someone asks a general question and AI decides to cite you).
The Competitor Intelligence That Changed Their Strategy
The citation checks did not just tell Claro whether they were cited. Every check returned Competitor Intelligence -- the specific domains AI cited instead. This data reshaped their content strategy.
What they saw
Two competitors had llms.txt files with detailed case study sections
What they did
Claro added 3 case studies to their llms.txt with quantified results -- "Reduced ticket resolution time by 34% for a 200-person SaaS team."
What they saw
The top-cited competitor included integration documentation in their llms.txt
What they did
Claro added an Integrations section listing their 40+ connections with descriptions of each major integration.
What they saw
AI consistently cited a competitor's benchmark report in Topic Authority queries
What they did
Claro published their own benchmark report on customer feedback response times and included it in their llms.txt Resources section.
This is the part most businesses miss. Even when your citation count is zero, the competitor data is immediately actionable. You can see exactly what AI considers authoritative in your space and reverse-engineer it.
Total Investment: Time and Money
One of the reasons we built this case study is to show that generative engine optimization does not require an enterprise budget or a dedicated team. Here is the total investment across 9 weeks:
Under 2 hours
Total Time
Spread across 9 weeks. Most was the initial setup.
$19/mo (Pro)
Monthly Cost
For monitoring, citation checks, and competitor data.
1 marketer + 1 dev
People Involved
Developer only needed for initial file deployment.
Compare that to enterprise GEO tools that charge $99 to $579 per month and require dedicated analyst time. The GEO lifecycle is not complicated. It is a sequence of small, well-timed actions that compound over weeks.
What Made the Difference
Looking at the full 9-week arc, three things mattered most:
AI Enhancement was the single biggest lever
The Quality Score jump from 54 to 83 happened entirely because of enhanced descriptions. Generic page titles give AI nothing to work with. Factual, specific descriptions give AI a reason to cite you. If you do one thing, do this.
Competitor intelligence drove strategic content decisions
Claro did not guess what content to create. They looked at what AI was already citing in their space and built to match. The benchmark report they published in week 7 was directly inspired by seeing a competitor's report cited in Topic Authority queries.
Consistency mattered more than perfection
Claro did not over-optimize. They generated, deployed, enhanced, and then ran citation checks every two weeks. When monitoring flagged changes, they regenerated. No heroics. Just the lifecycle, executed consistently.
What Comes Next: Emerging to Growing
Reaching Emerging is the hardest part. Going from zero to non-zero citations means AI has recognized your content as citable. The next milestone -- Growing -- requires expanding citation coverage from a few query types to most of them.
For Claro, the roadmap to Growing includes:
- Expanding the llms.txt blockquote to include more specific ICP (ideal customer profile) details and competitive differentiators, so AI can match Claro to a wider range of queries.
- Publishing comparison content -- Claro vs. specific competitors -- to give AI concrete source material for head-to-head evaluation queries.
- Increasing citation check frequency to track whether new content additions actually move citation rates; the Premium plan supports checks at higher volume.
- Adding structured data (FAQ schema, HowTo schema) to their key pages, reinforcing the signals their llms.txt already provides. A sketch follows this list.
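On that last point, FAQ markup typically ships as a JSON-LD block in the page head. A minimal sketch -- the question and answer text are illustrative, built from facts stated earlier in this case study:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How many languages does Claro's sentiment analysis support?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Claro classifies sentiment in 14 languages across support tickets, NPS responses, and app store reviews."
    }
  }]
}
</script>
```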
The lifecycle does not end at Emerging. It repeats: generate, deploy, monitor, enhance, check. Each cycle refines the AI profile and expands the query coverage.
Start Your Own GEO Case Study
Claro's story is based on patterns we see across our users. The specifics vary -- different industries, different timelines, different competitor landscapes -- but the lifecycle is the same. Generate a spec-compliant llms.txt. Deploy it. Monitor for changes. Enhance with AI descriptions. Check whether AI actually cites you.
The first step takes 30 seconds. Run a free AI Readiness Check and see where you stand.
See where you stand
Run a free AI Readiness Check on your website. 30 seconds. No signup.
Check your AI readiness →