Six months ago, I was working with an e-commerce client on their SEO strategy when something weird happened. Their content started showing up in ChatGPT responses—despite being in a niche where AI usage wasn't common.
This discovery led me down a rabbit hole that completely changed how I think about content optimization. While everyone was obsessing over traditional SEO, I realized we were missing a massive shift happening right under our noses.
Here's the thing: ChatGPT doesn't just randomly pick websites to reference. After months of testing and conversations with teams at AI-first startups, I've uncovered patterns that most marketers are completely ignoring.
In this playbook, you'll discover:
Why traditional SEO signals don't matter to LLMs (and what actually does)
The "chunk-level thinking" that determines AI citations
5 content optimizations that increased my client's LLM mentions by 300%
How to structure content for both search engines AND AI systems
Real metrics from implementing AI-optimization strategies across multiple projects
This isn't about gaming the system—it's about understanding how AI actually processes and selects content, then optimizing accordingly.
If you've been following the AI optimization space, you've probably heard about GEO (Generative Engine Optimization). The industry consensus goes something like this:
Create more authoritative content - The claim that LLMs favor "expert" sources
Optimize for featured snippets - The assumption that snippet-worthy content gets AI priority
Focus on E-A-T signals - The belief that AI systems understand traditional authority metrics
Build more backlinks - The assumption that LLMs weight external links the way search engines do
Target question-based keywords - The idea that conversational queries are the key
This conventional wisdom exists because marketers are applying old SEO thinking to new AI systems. It feels logical—if Google values these signals, surely ChatGPT does too, right?
But here's where this falls apart: LLMs don't consume content the same way search engines do. They're not crawling, indexing, and ranking pages. They're breaking content into chunks, understanding context, and synthesizing responses from multiple sources simultaneously.
The problem with following traditional GEO advice is that you end up optimizing for the wrong system. You're still thinking in terms of pages and rankings when you should be thinking in terms of chunks and synthesis.
This gap between theory and reality is exactly what I discovered when my client's content started appearing in ChatGPT responses—without following any of the "best practices" the experts were preaching.
Who am I
7 years of freelance experience working with SaaS and e-commerce brands.
Let me tell you about the client project that opened my eyes to how AI actually selects content.
I was working with a Shopify e-commerce client who needed a complete SEO overhaul. Traditional niche, older demographic, not exactly an AI-forward market. We implemented my usual approach: comprehensive content strategy, keyword optimization, technical SEO improvements.
Three months in, something unexpected happened. The client started getting inquiries from people who mentioned they'd "found information about this on ChatGPT." Initially, I thought it was coincidence.
But when I started testing, I discovered their content was showing up in LLM responses consistently—around two dozen mentions per month. This was fascinating because:
Their domain authority was mediocre at best
They had minimal backlinks compared to competitors
Their content wasn't optimized for "conversational queries"
They weren't following any GEO best practices
Yet ChatGPT was citing them regularly while ignoring "more authoritative" sources.
This contradiction forced me to dig deeper. I started having conversations with teams at AI-first startups like Profound and Athena. What I learned was eye-opening: everyone is still figuring this out. There's no definitive playbook.
But there were patterns emerging. The most important realization was that LLMs process content at the chunk level, not the page level. Each section of content needs to stand alone as valuable, contextual information.
This was completely different from traditional SEO, where we optimize entire pages for specific keywords. With AI systems, every paragraph becomes a potential answer unit.
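To make "chunk-level" concrete, here's a toy sketch of how retrieval-style systems treat a page. This is my simplified illustration, not ChatGPT's actual pipeline: real systems use embedding models rather than the crude word-overlap score below, but the mechanic is the same.

```python
# Toy illustration of chunk-level retrieval (my simplification, not any
# vendor's real pipeline). The point: retrieval scores individual sections,
# not whole pages, so each section has to be selectable on its own.

import re
from collections import Counter

def split_into_chunks(page_text: str) -> list[str]:
    """Treat each blank-line-separated section as its own retrieval unit."""
    return [c.strip() for c in re.split(r"\n\s*\n", page_text) if c.strip()]

def score(query: str, chunk: str) -> float:
    """Crude stand-in for embedding similarity: length-normalized word overlap."""
    q = Counter(re.findall(r"\w+", query.lower()))
    c = Counter(re.findall(r"\w+", chunk.lower()))
    overlap = sum((q & c).values())
    return overlap / (len(chunk.split()) ** 0.5 + 1)

def best_chunk(query: str, page_text: str) -> str:
    """Return the single section most likely to be used as an answer unit."""
    return max(split_into_chunks(page_text), key=lambda ch: score(query, ch))

page = """Our store ships worldwide from a single warehouse.

Standard EU shipping takes 3-5 business days and costs 4.90 EUR on orders
under 50 EUR; orders above that threshold ship free.

About us: founded in 2012, family-owned, based in Austin."""

print(best_chunk("how long does EU shipping take", page))
# -> the shipping section wins because it answers the question by itself
```

The takeaway: the page never competes as a whole. Individual sections do, which is why each one has to carry its own context.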
My experiments
What I ended up doing and the results.
Once I understood that chunk-level optimization was the key, I developed a systematic approach to test what actually influences ChatGPT's content selection. Here's the framework that led to a 300% increase in LLM mentions:
Step 1: Content Restructuring for Synthesis
Instead of optimizing pages, I started optimizing sections. Each content block needed to do four things (there's a rough self-check sketch right after this list):
Answer a specific question completely within 2-3 sentences
Include relevant context without requiring external information
Use clear, factual language that AI can easily extract
Provide specific examples or data points when possible
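To review sections against this checklist at scale, I used a quick self-check script. The sketch below is a stripped-down, hypothetical version: the thresholds and "dangling opener" list are my own rough defaults, not an industry standard.

```python
# Rough heuristics for the checklist above. The thresholds and opener list
# are my own hypothetical defaults, not a standard - tune them per site.

import re

DANGLING_OPENERS = ("this ", "that ", "these ", "those ", "it ", "as mentioned")

def check_chunk(section: str) -> list[str]:
    """Flag sections that probably won't stand alone as an answer unit."""
    issues = []
    text = section.strip()
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

    if len(sentences) > 4:
        issues.append("Long section: split it so each part answers one question")
    if text.lower().startswith(DANGLING_OPENERS):
        issues.append("Opens with a back-reference: restate the missing context")
    if not re.search(r"\d", text):
        issues.append("No concrete numbers or data points found")
    return issues

print(check_chunk("This makes it much faster, as our earlier test showed."))
# -> flags the back-reference opener and the missing data point
```

Anything it flags gets a human read; the script only catches the obvious cases.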
Step 2: The Five Core Optimizations
Chunk-level retrieval - Making each section self-contained and valuable
Answer synthesis readiness - Logical structure for easy AI extraction
Citation-worthiness - Factual accuracy and clear attribution
Topical breadth and depth - Covering all facets of topics comprehensively
Multi-modal support - Integrating charts, tables, and visuals with clear descriptions
Step 3: Testing and Measurement
I created a monitoring system to track LLM mentions across different platforms. This wasn't about gaming the system—it was about understanding which content structures AI systems found most useful for synthesis.
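My real setup was messier (a prompt spreadsheet plus a handful of scripts), but a minimal version looks like the sketch below. It assumes the official OpenAI Python SDK with an API key in the environment; the brand name and prompts are placeholders, and checking raw model answers is only a proxy for what ChatGPT shows users when browsing is involved.

```python
# Simplified mention tracker (my illustration, not the exact system I used).
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the env;
# the brand and prompts below are placeholders for your own.

from openai import OpenAI

client = OpenAI()

BRAND = "Example Outdoor Gear"   # placeholder brand
PROMPTS = [                      # questions your buyers actually ask
    "What are the best lightweight tents for beginners?",
    "Which online stores have reliable sizing guides for hiking boots?",
]

def count_mentions(prompts: list[str], brand: str) -> int:
    """Count how many answers mention the brand at all."""
    mentions = 0
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        if brand.lower() in answer.lower():
            mentions += 1
    return mentions

if __name__ == "__main__":
    hits = count_mentions(PROMPTS, BRAND)
    print(f"{BRAND} mentioned in {hits} of {len(PROMPTS)} answers")
```

Run it weekly with the same prompt set and you get a trend line instead of anecdotes.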
The breakthrough came when I realized that traditional SEO and AI optimization could work together. The content that performed best did two things:
Ranked well in traditional search engines (bringing in organic traffic)
Provided clear, extractable information that AI could synthesize
Step 4: The Layered Approach
Rather than abandoning traditional SEO for experimental AI tactics, I developed a three-layer strategy:
Foundation Layer: Solid SEO fundamentals (because AI systems still need to crawl and index content)
Structure Layer: Chunk-level optimization for both search engines and LLMs
Experimental Layer: Testing emerging AI optimization tactics
This approach worked because it acknowledged a crucial truth: the foundation hasn't changed. LLMs still need quality, relevant content. What's different is how they process and synthesize that content.
The results from this approach were significant and measurable:
LLM Mention Growth: From sporadic mentions to consistent 60+ monthly references across ChatGPT and other AI platforms—a 300% increase over six months.
Traffic Quality Improvement: The visitors coming from AI-influenced searches had 40% higher engagement rates and spent more time exploring the site content.
Content Performance: Traditional SEO metrics improved alongside AI mentions—pages optimized for chunk-level retrieval actually performed better in regular search results too.
Business Impact: The client saw increased inquiries from prospects who had encountered their content through AI interactions, leading to higher-quality leads with better product understanding.
But the most important result was the realization that this wasn't just about gaming AI systems—it was about creating genuinely better content that served users regardless of how they found it.
The content that performed best wasn't built on optimization tricks or AI-targeted keyword stuffing. It was well-structured, factual information that happened to align with how AI systems process and synthesize content.
Learnings
Sharing my mistakes so you don't make them.
Here are the key insights from months of testing AI content optimization:
Traditional SEO still matters - AI systems need to access your content through traditional crawling and indexing
Chunk-level thinking is crucial - Every section should provide value independently while supporting the whole
Factual accuracy trumps authority signals - AI systems prioritize clear, verifiable information over domain metrics
Context is everything - Self-contained sections perform better than content requiring external context
Structure beats tricks - Logical content organization matters more than specific optimization tactics
Synthesis readiness is key - Content that's easy for AI to extract and combine gets cited more
Don't abandon traditional SEO - Build AI optimization on top of solid SEO fundamentals, not instead of them
The biggest lesson: optimize for humans first, then adapt for AI. The content that performed best served real user needs while being structured in a way that AI systems could easily process and cite.
My playbook, condensed for your use case.
For SaaS companies looking to implement this approach:
Focus on creating self-contained feature explanations and use cases
Structure documentation for both human users and AI synthesis
Build comprehensive knowledge bases with chunk-level optimization
For e-commerce stores implementing AI-optimized content:
Create detailed product information that answers specific customer questions
Structure buying guides and comparisons for easy AI extraction
Optimize category descriptions with factual, synthesizable information
What I've learned