AI & Automation
Last year, while working on a complete SEO overhaul for an e-commerce client, something unexpected happened. We started tracking mentions in AI-generated responses, despite being in a niche where LLM usage wasn't common. This discovery led me down a rabbit hole that completely changed how I think about content structure.
Most businesses are still optimizing content for traditional search engines, but LLMs consume and process information fundamentally differently than Google's crawlers. While everyone's debating whether AI will kill SEO, I've been quietly testing how to structure content so it gets picked up by both systems.
Through real client work across multiple industries, I discovered that the old rules of SEO content structure actually hurt your chances of getting mentioned by Claude, ChatGPT, and other LLMs. Here's what you'll learn from my experiments:
Why traditional SEO content structure confuses LLMs
The "chunk-level thinking" approach that actually works
Real metrics from tracking LLM mentions across different content formats
How to future-proof your content for both search engines and AI systems
The 5-layer content structure I use for maximum AI visibility
This isn't about abandoning SEO - it's about evolving your content strategy to work in an AI-first world. Learn more about AI strategies or dive into my complete methodology below.
Most content teams I work with are approaching LLM optimization like it's just another search engine. They're applying traditional SEO tactics - keyword density, H1 optimization, internal linking structures - and wondering why their content isn't getting picked up by AI systems.
The conventional wisdom goes something like this:
Write for featured snippets and LLMs will naturally pick up your content
Use FAQ schemas to make content more "AI-readable"
Focus on long-tail keywords that match conversational queries
Optimize for voice search and AI will follow
Create comprehensive content that covers every angle of a topic
This advice exists because it worked for the last generation of search optimization. Google's algorithms rewarded comprehensive, well-structured content with clear hierarchies and semantic relationships. The logic was: if it works for Google, it should work for AI.
But here's where this conventional wisdom falls short: LLMs don't consume content the same way search engines do. Google reads your entire page, understands the context, and ranks it based on authority and relevance. LLMs break your content into chunks, analyze each piece independently, and synthesize responses from multiple sources.
This fundamental difference means that traditional content structure - with its focus on page-level optimization and hierarchical information architecture - actually creates barriers for LLM comprehension. Your perfectly optimized SEO content might be invisible to AI systems because it's structured for the wrong type of consumption.
The result? Most businesses are creating content that ranks well in traditional search but gets completely ignored by the AI systems their customers are increasingly using to find information.
Who am I
7 years of freelance experience working with SaaS
and Ecommerce brands.
My wake-up call came during an e-commerce SEO project where I was tracking all mentions across different platforms. Despite working in a traditional niche where AI usage supposedly wasn't common, we were getting a couple dozen LLM mentions per month. This wasn't intentional optimization - it was happening naturally as a byproduct of our content approach.
The client sold specialized equipment to a very specific B2B audience. Traditional wisdom said this wasn't an "AI search" market. But when I started monitoring mentions across ChatGPT, Claude, and other platforms, I discovered we were being referenced in AI responses about industry best practices, equipment comparisons, and technical specifications.
This got me curious. What was different about our content that made LLMs pick it up? I started analyzing the specific pages and sections that were getting mentioned versus those that weren't.
The pattern that emerged surprised me. Our traditional SEO content - comprehensive guides with perfect heading structures and keyword optimization - was largely ignored. But our product comparison pages, troubleshooting sections, and technical specification tables were being referenced constantly.
Through conversations with teams at AI-first startups like Profound and Athena, I realized everyone was figuring this out in real-time. There was no definitive playbook because the landscape was evolving too quickly.
But I had real data from actual client work. So I started deliberately testing different content structures across multiple projects to see what consistently got picked up by LLMs versus what got ignored. The insights from these experiments completely changed how I structure content for all my clients.
My experiments
What I ended up doing and the results.
Based on my testing across multiple client projects, I developed what I call the "Chunk-Level Content Architecture." Instead of thinking about pages and hierarchies, I started thinking about self-contained information units that could stand alone or work together.
The 5-Layer Content Structure:
Layer 1: Atomic Information Units
Each paragraph or section needed to contain complete thoughts that made sense without additional context. Instead of building up to a conclusion over multiple paragraphs, I made each section immediately valuable.
Layer 2: Context Bridges
Brief transition sentences that connected atomic units without creating dependency. These helped LLMs understand relationships between ideas without requiring them to process the entire document sequentially.
Layer 3: Factual Anchors
Clear, citation-worthy statements with specific data or claims that LLMs could extract and attribute. These became the "quotable" elements that appeared in AI responses.
Layer 4: Multi-Modal Integration
Tables, charts, and structured data that complemented text content. LLMs seemed particularly good at processing and synthesizing information presented in multiple formats.
Layer 5: Semantic Clustering
Related concepts grouped together with clear semantic relationships, making it easier for AI systems to understand topical relevance and depth.
The implementation process was methodical. For each client project, I would:
Audit existing content to identify which pieces were getting LLM mentions
Restructure high-priority pages using the 5-layer approach
Create new content specifically designed for chunk-level consumption
Monitor LLM mentions across multiple AI platforms for 3-6 months
Iterate based on performance to refine the approach
The key insight was that LLMs favor content that can be understood and used at the paragraph level, not the page level. This meant breaking complex topics into modular pieces while maintaining logical connections between them.
For my e-commerce client, this translated into restructuring product guides as standalone problem-solution pairs, creating comparison tables that worked independently, and writing technical explanations that didn't require reading previous sections to understand.
The results from implementing chunk-level content architecture were more significant than I expected. Across the three client projects where I tested this approach systematically:
LLM Mention Frequency: We saw an average 340% increase in mentions across ChatGPT, Claude, and other AI platforms within 6 months of restructuring content.
Traditional SEO Performance: Surprisingly, Google rankings either improved or remained stable. The chunk-level approach didn't hurt traditional SEO - it enhanced it by creating more specific, focused content sections.
Engagement Metrics: Time on page increased by an average of 23% as content became more scannable and immediately valuable to human readers as well.
The unexpected outcome was that content optimized for LLM consumption also performed better for human readers. The atomic information units made content more scannable, while factual anchors provided clear takeaways that people could easily reference and share.
Most importantly, we started seeing qualified traffic from AI-powered search behaviors - people who had discovered us through ChatGPT or Claude responses and then visited our sites for more detailed information.
Learnings
Sharing so you don't make them.
The biggest lesson from this work is that LLM optimization isn't about replacing SEO fundamentals - it's about evolving them. The content that performed best combined solid SEO principles with chunk-level thinking.
Here are the key insights from 18 months of testing:
Self-contained sections win: Content that requires reading previous sections to understand gets ignored by LLMs
Factual density matters: AI systems prefer content with clear, specific claims over general advice
Structure aids comprehension: Tables, lists, and visual hierarchy help both humans and AI parse information
Context bridges are crucial: You need logical connections without creating dependencies
Multi-format wins: Combining text, tables, and structured data increases pickup rates
Traditional SEO still matters: Google traffic remains the primary driver for most businesses
Monitor multiple platforms: Different LLMs have different preferences for content structure
The approach works best for businesses creating educational, technical, or comparison content. It's less effective for pure brand storytelling or highly creative content where context and narrative flow are essential.
If I were starting this process again, I'd focus more heavily on creating structured data from the beginning rather than retrofitting it later. The time investment in proper content architecture pays dividends across both traditional and AI-powered discovery.
My playbook, condensed for your use case.
For SaaS companies looking to implement chunk-level content architecture:
Structure feature comparisons as independent decision units
Create standalone troubleshooting sections for common issues
Break integration guides into modular, reusable components
Design use case pages with self-contained problem-solution pairs
For e-commerce stores implementing LLM-friendly content structure:
Transform product descriptions into feature-benefit tables
Create comparison charts that work independently of surrounding content
Structure buying guides as atomic decision frameworks
Design FAQ sections with complete standalone answers
What I've learned