What This Measures

Category Benchmarks aggregate AI visibility data across all ILLIXIS tenants in your business category. This gives you industry-standard metrics to compare against.

Metrics Tracked

Your Mentions vs. Category Average: How many AI citations you're getting compared to the typical company in your category.

Category Median: The middle performer. Half the companies in your category are above this number, half below.

Top Performer Mentions: The industry leader's citation count. This shows what's possible.

Platform Distribution: How Google AI and ChatGPT mentions compare to category averages. Helps identify platform-specific gaps.

Your Position vs. Industry: Percentage of top performer's mentions. Example: 45% means you're 45% of the way to industry-leading performance.
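The position metric is a simple ratio. As a minimal sketch (the function name and guard against a zero top performer are assumptions, not the actual implementation):

```python
def position_vs_industry(your_mentions: int, top_performer: int) -> float:
    """Your mentions as a percentage of the top performer's count."""
    if top_performer <= 0:
        return 0.0  # no benchmark to compare against
    return round(your_mentions / top_performer * 100, 1)

# 9 mentions against a leader with 20 puts you 45% of the way there.
print(position_vs_industry(9, 20))  # 45.0
```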

How Benchmarks Work

Data Collection

Benchmarks are calculated automatically on the 1st of each month at 2:00 AM UTC.

The process:

  1. Groups all active tenants by business category
  2. Pulls the latest AI visibility data from each tenant's history
  3. Calculates aggregate metrics (average, median, max)
  4. Stores results for dashboard display
  5. Updates existing benchmarks or creates new ones

Sample size requirement: A category needs at least 3 tenants with visibility data to generate benchmarks. Categories with fewer than 3 active tenants show "No benchmark data available."

Automation Schedule

Category benchmark calculation runs automatically on a monthly schedule to keep your industry comparisons current.

When It Runs

Schedule: Monthly on the 1st at 2:00 AM UTC

The automated task compares your metrics to industry averages across several dimensions:

  • Content velocity: How quickly you publish compared to category peers
  • Ranking distribution: Your position spread across keyword rankings vs. competitors
  • Traffic trends: Month-over-month traffic patterns relative to industry norms

Accessing Benchmark Reports

After the monthly calculation completes, benchmark reports are available in the Analytics dashboard. Navigate to Analytics > AI Visibility to see the updated Industry Comparison card with fresh data.

Note: Because the calculation runs overnight (2:00 AM UTC), updated benchmarks typically appear by the time you check in the morning.

Your Category

Your category is set in Settings > General > Business Category. Examples:

  • Fashion & Lifestyle
  • Technology & Software
  • Health & Wellness
  • Professional Services
  • E-commerce

If your category isn't set, you won't see benchmark comparisons.

Viewing Your Benchmarks

Navigate to Analytics > AI Visibility and scroll to the Industry Comparison card.

What You'll See

4 Key Numbers:

  • Your Mentions: Total AI citations (Google AI + ChatGPT)
  • Category Average: Mean citations across all companies in your category
  • Category Median: Middle performer
  • Top Performer: Industry leader's citation count

Status Badge:

  • Green "Above Average" = Outperforming most competitors
  • Yellow "Below Average" = Room for improvement
  • Gray "Average" = Right at the mean

Progress Bar: Visual representation of your position vs. top performer. Shows percentage completion toward industry-leading performance.

Platform Breakdown: Two bars showing:

  • Google AI: Your mentions vs. category average for Google AI Overview
  • ChatGPT: Your mentions vs. category average for ChatGPT

Insight Message: Context-specific guidance based on your performance:

  • Above average: "Great work! You're outperforming [X] other companies in [category]."
  • Above median: "Good progress! You're above the median. Focus on creating more AI-optimized content to reach the top."
  • Below median: "Room to grow! Create more authoritative content with data, statistics, and expert insights."

Interpreting Your Position

Above Average

You're doing better than most companies in your category. This means:

  • Your content strategy is working
  • AI platforms trust your content
  • You're likely getting organic traffic from AI citations

Next step: Analyze what's working. Which content types get cited most? Double down on those patterns.

Average

You're in the middle of the pack. Half the companies in your category perform better, half worse.

Next step: Study the top performers. Run competitor comparisons to see what they're doing differently.

Below Average

Your AI visibility is lower than most competitors in your category.

Next step: Create more AI-optimized content. Focus on:

  • Data-driven articles with statistics
  • Comprehensive guides (2,000+ words)
  • Clear structure (H2s, numbered lists, tables)
  • Author credentials and expertise signals

How Benchmarks Are Calculated

Average Mentions

Sum of all tenant mentions in category ÷ number of tenants.

Example: 10 tenants with mention counts [5, 12, 8, 20, 15, 6, 10, 18, 9, 14] = average of 11.7 mentions.

Median Mentions

Middle value when all tenant mentions are sorted; with an even number of tenants, it's the average of the two middle values. More resistant to outliers than the average.

Using the same example: sorted [5, 6, 8, 9, 10, 12, 14, 15, 18, 20], the two middle values are 10 and 12, so the median is 11.

Top Performer

Highest mention count in the category. This is the ceiling—the best anyone in your industry has achieved.

Platform Averages

Separate calculations for Google AI and ChatGPT mentions. Shows which platform your category performs better on overall.
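All three aggregates can be reproduced with Python's statistics module, using the example data from this section (note that with an even number of tenants, the median is the mean of the two middle values):

```python
from statistics import mean, median

# Example mention counts for 10 tenants, from the worked example above.
mentions = [5, 12, 8, 20, 15, 6, 10, 18, 9, 14]

print(round(mean(mentions), 1))  # 11.7 -- Average Mentions
print(median(mentions))          # 11.0 -- mean of the two middle values, 10 and 12
print(max(mentions))             # 20   -- Top Performer
```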

When Benchmarks Update

Update frequency: Monthly (1st of each month at 2:00 AM UTC)

Cache duration: Benchmarks are calculated once per month and stored in the database. You see the same benchmark data until the next monthly update (roughly 30 days).

Manual updates: Not available. Benchmarks require aggregating data from multiple tenants, so they're only recalculated on the monthly schedule.

Why monthly? AI visibility changes gradually. Weekly updates would show minimal movement and waste compute resources. Monthly provides meaningful trend data without excessive recalculation.

Using Benchmarks to Improve

1. Set Realistic Goals

If the top performer has 50 mentions and you have 8, don't aim for 50 immediately. Target the median first, then the average, then top-tier performance.

Timeline: Expect 3-6 months to move from below average to above average with consistent effort.

2. Identify Platform Gaps

If the category average for Google AI is 12 mentions and you have 3, but ChatGPT is closer to average, prioritize Google AI optimization.

Action: Create more FAQ-style content and structured data (what Google AI favors).

3. Track Monthly Progress

Check your benchmark comparison each month after the update (first week of the month).

What to look for:

  • Are you closing the gap between your mentions and the average?
  • Is your position bar (% of top performer) increasing?
  • Are platform-specific gaps narrowing?

4. Study the Top Performer

If your category has a clear leader, they're doing something right. Run a competitor comparison scan to see:

  • Which queries they dominate
  • What content types they publish
  • How they structure their articles

Don't copy—learn the patterns and adapt them to your brand.

5. Contribute to Category Growth

As more tenants in your category improve, the benchmarks rise. This creates a virtuous cycle where industry-wide AI visibility improves.

Your role: Share what works with peers (if not direct competitors). Rising benchmarks benefit everyone by proving the category's authority to AI platforms.

Benchmark Limitations

Sample Size Matters

Categories with only 3-5 tenants produce less reliable benchmarks than categories with 20+ tenants.

Check sample size: Listed in the Industry Comparison card next to "Category Average."

Interpretation:

  • 3-5 tenants: Directional data, not statistically robust
  • 10+ tenants: Reliable benchmarks
  • 20+ tenants: High confidence
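These tiers can be expressed as a simple threshold function. A sketch only: the text leaves 6-9 tenants unspecified, so this treats anything between the minimum of 3 and 10 as directional.

```python
def benchmark_confidence(sample_size: int) -> str:
    """Map a category's tenant count to the reliability tiers above."""
    if sample_size >= 20:
        return "High confidence"
    if sample_size >= 10:
        return "Reliable"
    if sample_size >= 3:
        return "Directional only"
    return "No benchmark data available"

print(benchmark_confidence(4))   # Directional only
print(benchmark_confidence(25))  # High confidence
```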

Recency of Data

Benchmarks use the latest visibility history record for each tenant. If a tenant hasn't run a scan in 30 days, their data is outdated.

Impact: Benchmarks may lag behind real-time performance if many tenants aren't actively scanning.

Category Accuracy

Benchmarks assume all tenants correctly set their business category. If a tenant miscategorizes themselves, it skews the benchmark data.

ILLIXIS safeguard: Admin reviews new tenant categories during onboarding to minimize miscategorization.

No Benchmark Data?

If you see "No benchmark data available," possible reasons:

1. Category Not Set

  • Fix: Settings > General > Business Category
  • Choose the closest match to your industry

2. Not Enough Tenants in Category

  • Categories need 3+ active tenants with visibility data
  • Common in niche industries
  • No action required—benchmarks appear as more tenants join

3. No Recent Scans

  • Benchmarks require at least 3 tenants with recent AI visibility history
  • If you're the only active tenant in your category, benchmarks won't generate
  • Run AI visibility scans weekly to contribute data

4. New Tenant

  • If you just set up your account, wait until the next monthly benchmark update (1st of month)
  • Run your first AI visibility scan to ensure you're included in the next calculation

FAQ

Q: Can I see benchmarks for multiple categories? No. Benchmarks only show for your configured business category. If you operate in multiple industries, choose your primary category.

Q: How do I move up in the rankings? Publish more AI-optimized content:

  • 2,000+ words per article
  • Include data, statistics, and expert insights
  • Use structured formatting (H2s, lists, tables)
  • Add author credentials
  • Update content quarterly

Q: Do benchmarks include free trial accounts? No. Only tenants with active subscriptions (Starter, Professional, Enterprise) contribute to benchmarks. This prevents data skew from abandoned trial accounts.

Q: What if I'm in a new/niche category? If fewer than 3 tenants exist in your category, benchmarks won't display. You can still track your absolute performance (mentions, visibility score) without category comparison.

Q: Can I opt out of benchmark aggregation? Yes. Contact support to exclude your tenant from category calculations. Your data remains private—only aggregated, anonymized metrics are used in benchmarks.

Q: Are competitor names visible in benchmarks? No. Benchmarks show aggregate statistics only (average, median, top performer count). Individual tenant names and domains are not exposed.

Q: How accurate are benchmarks? Accuracy depends on:

  • Sample size (more tenants = more accurate)
  • Scan frequency (weekly scans = fresher data)
  • Category homogeneity (how similar businesses are within the category)

For categories with 20+ tenants, benchmarks are statistically reliable.

Q: What if the top performer is unrealistic? Some categories have outliers—established brands with 10x more mentions than typical competitors. Focus on the median and average as more achievable targets.

Q: Do benchmarks account for business size? No. A solo entrepreneur and a Fortune 500 company in the same category contribute equally to benchmarks. This is intentional—AI platforms don't favor large companies over small ones. Authority is earned through content quality, not company size.

Related Features

  • AI Visibility Dashboard: Main dashboard showing your overall AI presence
  • Competitor Comparison: Direct comparison against specific competitor domains
  • Query Opportunities: Find gaps where you can outperform category norms
  • Weekly Planner: Recommendations include category benchmark context

Getting Help

Benchmarks not showing? Verify:

  1. Business category is set in Settings
  2. You've run at least one AI visibility scan
  3. Your category has 3+ active tenants (check sample size)
  4. It's after the 1st of the month (benchmarks update monthly)

Questions about your category? Ask Maya: "How do I compare to other companies in [your category]?" for contextual analysis.
