What It Does

Tracks how accurately ILLIXIS predicts content performance before publication. Every brief generates a prediction of traffic and rankings. After publication, actual Google Search Console (GSC) data validates these predictions at 30-, 60-, and 90-day checkpoints.

Purpose: Measure and improve the platform's ability to predict content success.


How Predictions Work

1. Prediction Creation (Automatic)

When a brief is generated, ILLIXIS automatically creates a prediction based on:

  • Search volume - Monthly search volume for target keyword
  • Keyword difficulty - How hard it is to rank (0-100)
  • Opportunity score - Brief quality score (0-100)
  • Competition density - SERP competition level
  • SERP data - Top 10 results analysis

Predictions are made for three checkpoints:

  • 30 days after publication
  • 60 days after publication
  • 90 days after publication

2. Prediction Storage

Each prediction includes:

  • Estimated traffic - Expected clicks at 30/60/90 days
  • Estimated position - Expected ranking position at 30/60/90 days
  • Confidence score - How confident ILLIXIS is (0-100%)
  • Target position - Goal ranking position (1-20)
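The stored prediction can be pictured as a simple record. This is an illustrative sketch only — the field and class names below are assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:
    """Illustrative prediction record; names are assumptions, not the real schema."""
    target_keyword: str
    confidence: int                    # confidence score, 0-100%
    target_position: int               # goal ranking position, 1-20
    estimated_traffic: dict = field(default_factory=dict)   # {30: clicks, 60: ..., 90: ...}
    estimated_position: dict = field(default_factory=dict)  # {30: position, 60: ..., 90: ...}

# Example record for a hypothetical keyword
p = Prediction(
    target_keyword="best running shoes",
    confidence=85,
    target_position=5,
    estimated_traffic={30: 120, 60: 340, 90: 600},
    estimated_position={30: 12, 60: 7, 90: 5},
)
```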

3. Publication Tracking

When content is published to your CMS:

  • Prediction status changes from "Pending Publication" to "Published"
  • Publication date recorded
  • Validation schedule set (30/60/90 days from publication)

4. Validation (Automatic)

Daily at 7:00 AM UTC, ILLIXIS checks for predictions due for validation:

For each due prediction:

  1. Fetch actual GSC data (clicks, impressions, position)
  2. Compare predicted vs actual metrics
  3. Calculate accuracy percentage
  4. Update prediction status
  5. Store result

Data quality assessment:

  • Good - Sufficient data (100+ impressions)
  • Partial - Limited data (< 100 impressions)
  • Insufficient - No GSC data found
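The quality thresholds above can be expressed as a small classifier. This is an illustrative helper using the documented thresholds, not the platform's actual code:

```python
from typing import Optional

def assess_data_quality(impressions: Optional[int]) -> str:
    """Classify GSC data quality using the thresholds documented above.
    Illustrative only -- treating missing data (None) as 'no GSC data found'."""
    if impressions is None or impressions == 0:
        return "Insufficient"   # no GSC data found
    if impressions < 100:
        return "Partial"        # limited data (< 100 impressions)
    return "Good"               # sufficient data (100+ impressions)
```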

5. Accuracy Calculation

Traffic Accuracy: How close the predicted traffic was to actual traffic.

Position Accuracy: How close the predicted ranking position was to the actual position.

Overall Accuracy: The average of traffic accuracy and position accuracy.

Accuracy ratings:

  • ≥70% = Accurate (Win)
  • 50-69% = Close
  • <50% = Missed
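ILLIXIS does not publish its exact accuracy formula, but a common approach — and one consistent with the descriptions above — is "100 minus percent error, floored at zero," averaged across traffic and position. The sketch below is hypothetical; only the verdict thresholds are taken directly from this guide:

```python
def metric_accuracy(predicted: float, actual: float) -> float:
    """Accuracy as 100 minus percent error, floored at 0.
    Hypothetical formula -- the platform's exact calculation may differ."""
    if predicted == 0:
        return 100.0 if actual == 0 else 0.0
    error = abs(predicted - actual) / predicted
    return max(0.0, 100.0 * (1.0 - error))

def verdict(overall_accuracy: float) -> str:
    """Map overall accuracy to the documented ratings."""
    if overall_accuracy >= 70:
        return "Win"       # Accurate
    if overall_accuracy >= 50:
        return "Close"
    return "Missed"

traffic_acc = metric_accuracy(predicted=200, actual=170)   # 85.0
position_acc = metric_accuracy(predicted=8, actual=10)     # 75.0
overall = (traffic_acc + position_acc) / 2                 # 80.0
```

With these numbers, an overall accuracy of 80% rates as a Win.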

Dashboard Overview

Hero Narrative

Shows high-level accuracy summary:

  • Headline - "ILLIXIS predicted X out of Y correctly"
  • Sub-headline - Accuracy rate percentage
  • Trend badge - Accuracy improving/declining vs previous weeks

Win/Loss Record Bar

Visual breakdown of all validated predictions:

  • Green - Accurate predictions (≥70%)
  • Yellow - Close predictions (50-69%)
  • Red - Missed predictions (<50%)

Predicted vs Actual Cards

Shows up to 6 recent predictions with side-by-side comparison:

  • Target keyword
  • Predicted position → Actual position
  • Predicted traffic → Actual traffic
  • Accuracy percentage
  • Verdict (Win/Close/Miss)
  • Validation period (30/60/90 days)
  • Confidence score

Accuracy Trend Chart

Line chart showing accuracy over last 12 weeks:

  • X-axis: Week (W1, W2, W3...)
  • Y-axis: Accuracy percentage (0-100%)
  • Shows if predictions are getting better over time

Confidence Calibration Chart

Bar chart comparing confidence levels to actual outcomes:

  • What it shows: Are high-confidence predictions actually accurate?
  • Goal: Bars should match height (well-calibrated predictions)
  • Example: If 80-90% confidence bucket has 85% actual accuracy → well-calibrated

Requires: At least 5 validated predictions
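Conceptually, the calibration chart groups validated predictions into confidence buckets and averages their actual accuracy. A minimal sketch of that grouping, with made-up sample data:

```python
from collections import defaultdict

def calibration_buckets(predictions, bucket_size=10):
    """Group (confidence, actual_accuracy) pairs into confidence buckets
    (e.g. 80-90%) and average the actual accuracy in each. Illustrative only."""
    buckets = defaultdict(list)
    for confidence, actual_accuracy in predictions:
        lo = (confidence // bucket_size) * bucket_size
        buckets[f"{lo}-{lo + bucket_size}%"].append(actual_accuracy)
    return {label: sum(vals) / len(vals) for label, vals in buckets.items()}

# Hypothetical validated predictions: (confidence %, actual accuracy %)
validated = [(85, 82), (88, 90), (62, 55), (45, 30), (83, 83)]
result = calibration_buckets(validated)
# The 80-90% bucket averages 85% actual accuracy -> well-calibrated
```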

Upcoming Validations Timeline

Shows next 8 predictions due for validation:

  • Days until validation
  • Target keyword
  • Validation period (30/60/90 days)
  • Confidence score

Accuracy by Content Type

Horizontal bar chart showing accuracy by brief type:

  • Keyword Brief
  • Topic Brief
  • Custom Brief
  • Calendar Brief

Shows which types of content are most predictable.

Key Metrics Summary

Overall stats:

  • Overall Accuracy - Average across all predictions
  • Traffic Accuracy - How well traffic is predicted
  • Position Accuracy - How well rankings are predicted
  • Total Predictions - All predictions created
  • Validated - Predictions with validation results

All Predictions Table

Sortable table of all predictions:

  • Target keyword
  • Content type
  • Status (Pending/Published/Validated/Complete)
  • Estimated 30-day traffic
  • Confidence score
  • Accuracy percentage (if validated)

Click any row to see full prediction details in modal.


Manual Operations

Run Validation (Button)

Forces immediate validation check for all predictions due today.

Use when:

  • You just published content and want to see if predictions are due
  • Daily validation hasn't run yet
  • Testing prediction accuracy

Process:

  1. Click "Run Validation" button
  2. Background task processes the request
  3. Checks 30/60/90 day due dates
  4. Fetches GSC data for each due prediction
  5. Calculates and stores accuracy results
  6. Page auto-refreshes after 3 seconds

Prediction Detail Modal

Click any prediction row or comparison card to see full details:

Top section:

  • Target keyword
  • Content type
  • Status
  • Search volume
  • Confidence score
  • Keyword difficulty
  • Publication date

Validation Results section:

For each validation period (30/60/90 days):

  • Period label
  • Predicted position → Actual position
  • Predicted traffic → Actual traffic
  • Accuracy percentage (color-coded)
  • Data quality indicator

Empty state: "No validation results yet. This prediction is awaiting validation."


Prediction Lifecycle

  1. Brief Created — Prediction record created (status: pending publication)
  2. Content Published — Status changes to published, validation dates are set
  3. 30 Days After Publication — First validation runs, 30-day results recorded
  4. 60 Days After Publication — Second validation runs, 60-day results recorded
  5. 90 Days After Publication — Final validation runs, prediction marked complete
  6. Stats Updated — Weekly accuracy history and best/worst predictions recalculated
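The lifecycle above is a one-way progression through the statuses shown in the All Predictions table. A sketch of that state flow (status names are taken from this guide; the internal representation is an assumption):

```python
from enum import Enum
from typing import Optional

class PredictionStatus(Enum):
    """Statuses from the All Predictions table; internal values are assumptions."""
    PENDING = "Pending Publication"
    PUBLISHED = "Published"
    VALIDATED = "Validated"   # at least one checkpoint validated
    COMPLETE = "Complete"     # 90-day validation finished

# One-way lifecycle mirroring the numbered steps above
LIFECYCLE = [PredictionStatus.PENDING, PredictionStatus.PUBLISHED,
             PredictionStatus.VALIDATED, PredictionStatus.COMPLETE]

def next_status(current: PredictionStatus) -> Optional[PredictionStatus]:
    """Return the next lifecycle status, or None once Complete."""
    i = LIFECYCLE.index(current)
    return LIFECYCLE[i + 1] if i + 1 < len(LIFECYCLE) else None
```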

ML Learning Over Time

Accuracy Stats (Automatic)

After each validation, your accuracy stats are updated with:

Overall metrics:

  • Average accuracy across all predictions
  • Traffic accuracy average
  • Position accuracy average
  • Validated prediction count

Accuracy by type:

  • Accuracy breakdown per content type
  • Identifies which brief types are most predictable

Best predictions:

  • Top 5 most accurate predictions
  • Used to identify winning patterns

Learning opportunities:

  • Bottom 5 least accurate predictions
  • Used to improve future predictions

Weekly history:

  • Last 12 weeks of accuracy scores
  • Tracks improvement over time

Improvement Cycle

  1. Brief creation - Platform makes prediction based on current model
  2. Content published - Real-world test begins
  3. Validation - Actual GSC data compared to prediction
  4. Learning - Accuracy patterns analyzed
  5. Model refinement - Future predictions adjusted based on learnings

Goal: Predictions get more accurate as the platform learns your niche, audience, and competitive landscape.


Understanding Confidence Scores

Confidence score (0-100%) indicates how certain ILLIXIS is about the prediction.

Higher confidence when:

  • ✅ Search volume 100-10,000 (sweet spot)
  • ✅ Keyword difficulty 20-50 (achievable)
  • ✅ Opportunity score ≥70 (high quality brief)
  • ✅ SERP data available (competitive analysis complete)

Lower confidence when:

  • ❌ Very low search volume (<50)
  • ❌ Very high difficulty (>70)
  • ❌ Low opportunity score (<30)
  • ❌ No SERP data

Example:

  • 85% confidence = "Very likely to rank as predicted"
  • 60% confidence = "Moderate uncertainty"
  • 40% confidence = "High uncertainty, use prediction as rough guide"
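The factors above could combine into a score roughly like the toy heuristic below. The real model is more sophisticated and its weights are not published — the point weights here are invented for illustration; only the thresholds come from this guide:

```python
def confidence_score(search_volume: int, difficulty: int,
                     opportunity: int, has_serp_data: bool) -> int:
    """Toy confidence heuristic built from the documented factors.
    Weights are made up for illustration; treat as a sketch only."""
    score = 0
    score += 30 if 100 <= search_volume <= 10_000 else 10   # volume sweet spot
    score += 30 if 20 <= difficulty <= 50 else 10           # achievable difficulty
    score += 25 if opportunity >= 70 else 10                # high-quality brief
    score += 15 if has_serp_data else 0                     # SERP analysis done
    return score

# All factors favorable -> maximum confidence under this toy weighting
high = confidence_score(search_volume=2_500, difficulty=35,
                        opportunity=80, has_serp_data=True)
```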

Data Sources

Prediction Inputs (From Brief)

  • Search volume from keyword research
  • Keyword difficulty from SEO data analysis
  • Opportunity score from brief analysis
  • Competition density from SERP analysis
  • SERP data from search engine intelligence

Validation Inputs (Actual Performance)

  • Clicks, impressions, position, and CTR from Google Search Console

Date range: Publication date to validation date (30/60/90 days)


Automation Schedule

| Task | Schedule | What It Does |
|------|----------|--------------|
| Daily Validation | 7:00 AM UTC | Validates predictions due at 30/60/90 day checkpoints |
| Weekly Stats Update | Sunday 6:00 AM UTC | Recalculates accuracy statistics for all accounts |

Predictions are validated automatically—no manual action required. The system checks daily for predictions that have reached their 30, 60, or 90 day post-publication checkpoint.


Common Scenarios

Scenario: No Predictions Yet

Dashboard shows:

  • "Your prediction engine is warming up"
  • Empty state in All Predictions table
  • Explanation: "Predictions are created automatically when briefs are generated"

To get predictions:

  1. Generate keyword briefs (Strategy Hub → Approve opportunities)
  2. Predictions created automatically with each brief
  3. Dashboard populates immediately

Scenario: Predictions but No Validations

Dashboard shows:

  • Total predictions count
  • "Validated: 0"
  • Upcoming Validations timeline

Why:

  • Content not yet published, OR
  • Content published less than 30 days ago

To get validations:

  1. Publish content from briefs
  2. Wait 30 days
  3. Daily validation task runs automatically
  4. Results appear on dashboard

Scenario: Insufficient Data

Result record shows:

  • Data quality: "Insufficient"
  • Note: "No GSC data found for this keyword"
  • Accuracy: Not calculated

Causes:

  • Content not indexed by Google yet
  • Keyword not receiving impressions
  • GSC not connected or syncing

Fix:

  1. Verify GSC connection (Analytics → Data Sources)
  2. Run GSC data sync manually
  3. Wait for Google to index content
  4. Re-run validation after content has traffic

Scenario: Low Accuracy (<50%)

Possible reasons:

  • Keyword difficulty higher than expected - Took longer to rank
  • Search volume fluctuated - Seasonal changes or trend died
  • Content quality lower than predicted - Didn't match search intent
  • SERP landscape changed - New competitors entered
  • No backlinks - Content needs link building

Action:

  • Check "Learning Opportunities" section
  • Review common factors in low-accuracy predictions
  • Adjust future content strategy

Scenario: High Accuracy (≥70%)

What it means:

  • Platform understands your niche
  • Brief quality is high
  • Predictions are reliable

Action:

  • Check "Best Predictions" section
  • Identify winning patterns (keywords, topics, formats)
  • Create more content following these patterns

FAQ

Q: When are predictions created? A: Automatically when any brief is generated (keyword, trend, custom, calendar).

Q: Can I create predictions manually? A: No. Predictions are tied to briefs and created during brief generation.

Q: Why is my prediction still "Pending Publication"? A: Content hasn't been published to your CMS yet. Publish the content to activate the validation schedule.

Q: Can I delete a prediction? A: No. Predictions are permanent for accuracy tracking. They can be marked "Insufficient Data" if validation fails.

Q: Why don't I see accuracy for recent predictions? A: Predictions are validated 30 days after publication. Check "Upcoming Validations" to see when results will appear.

Q: What's a good accuracy percentage? A: 70%+ is excellent. 50-69% is acceptable. <50% indicates the prediction model needs adjustment for your niche.

Q: Does accuracy improve over time? A: Yes. The platform learns from validated predictions and adjusts future estimates based on your actual performance patterns.

Q: What happens at 90 days? A: Final validation runs. Prediction status changes to "Complete". No further validations scheduled.

Q: Can I re-validate a prediction? A: Not currently. Each prediction is validated once at each checkpoint (30, 60, and 90 days).

Q: Where do I see prediction details for a specific brief? A: Click any prediction in the All Predictions table, or view Brief Detail page.

Q: Why are some predictions missing validation results? A: Either validation not due yet, or GSC data insufficient (content not indexed/no traffic).


Next Steps After Reviewing Predictions

  1. If accuracy is high (≥70%):
  • Check best predictions for patterns
  • Create more briefs targeting similar keywords
  • Rely on predictions for content planning
  2. If accuracy is low (<50%):
  • Review learning opportunities section
  • Check common factors (difficulty, volume)
  • Adjust strategy (target easier keywords, longer content, etc.)
  3. If insufficient data:
  • Verify GSC connection
  • Check content indexing status
  • Ensure content is receiving traffic
  4. For upcoming validations:
  • Note which predictions validate soon
  • Check actual rankings in GSC
  • Compare predicted vs actual proactively

