What Gets Tracked

Every time you interact with content or recommendations, ILLIXIS captures what you chose and why the system suggested it. This creates a dataset specific to your business.

Signal Types

Explicit signals - Direct actions that clearly indicate preference:

  • Approve or reject a brief
  • Accept or decline an opportunity in Strategy Hub
  • Snooze a recommendation
  • Publish content to your CMS
  • Archive content
  • Generate content from a brief

Implicit signals - Inaction that reveals preference:

  • Ignore a brief for 72+ hours (system interprets as "not interested")
  • Leave an opportunity untouched in Strategy Hub
  • Skip Weekly Planner recommendations repeatedly

Performance signals - Real-world outcomes that validate decisions:

  • Google Analytics traffic data (pageviews, engagement)
  • Google Search Console rankings (positions, impressions, CTR)
  • Social media engagement (shares, comments, clicks)
  • Email performance (opens, clicks, conversions)
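
The storage schema for these signals isn't documented here, but conceptually each one can be thought of as a small record combining the signal type, the action, and the item it refers to. A minimal Python sketch, with every field name assumed for illustration (this is not the actual ILLIXIS schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PreferenceSignal:
    """One captured signal. All field names are illustrative assumptions."""
    tenant_id: str
    signal_type: str          # "explicit", "implicit", or "performance"
    action: str               # e.g. "approve_brief", "ignore_72h", "gsc_ctr_update"
    item_id: str              # the brief or opportunity the signal refers to
    captured_at: datetime
    outcome_value: Optional[float] = None  # performance signals carry a metric value

# Example: an explicit approval signal
signal = PreferenceSignal(
    tenant_id="acme",
    signal_type="explicit",
    action="approve_brief",
    item_id="brief-123",
    captured_at=datetime.now(timezone.utc),
)
```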

How the System Learns

Every action you take gets converted into features the ML model can understand. The system extracts 8 dimensions from each recommendation:

  1. Keyword volume - Monthly search volume
  2. Keyword difficulty - Competition level
  3. Opportunity score - System's initial scoring
  4. Trend velocity - Rising or declining interest
  5. Competitor count - How many competitors target this
  6. Content gap score - How underserved the topic is
  7. Time sensitivity - Evergreen vs time-bound
  8. Brand relevance - How closely it aligns with your brand keywords

When you approve a brief with high keyword difficulty but low trend velocity, the model learns you prefer competitive, evergreen topics over trending but fleeting ones. When you reject low-volume opportunities repeatedly, the model learns your traffic threshold.
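
To make the 8-dimension idea concrete, here is a minimal sketch of feature extraction. The key names and value scales are assumptions for illustration, not the actual ILLIXIS implementation:

```python
def extract_features(rec: dict) -> list[float]:
    """Map a recommendation to the 8 dimensions described above.
    Keys and scaling are illustrative assumptions."""
    return [
        rec["keyword_volume"],      # 1. monthly search volume
        rec["keyword_difficulty"],  # 2. competition level
        rec["opportunity_score"],   # 3. system's initial scoring
        rec["trend_velocity"],      # 4. positive = rising, negative = declining
        rec["competitor_count"],    # 5. competitors targeting this keyword
        rec["content_gap_score"],   # 6. how underserved the topic is
        rec["time_sensitivity"],    # 7. 0 = evergreen, 1 = time-bound
        rec["brand_relevance"],     # 8. similarity to your brand keywords
    ]

features = extract_features({
    "keyword_volume": 1200, "keyword_difficulty": 64, "opportunity_score": 85,
    "trend_velocity": -0.2, "competitor_count": 9, "content_gap_score": 40,
    "time_sensitivity": 0.0, "brand_relevance": 0.8,
})
```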

When Learning Activates

Minimum signal requirement: 30 decisions

Until you've made 30 approval/rejection decisions, the system uses default scoring. Once you cross that threshold, the ML model trains for the first time.

What counts toward the 30:

  • Brief approvals
  • Brief rejections
  • Opportunity selections in Strategy Hub
  • Opportunity rejections in Strategy Hub
  • Content published from briefs

What doesn't count:

  • Ignored briefs (these create neutral signals but don't accelerate activation)
  • Content views without action
  • Dashboard visits
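
A sketch of the activation check these rules imply (the counting logic and action names are assumptions):

```python
# Assumed action names; only explicit decisions count toward activation.
QUALIFYING_ACTIONS = {
    "brief_approved", "brief_rejected",
    "opportunity_selected", "opportunity_rejected",
    "content_published",
}
ACTIVATION_THRESHOLD = 30

def model_is_active(signals: list[dict]) -> bool:
    """Ignored briefs, content views, and dashboard visits do not count."""
    decisions = sum(1 for s in signals if s["action"] in QUALIFYING_ACTIONS)
    return decisions >= ACTIVATION_THRESHOLD
```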

Automation Schedule

ILLIXIS runs several automated processes to keep your preference model accurate and up-to-date:

| Process | Frequency | Timing |
|---------|-----------|--------|
| Preference model retraining | Nightly | 2:00 AM UTC |
| Signal performance evaluation | Real-time | After each user decision |
| Model accuracy metrics update | Weekly | Sundays |
| Preference drift detection | Monthly | 1st of each month |

The 30-decision activation threshold described above exists so the model has enough data to make meaningful predictions rather than overfitting to a handful of early decisions. Until you cross it, the system uses default scoring.
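
The same schedule expressed as cron-style entries (illustrative only; the actual scheduling mechanism isn't documented):

```python
# Illustrative cron expressions for the schedule above (all times UTC).
AUTOMATION_SCHEDULE = {
    "preference_model_retraining":   "0 2 * * *",  # nightly at 02:00 UTC
    "model_accuracy_metrics_update": "0 0 * * 0",  # weekly on Sundays
    "preference_drift_detection":    "0 0 1 * *",  # monthly on the 1st
    # Signal performance evaluation is event-driven (after each user
    # decision), so it has no cron entry.
}
```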

Nightly Model Training

Every night at 2:00 AM UTC, ILLIXIS retrains your preference model using all signals collected since the last training run.

What happens during training:

  1. System fetches your last 1,000 decisions (or all if fewer)
  2. Extracts feature vectors from each decision
  3. Trains a logistic regression model with L2 regularization
  4. Calculates prediction accuracy using cross-validation
  5. Updates your tenant preference model with new weights
  6. Increments model version number

Training logs are saved so you can see accuracy improving over time. The system tracks accuracy change from one training run to the next.
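
A minimal sketch of that training step using scikit-learn. The actual ILLIXIS pipeline isn't public, so treat the hyperparameters, data shapes, and fold count as assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def train_preference_model(X: np.ndarray, y: np.ndarray):
    """X: (n_decisions, 8) feature matrix over the dimensions listed earlier.
    y: 1 = approved, 0 = rejected."""
    # L2 regularization, as described above (scikit-learn's default penalty)
    model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    # Cross-validation to estimate prediction accuracy (5 folds assumed)
    accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    model.fit(X, y)  # final fit on all available decisions
    return model, accuracy

# Cap at the last 1,000 decisions, as step 1 describes
rng = np.random.default_rng(0)
X = rng.random((200, 8))             # 200 synthetic decisions
y = (X[:, 2] > 0.5).astype(int)      # toy label: approve high opportunity scores
model, acc = train_preference_model(X[-1000:], y[-1000:])
print(f"cross-validated accuracy: {acc:.2f}")
```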

How Recommendations Get Personalized

Once your model is active, every brief and opportunity gets two scores:

  1. Base score - The system's default scoring based on opportunity metrics
  2. Preference score - How likely you are to approve it (0-100)

Both scores appear in the interface. A brief might have a high opportunity score (85) but a low preference score (35) if it doesn't match your historical decisions. You'll see this brief ranked lower than one with an 80 opportunity score but a 90 preference score.
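
How the two scores blend into a ranking isn't specified beyond this example, but a simple sketch reproduces the ordering it describes (the 50/50 weighting is an assumption):

```python
def ranking_score(base: float, preference: float, pref_weight: float = 0.5) -> float:
    """Blend base and preference scores; the 50/50 weight is an assumption."""
    return (1 - pref_weight) * base + pref_weight * preference

briefs = [
    {"id": "A", "base": 85, "preference": 35},   # strong metrics, poor fit
    {"id": "B", "base": 80, "preference": 90},   # good metrics, strong fit
]
briefs.sort(key=lambda b: ranking_score(b["base"], b["preference"]), reverse=True)
print([b["id"] for b in briefs])  # ['B', 'A']: B outranks A despite a lower base score
```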

Where preference scores appear:

  • Strategy Hub opportunity list (sorted by preference + priority)
  • Brief detail pages (show "Match: 87%" based on the preference score)
  • Weekly Planner recommendations (highest preference scores surface first)
  • Maya's strategic recommendations (filtered by preference threshold)

Role-Based Personalization

Phase 38 added role-specific preference adjustments. If you have team members with different roles, the system tailors preference scores to each member's role.

Role adjustments:

  • Editors - Prefer lower difficulty, evergreen content
  • Managers - Prioritize high opportunity and volume
  • Writers - Favor creative freedom and unique topics
  • Approvers - Strong preference for brand alignment and ROI
  • Admins - No adjustments (balanced view)

When an editor views opportunities, their preference scores reflect editorial concerns. When a manager views the same list, scores adjust for strategic impact. Same data, different lens.
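
One plausible way to express these role adjustments is as multipliers applied to the trained model's feature weights. The specific values below are invented for illustration:

```python
# Hypothetical per-role multipliers over the 8 feature weights.
# Index order matches the dimensions listed earlier:
# [volume, difficulty, opportunity, trend, competitors, gap, time_sens, brand]
ROLE_ADJUSTMENTS = {
    "editor":   [1.0, 0.7, 1.0, 0.8, 1.0, 1.0, 0.7, 1.0],  # de-emphasize difficulty and time-bound topics
    "manager":  [1.3, 1.0, 1.3, 1.0, 1.0, 1.0, 1.0, 1.0],  # emphasize volume and opportunity
    "writer":   [1.0, 1.0, 1.0, 1.0, 1.0, 1.3, 1.0, 1.0],  # emphasize underserved, unique topics
    "approver": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.4],  # emphasize brand alignment
    "admin":    [1.0] * 8,                                  # balanced view, no adjustment
}

def adjusted_weights(base_weights: list[float], role: str) -> list[float]:
    """Scale the shared model's feature weights by the viewer's role."""
    multipliers = ROLE_ADJUSTMENTS.get(role, [1.0] * 8)
    return [w * m for w, m in zip(base_weights, multipliers)]
```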

Ignored Brief Detection

ILLIXIS detects when you consistently ignore certain types of briefs and automatically adjusts recommendations.

How it works:

  1. Daily automated task runs at 4:15 AM UTC
  2. Identifies briefs untouched for 72+ hours
  3. Analyzes patterns across ignored briefs (topic, difficulty, volume, source)
  4. If a clear pattern emerges (e.g., you ignore all briefs below 500 monthly searches), the system:
     • Reduces similar briefs in future recommendations
     • Can pause generation of that brief type entirely
     • Logs the detection for your review

72-hour threshold: Briefs ignored for 72+ hours are considered "not interested" signals. This is long enough to avoid penalizing briefs you simply haven't reviewed yet, but short enough to adapt quickly to your preferences.

What qualifies as "ignored":

  • Brief was created 72+ hours ago
  • No actions taken (no approve, reject, snooze, or view)
  • Brief remains in pending/new status

Pattern detection thresholds:

  • Minimum 10 ignored briefs before pattern detection activates
  • 70%+ consistency required (e.g., 7 of 10 ignored briefs share a trait)
  • Patterns recalculated weekly
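
A sketch of the pattern check these thresholds imply; the trait-bucketing approach is an assumption:

```python
from collections import Counter

MIN_IGNORED = 10     # minimum ignored briefs before detection activates
CONSISTENCY = 0.70   # 70%+ of ignored briefs must share the trait

def detect_ignored_pattern(ignored_briefs: list[dict], trait: str):
    """Return the dominant trait value if it meets both thresholds, else None.
    `trait` might be a volume bucket, difficulty band, or source type
    (illustrative)."""
    if len(ignored_briefs) < MIN_IGNORED:
        return None
    counts = Counter(b[trait] for b in ignored_briefs)
    value, n = counts.most_common(1)[0]
    return value if n / len(ignored_briefs) >= CONSISTENCY else None

# e.g. 7 of 10 ignored briefs fall in the "under 500 searches" bucket -> pattern
briefs = [{"volume_bucket": "under_500"}] * 7 + [{"volume_bucket": "over_500"}] * 3
print(detect_ignored_pattern(briefs, "volume_bucket"))  # "under_500"
```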

Viewing detected patterns: Navigate to Settings > Preference Learning > Ignored Patterns to see:

  • What patterns were detected
  • How many briefs triggered the pattern
  • Whether brief generation was paused for that pattern

Resetting ignored patterns: If your strategy changes and you want briefs the system stopped generating:

  1. Settings > Preference Learning > Ignored Patterns
  2. Click "Reset" next to the pattern
  3. Similar briefs will resume appearing in 24-48 hours

Rejection Pattern Detection

Beyond ML, ILLIXIS tracks rule-based rejection patterns. If you consistently reject opportunities with specific characteristics, the system applies automatic penalties.

Detected patterns:

  • Source type bias (e.g., always reject "gap" opportunities)
  • Volume threshold (e.g., always reject keywords below 2,000 monthly searches)
  • Difficulty threshold (e.g., always reject difficulty above 60)
  • Search intent bias (e.g., prefer informational over transactional)

How penalties work: When a pattern is detected with 70%+ confidence (at least 10 rejections), the system applies a 15% down-weight to similar future opportunities. A brief that would score 80 drops to 68.
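
The arithmetic as a one-function sketch (the confidence check and pattern matching are assumed to happen upstream):

```python
PENALTY = 0.15  # 15% down-weight

def apply_rejection_penalty(score: float, matches_pattern: bool) -> float:
    """Down-weight opportunities that match a detected rejection pattern."""
    return score * (1 - PENALTY) if matches_pattern else score

print(apply_rejection_penalty(80, matches_pattern=True))  # 68.0
```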

You can view active rejection patterns in your account settings under Preference Learning.

Performance Feedback Loop

Seven days after content publishes, ILLIXIS fetches real-world performance metrics and updates the associated preference signal.

Data sources:

  • Google Analytics: Pageviews, sessions, conversions
  • Google Search Console: Clicks, impressions, CTR
  • Social platforms: Engagement metrics

How feedback improves predictions: If the model predicted you'd approve a brief (high preference score), you approved it, and the resulting content performed well (high traffic), that outcome reinforces the model's feature weights. If you approved something the model doubted and the content then performed poorly, that decision carries less weight in future training, so one-off choices don't skew predictions.

This closes the loop: recommendation → decision → outcome → better recommendations.
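
ILLIXIS doesn't document the exact mechanism for folding outcomes back into training, but a standard approach is per-sample weighting, sketched here under that assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_with_outcomes(X: np.ndarray, y: np.ndarray, outcome_weight: np.ndarray):
    """outcome_weight: assumed per-decision multiplier, e.g. >1 for decisions
    whose published content later performed well, <1 for decisions that led
    to poor performance, 1.0 where no performance data exists yet."""
    model = LogisticRegression(penalty="l2", max_iter=1000)
    # sample_weight makes outcome-validated decisions count more (or less)
    model.fit(X, y, sample_weight=outcome_weight)
    return model
```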

Checking Your Model Status

Navigate to Settings → Preference Learning to see:

  • Signal count - Total decisions captured
  • Model status - Not initialized / Training / Active
  • Accuracy - Prediction accuracy percentage
  • Last trained - Date of most recent training run
  • Model version - Increments with each training
  • Top preferences - Which features you weight most heavily

Interpreting accuracy:

  • 50-60% - Model is learning, not yet reliable
  • 60-70% - Reasonable predictions
  • 70-80% - Strong personalization
  • 80%+ - Excellent (rare in first month)

Most tenants see 65-70% accuracy within the first month, improving to 75%+ by month three.

What to Expect Over Time

Week 1-2 (0-30 decisions): No personalization yet. You'll see default scoring based purely on opportunity metrics. Make deliberate approve/reject decisions to build your dataset.

Weeks 2-4 (30-50 decisions): First model training occurs. Accuracy starts around 55-60%. Recommendations begin reflecting your choices but still feel generic.

Month 2 (50-100 decisions): Accuracy climbs to 65-70%. You'll notice the system surfaces topics aligned with your past approvals. Fewer irrelevant briefs.

Month 3+ (100+ decisions): Accuracy reaches 70-75%+. The system anticipates what you'll approve with high confidence. Weekly Planner feels curated specifically for your business.

Improving Recommendation Quality

1. Make explicit decisions. Don't ignore briefs. Click "Approve" or "Reject" so the system knows your intent. Every ignored brief is a missed training opportunity.

2. Add rejection reasons. When rejecting a brief or opportunity, use the optional reason field. The model doesn't parse the text, but the notes help you remember later why you rejected similar topics.

3. Make the first 30 signals count. Your first 30 decisions set the foundation. If you approve mostly trend-based briefs and few keyword briefs, the model will keep favoring trends until new signals outweigh the old ones. Make those initial choices representative of your actual strategy.

4. Review top preferences monthly. Check which features the model weighs most heavily. If "trend velocity" is your top feature but you've shifted to evergreen content, reset the model and retrain with new signals.

5. Use role assignments. If team members have different priorities, assign roles (Editor, Manager, Writer, Approver). The system will personalize for each individual.

Resetting the Model

If your content strategy changes significantly, you can reset preference learning and start fresh.

Navigate to Settings → Preference Learning → Reset Model. This action:

  • Deletes the current preference model
  • Preserves historical signals (doesn't delete decisions)
  • Requires 30 new decisions before the model reactivates
  • Once reactivated, retrains from scratch using your historical signals plus the new ones

Use this if:

  • You pivoted your content strategy entirely
  • You acquired a new business with a different audience
  • Model accuracy is declining instead of improving

Common Questions

Does deleting a brief remove the signal? No. Signals persist even if you delete the associated brief or content. This preserves learning even when cleaning up your account.

Can I disable preference learning? Yes. Navigate to Settings → Preference Learning → Disable. The system will stop scoring with preferences and revert to default opportunity scoring. Historical signals remain for future reactivation.

Does the model learn from content I wrote manually? Only if you publish it through ILLIXIS. The system tracks what you publish via CMS connectors, not content created outside the platform.

What if I have multiple websites? Each tenant (account) has one preference model. If your websites serve similar audiences, one model learns from all decisions. If audiences differ significantly, consider separate ILLIXIS accounts.

Does preference learning cost extra? No. It's included in all plans. The nightly training runs automatically.

Related Features

  • Maya Strategic Recommender - Uses preference scores to filter recommendations (see Maya help guide)
  • Strategy Hub - Displays preference scores alongside opportunity scores (see Strategy Hub help guide)
  • Weekly Planner - Sorts recommendations by preference + recency
  • Content Quality System - Separate from preference learning; focuses on content grading (see Quality Grading help guide)

Questions? Email support@illixis.io or ask Maya (bottom-right chat icon).
