What Gets Retrained

Four predictive models:

  1. Opportunity acceptance prediction - Topics that match your preferences
  2. Brief approval likelihood - Which briefs you'll approve
  3. Content quality prediction - What content will perform well
  4. User preference patterns - Your strategic priorities

Training Schedule

Nightly run: 3:15 AM (server time)
Duration: 5-30 seconds per tenant
Frequency: Daily (automated scheduled task)

Training runs automatically for all active tenants. No manual intervention required.

What Happens During Training

Step 1: Signal collection
System fetches your last 1,000 decisions (or all if fewer). Signals include brief approvals, rejections, opportunity selections, and content publications.
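
For the curious, a minimal Python sketch of this step. The table and field names (preference_signals, tenant_id, created_at) are illustrative assumptions, not the actual ILLIXIS schema:

  import sqlite3

  def fetch_recent_signals(conn: sqlite3.Connection, tenant_id: str, limit: int = 1000):
      """Fetch the tenant's most recent decisions (or all, if fewer than limit)."""
      return conn.execute(
          """
          SELECT signal_type, features, outcome
          FROM preference_signals   -- hypothetical table name
          WHERE tenant_id = ?
          ORDER BY created_at DESC
          LIMIT ?
          """,
          (tenant_id, limit),
      ).fetchall()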

Step 2: Feature extraction
Each decision gets converted to 8 feature dimensions (sketched in code after the list):

  • Keyword volume
  • Keyword difficulty
  • Opportunity score
  • Trend velocity
  • Competitor count
  • Content gap score
  • Time sensitivity
  • Brand relevance
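
A rough sketch of that conversion in Python; the signal keys are illustrative assumptions, not the actual schema:

  def extract_features(signal: dict) -> list[float]:
      """Convert one decision into the 8 dimensions listed above."""
      return [
          signal.get("keyword_volume", 0.0),
          signal.get("keyword_difficulty", 0.0),
          signal.get("opportunity_score", 0.0),
          signal.get("trend_velocity", 0.0),
          signal.get("competitor_count", 0.0),
          signal.get("content_gap_score", 0.0),
          signal.get("time_sensitivity", 0.0),
          signal.get("brand_relevance", 0.0),
      ]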

Step 3: Model training
A classification model finds patterns in your decisions. Features get normalized for comparison. Classes are balanced by inverse frequency.
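
A minimal sketch of those two preprocessing steps. Min-max scaling is an assumption; the documentation says only that features are "normalized":

  import numpy as np

  def normalize(X: np.ndarray) -> np.ndarray:
      """Scale each feature column to [0, 1] so dimensions are comparable.
      (Min-max scaling is an assumption, not the documented method.)"""
      mins, maxs = X.min(axis=0), X.max(axis=0)
      return (X - mins) / np.where(maxs > mins, maxs - mins, 1.0)

  def class_weights(y: np.ndarray) -> dict:
      """Weight each class by inverse frequency so approvals and rejections
      contribute equally even when their counts are lopsided."""
      classes, counts = np.unique(y, return_counts=True)
      return {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}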

Step 4: Cross-validation
5-fold cross-validation tests prediction accuracy. If fewer than 50 signals exist, full-dataset validation is used instead.
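
A sketch of that validation logic in Python. The specific classifier is an assumption; ILLIXIS documents only "machine learning classification":

  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score

  def validate(X, y) -> float:
      """5-fold cross-validated accuracy, falling back to full-dataset
      accuracy when there are fewer than 50 signals."""
      model = LogisticRegression(class_weight="balanced", max_iter=1000)
      if len(y) >= 50:
          return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
      model.fit(X, y)
      return model.score(X, y)  # full-dataset validation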

Step 5: Weight update
New feature weights saved to database. Model version incremented. Accuracy score calculated.

Step 6: Logging
Training log created with before/after accuracy, signal count, and weight changes.
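
Taken together, the record stored by steps 5 and 6 might look like this sketch; the field names are illustrative, not the actual schema:

  from datetime import datetime, timezone

  def build_training_log(old_weights, new_weights, version,
                         n_signals, acc_before, acc_after):
      """Assemble the per-run record described in steps 5 and 6."""
      return {
          "completed_at": datetime.now(timezone.utc).isoformat(),
          "signals_processed": n_signals,
          "accuracy_before": acc_before,
          "accuracy_after": acc_after,
          "weights_before": old_weights,    # 8 values, one per feature
          "weights_after": new_weights,
          "model_version": version + 1,     # increments each run
          "status": "completed",            # completed / skipped / failed
      }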

Minimum Requirements

20 decisions required before first training occurs.

Until you reach 20 signals, recommendations use default scoring (opportunity metrics only). Once you cross the threshold, nightly training activates.

What counts:

  • Brief approvals
  • Brief rejections
  • Opportunity selections
  • Opportunity rejections
  • Content published from briefs

What doesn't count:

  • Ignored briefs (these create neutral signals but don't trigger training)
  • Content views without action
  • Dashboard navigation

Ignored Brief Detection

Separate daily task (4:00 AM) detects briefs ignored for 72+ hours.

If a brief remains in "complete" status (analyzed but not acted on) for 3+ days, system creates a neutral preference signal (outcome = 0.5). This teaches the model you're not interested in similar topics.

Detection criteria (see the sketch after this list):

  • Brief created more than 72 hours ago
  • Status = "complete" (analyzed)
  • No preference signal yet recorded
  • Not archived or deleted
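
A Python sketch of those criteria applied as a filter. The brief fields are illustrative assumptions, and created_at is assumed to be a timezone-aware datetime:

  from datetime import datetime, timedelta, timezone

  def find_ignored_briefs(briefs, signaled_brief_ids):
      """Apply the four detection criteria above."""
      cutoff = datetime.now(timezone.utc) - timedelta(hours=72)
      # Each matching brief then gets a neutral signal (outcome = 0.5).
      return [
          b for b in briefs
          if b["created_at"] < cutoff              # older than 72 hours
          and b["status"] == "complete"            # analyzed, not acted on
          and b["id"] not in signaled_brief_ids    # no signal recorded yet
          and not b.get("archived") and not b.get("deleted")
      ]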

Effect on recommendations: Future briefs with similar characteristics get lower preference scores, so the system surfaces similar content less often.

Performance Feedback Loop

Weekly task (schedule varies) fetches real-world performance data for published content and updates preference signals.

Data sources:

  • Google Analytics: Pageviews, sessions, conversions
  • Google Search Console: Clicks, impressions, CTR
  • Social platforms: Engagement metrics

Process:

  1. Find preference signals older than 7 days (published content needs time to accumulate data)
  2. Query GA4/GSC for actual traffic metrics
  3. Calculate performance score (0-1) relative to tenant's average traffic (see the sketch below)
  4. Update signal with actual_traffic, actual_conversions, performance_score
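
The exact scoring formula isn't documented; one plausible sketch scores traffic relative to the tenant average and clamps to the 0-1 range:

  def performance_score(actual_traffic: float, tenant_avg: float) -> float:
      """Score content 0-1 relative to the tenant's average traffic.
      The formula is an assumption: 2x the tenant average (or better)
      scores 1.0, and zero traffic scores 0.0."""
      if tenant_avg <= 0:
          return 0.5  # no baseline yet; treat as neutral
      return max(0.0, min(1.0, actual_traffic / (2 * tenant_avg)))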

Impact on training: Next training run incorporates performance outcomes. If high-preference predictions led to high-performing content, those feature weights strengthen. If low-preference content outperformed expectations, weights adjust.

This closes the loop: prediction → decision → outcome → better predictions.
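
In code, closing the loop might look like this minimal sketch; the substitution rule is an assumption, not the documented mechanism:

  def effective_outcome(signal: dict) -> float:
      """Label used in the next training run. Once the weekly task has
      written a performance_score, it stands in for the original
      decision outcome."""
      perf = signal.get("performance_score")
      return perf if perf is not None else signal["outcome"]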

Checking Training Status

Navigate to Settings → Preference Learning to view:

  • Signal count - Total decisions captured
  • Model status - Not initialized / Training / Active
  • Accuracy - Current prediction accuracy (percentage)
  • Last trained - Date/time of most recent training
  • Model version - Increments with each training run
  • Top preferences - Features you weight most heavily

Training Logs

Each training run creates a log entry with:

  • Start and completion timestamps
  • Signals processed count
  • Accuracy before and after
  • Feature weights before and after
  • Status (completed / skipped / failed)
  • Error message (if failed)

Accessing logs: Admin users can view training history under Settings → Preference Learning → Training Logs. Filter by date or status.

Training Failures

Common failure reasons:

Insufficient signals
Training skipped if fewer than 20 signals exist. Log status = "skipped". System retries next night automatically.

All same outcome
If all signals have the same outcome (all approvals or all rejections), the model can't learn differences. Training fails. Add more varied decisions.

Feature extraction errors
If a signal's features are malformed or missing, that signal is skipped. Training continues with the remaining valid signals.

System error
In rare cases, training may fail due to an internal system error. This is automatically retried the following night.

What Changes After Training

Preference scores appear on:

  • Strategy Hub opportunity list (sorted by preference + priority)
  • Brief detail pages (shows "Match: X%" based on preference score)
  • Weekly Planner recommendations (highest preference scores surface first)
  • Maya's strategic recommendations (filtered by preference threshold)

Score interpretation (a small helper after this list shows the mapping):

  • 0-40: Low match (system predicts you'll reject)
  • 41-69: Medium match (uncertain)
  • 70-89: High match (system predicts you'll approve)
  • 90-100: Very high match (strong confidence)
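
The same bands expressed as a small Python helper:

  def match_band(score: int) -> str:
      """Map a 0-100 preference score onto the bands above."""
      if score >= 90:
          return "Very high match"
      if score >= 70:
          return "High match"
      if score >= 41:
          return "Medium match"
      return "Low match"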

Preference scores appear alongside opportunity scores. A brief might have high opportunity score (85) but low preference score (35) if it doesn't match your historical decisions.

Accuracy Evolution

Typical progression:

Week 1 (0-20 signals): No training yet. Default scoring only.

Weeks 2-3 (20-50 signals): First training occurs. Accuracy starts at 55-60%. Recommendations begin reflecting your choices but still feel generic.

Month 2 (50-100 signals): Accuracy climbs to 65-70%. System surfaces topics aligned with past approvals. Fewer irrelevant briefs.

Month 3+ (100+ signals): Accuracy reaches 70-75%+. System anticipates approvals with high confidence. Weekly Planner feels curated for your business.

Accuracy plateau: Most tenants plateau around 75-80% accuracy. This is expected. Human decisions aren't 100% predictable. If accuracy declines over time, consider resetting the model.

Manual Training Trigger

You can trigger training manually without waiting for the nightly run.

Navigate to Settings → Preference Learning → Train Now to start an immediate training cycle.

When to use:

  • After bulk-importing historical decisions
  • After significant strategy shift (want immediate retraining)
  • Testing preference learning setup

Manual training uses the same logic as the nightly run. Respects the 20-signal minimum.

Role-Based Personalization

If you have team members with assigned roles, training incorporates role-specific adjustments.

Role adjustments:

  • Editors - Prefer lower difficulty, evergreen content
  • Managers - Prioritize high opportunity and volume
  • Writers - Favor creative freedom and unique topics
  • Approvers - Strong preference for brand alignment and ROI
  • Admins - No adjustments (balanced view)

Same features, different weights per role. An editor viewing opportunities sees preference scores adjusted for editorial concerns. A manager viewing the same list sees scores adjusted for strategic impact.
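
One way to picture this mechanism in code. The multiplier values and the feature choices per role are illustrative only, not ILLIXIS's actual numbers:

  ROLE_ADJUSTMENTS = {
      # Illustrative multipliers; real values and mechanics are internal.
      "editor":   {"keyword_difficulty": 0.8, "time_sensitivity": 0.8},
      "manager":  {"opportunity_score": 1.2, "keyword_volume": 1.2},
      "writer":   {"content_gap_score": 1.2},
      "approver": {"brand_relevance": 1.3},
      "admin":    {},  # balanced view: no adjustments
  }

  def adjusted_weights(weights: dict[str, float], role: str) -> dict[str, float]:
      """Apply role-specific multipliers to the learned feature weights,
      so one model yields role-aware preference scores."""
      adj = ROLE_ADJUSTMENTS.get(role, {})
      return {f: w * adj.get(f, 1.0) for f, w in weights.items()}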

Role is captured at the time of each decision, so one user can have different roles over time.

Disabling Nightly Training

Navigate to Settings → Preference Learning → Disable.

Effect:

  • Nightly training skipped for your tenant
  • Existing model preserved (not deleted)
  • Preference scores no longer displayed
  • Recommendations revert to default opportunity scoring

When to disable:

  • Testing default scoring vs preference scoring
  • Temporarily pausing learning during strategy pivot
  • Troubleshooting unexpected recommendations

Re-enable anytime. Next nightly run retrains with all historical signals.

Resetting the Model

Navigate to Settings → Preference Learning → Reset Model.

What happens:

  • Current preference model deleted
  • Historical signals preserved (decisions not deleted)
  • The 20-signal minimum still applies before training reactivates
  • Next training run rebuilds from scratch using all preserved signals

When to reset:

  • Content strategy pivoted entirely
  • Acquired new business with different audience
  • Model accuracy declining instead of improving
  • Want fresh start after testing phase

Training Performance Impact

Resource usage: Minimal. Training takes 5-30 seconds per account. Runs during off-peak hours (3:15 AM).

Processing: Accounts are trained sequentially (one at a time). No impact on your daytime usage.

How It Works Under the Hood

Algorithm: Machine learning classification trained on your decision history
Cross-validation: 5-fold (or full dataset if fewer than 50 signals)
Signal retention: Last 1,000 decisions per training run
Schedule: Daily at 3:15 AM (server time)

Related Features

  • Preference Learning Overview - See help guide #21 for full preference learning system
  • Strategy Hub - Displays preference scores alongside opportunity scores (help guide #7)
  • Maya Strategist - Uses preference scores to filter recommendations (help guide #1)
  • Weekly Planner - Sorts recommendations by preference + recency
  • Performance Feedback Loop - Separate weekly task updates signals with GA4/GSC data

Monitoring Training Health

Red flags:

Training never runs
Contact support if training hasn't run for 48+ hours. The automated schedule may need attention.

Training always skipped
Signal count is below 20. Make more explicit decisions (approve/reject briefs).

Accuracy stuck at 50%
Model not learning. Check whether all signals have the same outcome. Add varied decisions.

Accuracy declining over time
Strategy shifted but the model is still learning from old patterns. Reset the model and retrain.

Training fails consistently
Contact support if training fails multiple nights in a row. The team can investigate and resolve the issue.

Common Questions

Does training consume API credits? No. Training uses only local computation and database queries. No external API calls.

Can I see what the model learned? Yes. View feature weights in Settings → Preference Learning. Positive weights = you prefer higher values. Negative weights = you prefer lower values.

Does deleting a brief remove it from training? No. Signals persist even after deleting associated content. This preserves learning.

What if I make a mistaken approval? One wrong decision won't break the model. Training uses up to 1,000 signals, so outliers get averaged out.

Does preference learning work for multi-brand accounts? One model per tenant. If brands serve different audiences, consider separate ILLIXIS accounts.

Does training cost extra? No. Included in all plans. Runs automatically.


Questions? Email support@illixis.io or ask Maya (bottom-right chat icon).
