Four predictive models:
Nightly run: 3:15 AM (server time)
Duration: 5-30 seconds per tenant
Frequency: Daily (automated scheduled task)
Training runs automatically for all active tenants. No manual intervention required.
Step 1: Signal collection
System fetches your last 1,000 decisions (or all if fewer). Signals include brief approvals, rejections, opportunity selections, and content publications.
Step 2: Feature extraction
Each decision gets converted to 8 feature dimensions:
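For the technically curious, here is a rough sketch of what this conversion could look like. The eight dimensions aren't enumerated on this page, so the field names below are placeholders, not the actual ILLIXIS feature set.

```python
# Illustrative sketch only: turn one decision signal into a fixed-length
# numeric vector. The real feature names are not documented here.
from dataclasses import dataclass

@dataclass
class DecisionSignal:
    signal_type: str   # e.g. "brief_approval", "brief_rejection"
    outcome: float     # 1.0 = approved, 0.0 = rejected, 0.5 = neutral
    features: dict     # raw attributes captured with the decision

def extract_features(signal: DecisionSignal) -> list[float]:
    """Map a signal's raw attributes to an 8-dimensional vector (placeholder names)."""
    f = signal.features
    return [
        float(f.get("opportunity_score", 0.0)),
        float(f.get("search_volume", 0.0)),
        float(f.get("keyword_difficulty", 0.0)),
        float(f.get("content_length", 0.0)),
        float(f.get("topic_relevance", 0.0)),
        float(f.get("freshness_days", 0.0)),
        float(f.get("competition", 0.0)),
        float(f.get("conversion_intent", 0.0)),
    ]
```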
Step 3: Model training
Uses machine learning to find patterns in your decisions. Features get normalized for comparison. Classes balanced by inverse frequency.
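The exact algorithm isn't named here, so the sketch below is an assumption: a scikit-learn pipeline where StandardScaler handles the normalization and class_weight="balanced" applies the inverse-frequency class balancing, with logistic regression standing in for the classifier.

```python
# Minimal sketch, assuming a scikit-learn classifier (logistic regression here).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def train_preference_model(X: np.ndarray, y: np.ndarray):
    """X: (n_signals, 8) feature matrix; y: 0/1 outcomes (reject/approve)."""
    model = make_pipeline(
        StandardScaler(),                            # normalize feature dimensions
        LogisticRegression(class_weight="balanced",  # inverse-frequency class weights
                           max_iter=1000),
    )
    model.fit(X, y)
    return model
```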
Step 4: Cross-validation
5-fold cross-validation tests prediction accuracy. If fewer than 50 signals, full dataset validation used instead.
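Continuing the scikit-learn sketch above, the fallback could be implemented like this (the 50-signal threshold comes from this page; the rest is illustrative):

```python
from sklearn.model_selection import cross_val_score

def estimate_accuracy(model, X, y) -> float:
    if len(y) >= 50:
        # Enough signals for a stable 5-fold split.
        return float(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
    # Fewer than 50 signals: validate on the full dataset instead.
    model.fit(X, y)
    return float(model.score(X, y))
```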
Step 5: Weight update
New feature weights saved to database. Model version incremented. Accuracy score calculated.
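As an illustration only (the real storage schema isn't documented here), the version bump and weight save could look like this:

```python
# Hypothetical sketch: persist new weights and increment the model version.
# Table and column names are made up for illustration.
import json, sqlite3

def save_weights(conn: sqlite3.Connection, tenant_id: str,
                 weights: list[float], accuracy: float) -> int:
    row = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM preference_models WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchone()
    new_version = row[0] + 1
    conn.execute(
        "INSERT INTO preference_models (tenant_id, version, weights, accuracy) "
        "VALUES (?, ?, ?, ?)",
        (tenant_id, new_version, json.dumps(weights), accuracy),
    )
    conn.commit()
    return new_version
```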
Step 6: Logging
Training log created with before/after accuracy, signal count, and weight changes.
20 decisions required before first training occurs.
Until you reach 20 signals, recommendations use default scoring (opportunity metrics only). Once you cross the threshold, nightly training activates.
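Conceptually, the threshold works like the sketch below (function and field names are illustrative, not the actual API):

```python
MIN_SIGNALS_FOR_TRAINING = 20  # threshold described on this page

def score_brief(brief: dict, signal_count: int, model=None) -> float:
    """Return the score used to rank a brief."""
    if signal_count < MIN_SIGNALS_FOR_TRAINING or model is None:
        # Not enough decision history yet: default scoring, opportunity metrics only.
        return float(brief["opportunity_score"])
    # Trained model available: probability of approval, scaled to a 0-100 preference score.
    return float(model.predict_proba([brief["features"]])[0][1] * 100)
```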
What counts:
What doesn't count:
A separate daily task (4:00 AM) detects briefs that have been ignored for 72+ hours.
If a brief remains in "complete" status (analyzed but not acted on) for 3+ days, the system creates a neutral preference signal (outcome = 0.5). This teaches the model you're not interested in similar topics.
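A rough sketch of what this check might look like; the table and column names are illustrative, and only the 72-hour window and the 0.5 outcome come from this page:

```python
# Hypothetical sketch of the stale-brief check.
import sqlite3
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=72)

def record_stale_brief_signals(conn: sqlite3.Connection) -> int:
    cutoff = (datetime.now(timezone.utc) - STALE_AFTER).isoformat()
    stale = conn.execute(
        "SELECT id, tenant_id FROM briefs "
        "WHERE status = 'complete' AND completed_at <= ? AND acted_on = 0",
        (cutoff,),
    ).fetchall()
    for brief_id, tenant_id in stale:
        conn.execute(
            "INSERT INTO preference_signals (tenant_id, brief_id, signal_type, outcome) "
            "VALUES (?, ?, 'brief_ignored', 0.5)",  # neutral preference signal
            (tenant_id, brief_id),
        )
    conn.commit()
    return len(stale)  # number of neutral signals created
```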
Detection criteria:
Effect on recommendations: Future briefs with similar characteristics receive lower preference scores, so the system surfaces less of that kind of content.
A weekly task (schedule varies) fetches real-world performance data for published content and updates preference signals.
Data sources:
Process:
actual_traffic, actual_conversions, performance_score
Impact on training: Next training run incorporates performance outcomes. If high-preference predictions led to high-performing content, those feature weights strengthen. If low-preference content outperformed expectations, weights adjust.
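How the performance data feeds back into a signal isn't spelled out here; one plausible sketch, assuming a 0-100 performance_score and a simple blend of decision and outcome, is:

```python
# Illustrative only: fold real-world performance back into a preference signal.
def apply_performance_feedback(signal: dict, performance: dict) -> dict:
    signal["actual_traffic"] = performance["actual_traffic"]
    signal["actual_conversions"] = performance["actual_conversions"]
    signal["performance_score"] = performance["performance_score"]
    # Blend the original decision outcome with observed performance so that
    # over- or under-performing content nudges the weights at the next run.
    observed = performance["performance_score"] / 100.0   # assumed 0-100 scale
    signal["outcome"] = 0.5 * signal["outcome"] + 0.5 * observed
    return signal
```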
This closes the loop: prediction → decision → outcome → better predictions.
Navigate to Settings → Preference Learning to view:
Each training run creates a log entry with:
Accessing logs: Admin users can view training history under Settings → Preference Learning → Training Logs. Filter by date or status.
Common failure reasons:
Insufficient signals
Training skipped if fewer than 20 signals exist. Log status = "skipped". System retries next night automatically.
All same outcome
If all signals have the same outcome (all approvals or all rejections), the model can't learn differences. Training fails. Add more varied decisions.
Feature extraction errors
If a signal's features are malformed or missing, that signal is skipped. Training continues with the remaining valid signals.
System error
In rare cases, training may fail due to an internal system error. This is automatically retried the following night.
Preference scores appear on:
Score interpretation:
Preference scores appear alongside opportunity scores. A brief might have high opportunity score (85) but low preference score (35) if it doesn't match your historical decisions.
Typical progression:
Week 1 (0-20 signals): No training yet. Default scoring only.
Weeks 2-3 (20-50 signals): First training occurs. Accuracy starts at 55-60%. Recommendations begin reflecting your choices but still feel generic.
Month 2 (50-100 signals): Accuracy climbs to 65-70%. System surfaces topics aligned with past approvals. Fewer irrelevant briefs.
Month 3+ (100+ signals): Accuracy reaches 70-75%+. System anticipates approvals with high confidence. Weekly Planner feels curated for your business.
Accuracy plateau: Most tenants plateau around 75-80% accuracy. This is expected. Human decisions aren't 100% predictable. If accuracy declines over time, consider resetting the model.
You can trigger training manually without waiting for the nightly run.
Navigate to Settings → Preference Learning → Train Now to start an immediate training cycle.
When to use:
Manual training uses the same logic as the nightly run and respects the 20-signal minimum.
If you have team members with assigned roles, training incorporates role-specific adjustments.
Role adjustments:
Same features, different weights per role. An editor viewing opportunities sees preference scores adjusted for editorial concerns. A manager viewing the same list sees scores adjusted for strategic impact.
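As a purely illustrative sketch (the actual roles and adjustment values aren't documented here), the per-role adjustment could be modeled as a multiplier applied to the shared weights before scoring:

```python
# Hypothetical per-role multipliers over the same 8 base feature weights.
ROLE_WEIGHT_MULTIPLIERS = {
    "editor":  [1.2, 1.0, 0.8, 1.0, 1.1, 1.0, 0.9, 1.0],  # editorial concerns
    "manager": [0.9, 1.1, 1.2, 1.0, 0.8, 1.0, 1.1, 1.0],  # strategic impact
}

def role_adjusted_score(features: list[float], base_weights: list[float], role: str) -> float:
    mult = ROLE_WEIGHT_MULTIPLIERS.get(role, [1.0] * len(base_weights))
    return sum(f * w * m for f, w, m in zip(features, base_weights, mult))
```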
The role is captured at the time of the decision. One user can have different roles over time.
Navigate to Settings → Preference Learning → Disable.
Effect:
When to disable:
Re-enable anytime. Next nightly run retrains with all historical signals.
Navigate to Settings → Preference Learning → Reset Model.
What happens:
When to reset:
Resource usage: Minimal. Training takes 5-30 seconds per account. Runs during off-peak hours (3:15 AM).
Processing: Accounts are trained sequentially (one at a time). No impact on your daytime usage.
Algorithm: Machine learning classification trained on your decision history
Cross-validation: 5-fold (or full dataset if fewer than 50 signals)
Signal retention: Last 1,000 decisions per training run
Schedule: Daily at 3:15 AM (server time)
Red flags:
Training never runs
Contact support if training hasn't run for 48+ hours. The automated schedule may need attention.
Training always skipped
Signal count below 20. Make more explicit decisions (approve/reject briefs).
Accuracy stuck at 50%
The model isn't learning. Check whether all signals have the same outcome, and add more varied decisions.
Accuracy declining over time
Your strategy has shifted but the model is still learning from old patterns. Reset the model and retrain.
Training fails consistently
Contact support if training fails multiple nights in a row. The team can investigate and resolve the issue.
Does training consume API credits? No. Training uses only local computation and database queries. No external API calls.
Can I see what the model learned? Yes. View feature weights in Settings → Preference Learning. Positive weights = you prefer higher values. Negative weights = you prefer lower values.
Does deleting a brief remove it from training? No. Signals persist even after deleting associated content. This preserves learning.
What if I approve something by mistake? One wrong decision won't break the model. Training uses up to 1,000 signals, so outliers get averaged out.
Does preference learning work for multi-brand accounts? One model per tenant. If brands serve different audiences, consider separate ILLIXIS accounts.
Does training cost extra? No. Included in all plans. Runs automatically.
Questions? Email support@illixis.io or ask Maya (bottom-right chat icon).