Every time you interact with content or recommendations, ILLIXIS captures what you chose and why the system suggested it. This creates a dataset specific to your business.
Explicit signals - Direct actions that clearly indicate preference:
Implicit signals - Inaction that reveals preference:
Performance signals - Real-world outcomes that validate decisions:
Every action you take gets converted into features the ML model can understand. The system extracts 8 dimensions from each recommendation:
When you approve a brief with high keyword difficulty but low trend velocity, the model learns you prefer competitive, evergreen topics over trending but fleeting ones. When you reject low-volume opportunities repeatedly, the model learns your traffic threshold.
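The learning described above can be sketched as a simple online classifier. This is an illustration only: the real feature names and model internals are not documented here, so the three dimensions and the logistic-style update below are assumptions, not ILLIXIS's actual implementation.

```python
# Illustrative sketch only: dimension names and the learner are hypothetical.
from dataclasses import dataclass
import math

@dataclass
class Recommendation:
    keyword_difficulty: float   # 0-1 (assumed dimension)
    trend_velocity: float       # 0-1 (assumed dimension)
    search_volume: float        # 0-1, normalized (assumed dimension)

def features(rec: Recommendation) -> list[float]:
    return [rec.keyword_difficulty, rec.trend_velocity, rec.search_volume]

def predict_approval(weights: list[float], rec: Recommendation) -> float:
    """Probability the model assigns to you approving this recommendation."""
    z = sum(w * x for w, x in zip(weights, features(rec)))
    return 1 / (1 + math.exp(-z))

def update(weights: list[float], rec: Recommendation,
           approved: bool, lr: float = 0.1) -> list[float]:
    """One gradient step: an approval pulls weights toward the brief's features,
    a rejection pushes them away."""
    err = (1.0 if approved else 0.0) - predict_approval(weights, rec)
    return [w + lr * err * x for w, x in zip(weights, features(rec))]
```

Repeatedly approving high-difficulty, low-velocity briefs would, in this sketch, grow the `keyword_difficulty` weight relative to `trend_velocity`, which is the behavior the paragraph above describes.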
Minimum signal requirement: 30 decisions
Until you've made 30 approval/rejection decisions, the system uses default scoring. Once you cross that threshold, the ML model trains for the first time.
What counts toward the 30:
What doesn't count:
ILLIXIS runs several automated processes to keep your preference model accurate and up-to-date:
| Process | Frequency | Timing |
|---------|-----------|--------|
| Preference model retraining | Nightly | 2:00 AM UTC |
| Signal performance evaluation | Real-time | After each user decision |
| Model accuracy metrics update | Weekly | Sundays |
| Preference drift detection | Monthly | 1st of each month |
Minimum activation threshold: 30 decisions required before the model activates. Until you reach this threshold, the system uses default scoring. This ensures the model has enough data to make meaningful predictions rather than overfitting to a handful of early decisions.
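The activation gate above amounts to a single comparison. A minimal sketch (the constant and function names are assumed for illustration):

```python
# Sketch of the 30-decision activation threshold described above.
MIN_DECISIONS = 30

def choose_scoring_mode(decision_count: int) -> str:
    """Use default opportunity scoring until enough decisions exist to train."""
    return "preference_model" if decision_count >= MIN_DECISIONS else "default"
```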
Every night at 2:00 AM UTC, ILLIXIS retrains your preference model using all signals collected since the last training run.
What happens during training:
Training logs are saved so you can see accuracy improving over time. The system tracks accuracy change from one training run to the next.
Once your model is active, every brief and opportunity gets two scores:
Both scores appear in the interface. A brief might have a high opportunity score (85) but a low preference score (35) if it doesn't match your historical decisions. You'll see this brief ranked lower than one with an 80 opportunity score but a 90 preference score.
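The ranking behavior in that example can be reproduced with a simple blend of the two scores. ILLIXIS's actual weighting is internal; a plain average is assumed here purely for illustration:

```python
def combined_rank_key(opportunity: float, preference: float) -> float:
    """Hypothetical blend; the real weighting is internal to ILLIXIS."""
    return (opportunity + preference) / 2

briefs = [
    {"title": "Brief A", "opportunity": 85, "preference": 35},
    {"title": "Brief B", "opportunity": 80, "preference": 90},
]
ranked = sorted(
    briefs,
    key=lambda b: combined_rank_key(b["opportunity"], b["preference"]),
    reverse=True,
)
# Brief B (80/90) ranks above Brief A (85/35), matching the example above.
```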
Where preference scores appear:
Phase 38 added role-specific preference adjustments. If you have team members with different roles, the system learns their individual preferences.
Role adjustments:
When an editor views opportunities, their preference scores reflect editorial concerns. When a manager views the same list, scores adjust for strategic impact. Same data, different lens.
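One way to picture the "same data, different lens" behavior is a per-role multiplier applied to component scores at view time. The multipliers and component names below are invented for illustration; the actual adjustments are not documented here:

```python
# Hypothetical role lens: same underlying components, role-specific weights.
ROLE_ADJUSTMENTS = {
    "editor":  {"editorial_fit": 1.2, "strategic_impact": 1.0},
    "manager": {"editorial_fit": 1.0, "strategic_impact": 1.2},
}

def preference_score_for(role: str, components: dict) -> float:
    """Apply the viewer's role lens to per-dimension component scores."""
    adj = ROLE_ADJUSTMENTS.get(role, {})
    return sum(value * adj.get(name, 1.0) for name, value in components.items())
```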
ILLIXIS detects when you consistently ignore certain types of briefs and automatically adjusts recommendations.
How it works:
72-hour threshold: Briefs ignored for 72+ hours are considered "not interested" signals. This is long enough to avoid penalizing briefs you simply haven't reviewed yet, but short enough to adapt quickly to your preferences.
What qualifies as "ignored":
Pattern detection thresholds:
Viewing detected patterns: Navigate to Settings > Preference Learning > Ignored Patterns to see:
Resetting ignored patterns: If your strategy changes and you want briefs the system stopped generating:
Beyond ML, ILLIXIS tracks rule-based rejection patterns. If you consistently reject opportunities with specific characteristics, the system applies automatic penalties.
Detected patterns:
How penalties work: When a pattern is detected with 70%+ confidence (at least 10 rejections), the system applies a 15% down-weight to similar future opportunities. A brief that would score 80 drops to 68.
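The penalty arithmetic above can be sketched directly (constant names are assumed; the thresholds and 15% figure come from the text):

```python
# Sketch of the rejection-pattern penalty: 70%+ confidence and 10+ rejections
# trigger a 15% down-weight, so a score of 80 drops to 68.
CONFIDENCE_THRESHOLD = 0.70
MIN_REJECTIONS = 10
PENALTY = 0.15

def apply_rejection_penalty(score: float, pattern_confidence: float,
                            rejection_count: int) -> float:
    """Down-weight briefs matching a confidently detected rejection pattern."""
    if pattern_confidence >= CONFIDENCE_THRESHOLD and rejection_count >= MIN_REJECTIONS:
        return score * (1 - PENALTY)
    return score
```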
You can view active rejection patterns in your account settings under Preference Learning.
Seven days after content publishes, ILLIXIS fetches real-world performance metrics and updates the associated preference signal.
Data sources:
How feedback improves predictions: If the model predicted you'd approve a brief (high preference score), you approved it, and the resulting content performed well (high traffic), that reinforces the model's feature weights. If you approved something the model doubted but the content performed poorly, that approval carries less weight in future training.
This closes the loop: recommendation → decision → outcome → better recommendations.
Navigate to Settings → Preference Learning to see:
Interpreting accuracy:
Most tenants see 65-70% accuracy within the first month, improving to 75%+ by month three.
Weeks 1-2 (0-30 decisions): No personalization yet. You'll see default scoring based purely on opportunity metrics. Make deliberate approve/reject decisions to build your dataset.
Weeks 2-4 (30-50 decisions): First model training occurs. Accuracy starts around 55-60%. Recommendations begin reflecting your choices but still feel generic.
Month 2 (50-100 decisions): Accuracy climbs to 65-70%. You'll notice the system surfaces topics aligned with your past approvals. Fewer irrelevant briefs.
Month 3+ (100+ decisions): Accuracy reaches 70-75%+. The system anticipates what you'll approve with high confidence. Weekly Planner feels curated specifically for your business.
1. **Make explicit decisions.** Don't ignore briefs. Click "Approve" or "Reject" so the system knows your intent. Every ignored brief is a missed training opportunity.
2. **Add rejection reasons.** When rejecting a brief or opportunity, use the optional reason field. While the model doesn't parse text, it helps you remember why you rejected similar topics later.
3. **Trust the first 30 signals.** Your first 30 decisions set the foundation. If you approve most trend-based briefs and few keyword briefs, the model will keep favoring trends until later signals outweigh those early ones or you reset the model. Make those initial choices representative of your actual strategy.
4. **Review top preferences monthly.** Check which features the model weighs most heavily. If "trend velocity" is your top feature but you've shifted to evergreen content, reset the model and retrain with new signals.
5. **Use role assignments.** If team members have different priorities, assign roles (Editor, Manager, Writer, Approver). The system will personalize for each individual.
If your content strategy changes significantly, you can reset preference learning and start fresh.
Navigate to Settings → Preference Learning → Reset Model. This action:
Use this if:
Does deleting a brief remove the signal? No. Signals persist even if you delete the associated brief or content. This preserves learning even when cleaning up your account.
Can I disable preference learning? Yes. Navigate to Settings → Preference Learning → Disable. The system will stop scoring with preferences and revert to default opportunity scoring. Historical signals remain for future reactivation.
Does the model learn from content I wrote manually? Only if you publish it through ILLIXIS. The system tracks what you publish via CMS connectors, not content created outside the platform.
What if I have multiple websites? Each tenant (account) has one preference model. If your websites serve similar audiences, one model learns from all decisions. If audiences differ significantly, consider separate ILLIXIS accounts.
Does preference learning cost extra? No. It's included in all plans. The nightly training runs automatically.
Questions? Email support@illixis.io or ask Maya (bottom-right chat icon).