What Gets Tracked

Every time you interact with a quality issue, ILLIXIS records the action and aggregates it into learning patterns.

Issue Actions

Fix - You agree it's a problem and correct it

  • Auto-fix accepted (AI suggestion approved)
  • Manual edit (you wrote the fix yourself)

Ignore - You disagree or don't care about this issue

  • Remains in content unchanged
  • Signals "not a real problem for my brand"

Dismiss - You never want to see this issue type again

  • Removes from UI immediately
  • Stronger signal than ignore (applies to all future content)

Each action is recorded with before/after text, decision time, and context. These resolutions aggregate into trend statistics per issue type.

Learning Stages

The system tracks patterns for each issue type (48 total subcategories) and moves through five learning stages.

Stage 1: Collecting Data

Status: collecting

What happens: System detects issues but doesn't act on patterns yet. Needs 10 samples before making decisions.

How to exit: Fix or ignore the same issue type 10 times.

Stage 2: Ready to Prevent

Status: ready_to_prevent

Trigger: 10+ fixes with 70%+ fix rate

What happens: System asks if you want to enable prevention for this issue. Appears in Quality Insights dashboard under "Ready to Learn."

User decision required: Click "Enable Prevention" or "Not Yet"

Example: You've fixed "rhetorical questions" in 10 of 12 articles (83% fix rate). System asks: "Enable prevention for rhetorical questions?"

Stage 3: Prevention Active

Status: preventing

What happens:

  • Prevention instruction added to content generation prompts
  • Future articles generated with explicit avoidance of this issue
  • Detection still runs (to verify prevention works)

How to activate: Confirm prevention from Quality Insights dashboard

How to disable: Toggle off in Active Learnings section

Stage 4: Ready to Suppress

Status: ready_to_suppress

Trigger: 10+ ignores with 80%+ ignore rate

What happens: System asks if you want to stop detecting this issue. Appears in Quality Insights dashboard under "Ready to Learn."

User decision required: Click "Suppress Detection" or "Not Yet"

Example: You've ignored "passive voice" warnings in 10 of 12 articles (83% ignore rate). System asks: "Stop detecting passive voice?"

Stage 5: Detection Suppressed

Status: suppressed

What happens:

  • Quality assessment skips this issue type entirely
  • Won't appear in issue lists or grade calculations
  • Saves detection processing time

How to activate: Confirm suppression from Quality Insights dashboard

How to re-enable: Toggle on in Active Learnings section

Auto-Learning Thresholds

System applies learning without user confirmation if patterns are overwhelming:

Auto-Prevention: 25+ fixes with 90%+ fix rate

  • Status: auto_preventing
  • No confirmation needed
  • Adds to prompts immediately

Auto-Suppression: 25+ ignores with 95%+ ignore rate

  • Status: auto_suppressed
  • No confirmation needed
  • Stops detection immediately
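The stage triggers and auto-learning thresholds above can be collapsed into a single decision function (illustrative Python; `learning_status` is a hypothetical name, and the real engine also tracks user confirmations and dismiss actions, which this sketch omits):

```python
def learning_status(fixes: int, ignores: int) -> str:
    """Map 30-day counts for one issue type to a learning status.

    Sketch of the documented thresholds only; confirmation state
    and dismiss handling are omitted.
    """
    total = fixes + ignores
    if total == 0:
        return "collecting"
    fix_rate = fixes / total
    ignore_rate = ignores / total
    # Overwhelming patterns bypass user confirmation entirely
    if fixes >= 25 and fix_rate >= 0.90:
        return "auto_preventing"
    if ignores >= 25 and ignore_rate >= 0.95:
        return "auto_suppressed"
    # Confirmation-gated thresholds
    if fixes >= 10 and fix_rate >= 0.70:
        return "ready_to_prevent"
    if ignores >= 10 and ignore_rate >= 0.80:
        return "ready_to_suppress"
    return "collecting"
```

For example, 12 fixes against 3 ignores (80% fix rate) would land in `ready_to_prevent`, while 12 fixes against 8 ignores (60%) stays in `collecting`.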

Checking Learning Status

Navigate to Content Hub → Quality Insights to see:

Stats Dashboard

Four metric cards show system-wide learning:

Issues (Last 30 Days) - Total issues detected across all content

Fix Rate - Percentage of resolved issues you fixed rather than ignored

Active Preventions - Number of issue types being prevented in prompts

Suppressed - Number of issue types no longer detected

Ready to Learn Section

Shows issue types awaiting your confirmation:

Prevention candidates:

  • Issue name
  • Description of the problem
  • Times fixed
  • Fix rate percentage
  • "Enable Prevention" button

Suppression candidates:

  • Issue name
  • Times ignored
  • Ignore rate percentage
  • "Suppress Detection" button

Click the button to activate learning, or "Not Yet" to keep collecting data.

Active Learnings Section

Shows currently active prevention and suppression rules with toggle switches:

Prevention rules:

  • Issue type name
  • Status: "Preventing"
  • Toggle to disable

Suppression rules:

  • Issue type name
  • Status: "Suppressed"
  • Toggle to re-enable detection

Toggle off any rule to stop applying it. The system reverts that issue type to collecting status and waits for a new pattern to emerge.

Charts

Issues Over Time - Line chart showing issues per article over 30 days. Should trend downward as learning improves.

Quality Grade Trends - Average letter grade over time (weekly or monthly view). Should trend upward.

How Prevention Works

When you enable prevention for an issue type, the system adds an instruction to content generation prompts.

Prevention Instructions

Each issue type has a default prevention instruction. Examples:

Rhetorical Questions:

"Avoid rhetorical questions. Make direct statements instead."

AI Isms:

"Don't use phrases like 'in the ever-evolving landscape' or 'it's important to note.' Write naturally."

Weak Openings:

"Start with a specific insight, statistic, or scenario. No generic introductions."

Where Instructions Appear

Prevention instructions are injected into content generation prompts under a section header:

```

LEARNED QUALITY REQUIREMENTS

Based on past content quality patterns, AVOID these issues:

  • Rhetorical Questions: Avoid rhetorical questions. Make direct statements instead.
  • Ai Isms: Don't use phrases like 'in the ever-evolving landscape.'
```

This appears in:

  • Keyword brief content generation
  • Trend brief content generation
  • Custom content generation
  • Content extension generation
  • Bilingual content generation
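Assembling that injected section could look roughly like this (a sketch; `build_learned_requirements` is a hypothetical helper, and the exact wording ILLIXIS emits may differ):

```python
def build_learned_requirements(preventions: dict[str, str]) -> str:
    """Render the LEARNED QUALITY REQUIREMENTS prompt section from
    active prevention rules: {issue name -> prevention instruction}.

    Illustrative sketch only.
    """
    if not preventions:
        return ""  # no active preventions, nothing to inject
    lines = [
        "LEARNED QUALITY REQUIREMENTS",
        "",
        "Based on past content quality patterns, AVOID these issues:",
        "",
    ]
    for issue, instruction in preventions.items():
        lines.append(f"  - {issue}: {instruction}")
    return "\n".join(lines)

section = build_learned_requirements({
    "Rhetorical Questions": "Avoid rhetorical questions. "
                            "Make direct statements instead.",
})
```

The returned string would be appended to each of the generation prompts listed above.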

Verification

After enabling prevention, generate new content and check if the issue appears. If detection still finds the issue:

  1. The prevention instruction may need more time to take effect
  2. Issue might be hard to prevent (consider suppression instead)
  3. Try generating a few more pieces of content to confirm

How Suppression Works

When you enable suppression for an issue type, the system skips detection entirely during quality assessment.

Suppression Flow

  1. Content gets graded
  2. Quality service calls LearningEngine.get_suppressed_issues()
  3. Returns set of subcategories to skip
  4. Detection loops check: if issue_type in suppressed: continue
  5. Issue never appears in results
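The flow above amounts to one filter in the detection loop. A minimal sketch (the detector shapes and the `detect_issues` signature are assumptions for illustration, not the production code):

```python
def detect_issues(content: str, detectors: dict, suppressed: set[str]) -> list:
    """Run all detectors over content, skipping suppressed subcategories.

    Illustrative sketch of the documented suppression check.
    """
    results = []
    for issue_type, detector in detectors.items():
        if issue_type in suppressed:
            continue  # suppressed types never reach the results
        results.extend(detector(content))
    return results

# Toy detectors standing in for the real quality checks
detectors = {
    "passive_voice": lambda t: ["passive voice found"] if " was " in t else [],
    "rhetorical_questions": lambda t: ["question found"] if "?" in t else [],
}

issues = detect_issues(
    "The report was written. Why?",
    detectors,
    suppressed={"passive_voice"},
)
```

Here the passive-voice hit is skipped entirely, so only the rhetorical-question result survives.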

What Gets Suppressed

Suppression applies to:

  • Quality assessment during content generation
  • Manual "Grade Content" operations
  • Bulk grading operations
  • Issue detection API calls

Suppression does NOT affect:

  • Existing issues already detected (remain visible)
  • Content created before suppression was enabled
  • Other issue types (only suppresses specified subcategory)

When to Use Suppression

Suppress an issue type when:

  • It's not relevant to your brand voice (passive voice okay for academic tone)
  • Detection produces false positives consistently
  • You intentionally violate the "rule" (rhetorical questions in specific content types)
  • The issue type doesn't match your quality standards

Don't suppress if:

  • You sometimes care about the issue (use ignore on case-by-case basis instead)
  • Issue is valid but hard to fix (prevention is better)
  • You're unsure (keep collecting data)

30-Day Rolling Window

All statistics use a 30-day rolling window. Resolutions older than 30 days don't count toward learning decisions.

Nightly Recalculation

An automated task runs at 3:00 AM daily:

  1. Fetches all resolution records from the last 30 days
  2. Groups by category and subcategory
  3. Recalculates counts: times detected, times fixed, times ignored
  4. Recalculates rates: fix rate, ignore rate
  5. Updates learning status based on thresholds
  6. Saves updated trend records
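The aggregation in steps 1-4 can be sketched as a single pass over the recent resolution records (illustrative Python; the record fields and the `recalculate_trends` helper are assumptions, not the production task):

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def recalculate_trends(resolutions, now=None):
    """Aggregate the last 30 days of resolutions into per-subcategory
    counts and rates. Each resolution is a dict; keys are illustrative.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=30)
    trends = defaultdict(lambda: {"fixed": 0, "ignored": 0})
    for r in resolutions:
        if r["recorded_at"] < cutoff:
            continue  # outside the rolling window, doesn't count
        key = (r["category"], r["subcategory"])
        if r["action"] == "fix":
            trends[key]["fixed"] += 1
        elif r["action"] == "ignore":
            trends[key]["ignored"] += 1
    for t in trends.values():
        total = t["fixed"] + t["ignored"]
        t["fix_rate"] = t["fixed"] / total if total else 0.0
        t["ignore_rate"] = t["ignored"] / total if total else 0.0
    return dict(trends)

now = datetime(2025, 6, 30, tzinfo=timezone.utc)
resolutions = [
    {"category": "style", "subcategory": "rhetorical_questions",
     "action": "fix", "recorded_at": now - timedelta(days=2)},
    {"category": "style", "subcategory": "rhetorical_questions",
     "action": "fix", "recorded_at": now - timedelta(days=5)},
    {"category": "style", "subcategory": "rhetorical_questions",
     "action": "ignore", "recorded_at": now - timedelta(days=45)},  # too old
]
trends = recalculate_trends(resolutions, now=now)
```

Note how the 45-day-old ignore is dropped: only the two recent fixes count, giving a 100% fix rate.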

Why Rolling Window

Adapts to strategy changes: If you fixed "rhetorical questions" 30 times last year but now ignore them, the system forgets old fixes after 30 days and adjusts.

Prevents stale learning: Decisions from 6 months ago don't influence current recommendations.

No decay within the window: all actions inside the 30-day window count equally; an action either counts fully or, once it ages past 30 days, not at all.

Manual Recalculation

To force immediate recalculation without waiting for the nightly task, contact support.

Performance-Weighted Learning

Status: Implemented (December 2025)

System correlates issue resolutions with content performance using Google Analytics data.

How It Works

Seven days after content publishes:

  1. An automated task fetches GA4 metrics (pageviews, engagement)
  2. Calculates performance_weight (0.5 to 2.0 multiplier)
  3. Updates resolution record with weight
  4. Trend aggregation uses weighted counts

Example:

  • Article A: 10 issues fixed → performed well (1.5x weight) → counts as 15 weighted fixes
  • Article B: 5 issues ignored → performed poorly (0.5x weight) → counts as 2.5 weighted ignores

Performance Tiers

| Tier | Pageviews (30d) | Weight | Meaning |
|------|-----------------|--------|---------|
| Top 25% | 1,000+ | 2.0 | High performer |
| Top 50% | 500-999 | 1.5 | Above average |
| Average | 200-499 | 1.0 | Baseline |
| Below avg | 50-199 | 0.75 | Underperformer |
| Low | < 50 | 0.5 | Poor traffic |
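The tier table maps directly to a lookup function (a sketch; `performance_weight` is a hypothetical name, with the cutoffs taken from the table above):

```python
def performance_weight(pageviews_30d: int) -> float:
    """Map 30-day pageviews to a resolution weight per the tier table.

    Illustrative sketch of the documented tiers.
    """
    if pageviews_30d >= 1000:
        return 2.0   # Top 25%: high performer
    if pageviews_30d >= 500:
        return 1.5   # Top 50%: above average
    if pageviews_30d >= 200:
        return 1.0   # Average: baseline
    if pageviews_30d >= 50:
        return 0.75  # Below average: underperformer
    return 0.5       # Low: poor traffic
```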

Weighted Metrics

Quality Insights dashboard shows:

Weighted Fix Rate - Fixes weighted by performance vs total weighted samples

High Performer Fixes - Count of fixes on content in top 25% performance

Weighted vs Unweighted Comparison - Shows if high-performing content has different fix patterns

Why This Matters

Prevents learning from low-performing content. If you fix issues in 10 articles but most of those fixes come from low-traffic articles, the weighted fix rate will be lower than the unweighted rate. The system learns to trust patterns from successful content more than patterns from unsuccessful content.
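The weighted fix rate itself reduces to summing weights instead of counting resolutions (illustrative sketch; the `(action, weight)` tuple shape is an assumption):

```python
def weighted_fix_rate(resolutions: list[tuple[str, float]]) -> float:
    """Fix rate where each resolution contributes its performance
    weight instead of a flat count of 1. Illustrative sketch.
    """
    fixed = sum(w for action, w in resolutions if action == "fix")
    total = sum(w for _, w in resolutions)
    return fixed / total if total else 0.0

# A fix on a high performer (2.0) outweighs an ignore on a
# low performer (0.5): 2.0 / 2.5 = 0.8
rate = weighted_fix_rate([("fix", 2.0), ("ignore", 0.5)])
```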

Example Learning Scenarios

Scenario 1: Recurring AI Clichés

What happens:

  1. First 5 articles contain "in today's digital landscape"
  2. You fix it every time (5/5 = 100% fix rate)
  3. After 10 fixes, system prompts: "Enable prevention for AI isms?"
  4. You enable prevention
  5. Next 20 articles don't contain the phrase
  6. Detection still runs but finds no instances

Result: Issue prevented in prompts. Quality improves automatically.

Scenario 2: False Positive Passive Voice

What happens:

  1. System detects passive voice in academic content
  2. You ignore every warning (passive voice is intentional for your tone)
  3. After 10 ignores with 100% ignore rate, system prompts: "Suppress passive voice detection?"
  4. You confirm suppression
  5. Future quality assessments skip passive voice entirely

Result: Detection stops wasting processing time. Issue never appears again.

Scenario 3: Mixed Pattern

What happens:

  1. System detects rhetorical questions in 20 articles
  2. You fix 12, ignore 8 (60% fix rate)
  3. Doesn't hit threshold for prevention (needs 70%+)
  4. Status remains collecting
  5. Keep making decisions until pattern becomes clear

Result: No automatic action. System waits for stronger signal.

Scenario 4: Auto-Learning Without Confirmation

What happens:

  1. System detects em-dash misuse in 30 articles
  2. You fix all 30 (100% fix rate)
  3. Hits auto-prevention threshold (25+ fixes with 90%+ rate)
  4. System immediately adds prevention to prompts without asking
  5. Status: auto_preventing

Result: Overwhelming pattern bypasses confirmation. Immediate action.

Common Questions

Does learning apply retroactively to old content?
No. Learning only affects future content generation and detection. Existing content remains unchanged unless you regenerate or re-grade it.

Can I reset learning for one issue type?
Yes. Toggle off the prevention/suppression in Active Learnings section. System reverts to collecting status but preserves historical resolution data.

What if I accidentally enable prevention?
Toggle it off immediately in Active Learnings section. The prevention instruction will be removed from prompts for the next generation.

Does suppression affect content already published?
No. Suppression only skips detection in future quality assessments. Issues already detected remain visible.

Can I see the exact prevention instruction being used?
Yes. Navigate to Admin → Tenant Issue Trends → [Your Tenant] → [Issue Type]. The prevention_instruction field shows what's being added to prompts.

What if my strategy changes?
The 30-day rolling window adjusts automatically. Old decisions fade out after 30 days. Start making new decisions (fix → ignore, or vice versa) and the system will adapt.

Does learning work across tenants?
No. Each tenant has separate learning patterns. Your fixes don't affect other customers' learning.

Can I export learning data?
Not currently available. Contact support if you need learning data exported.

Performance Expectations

Week 1: Baseline quality. No learning active. Issues detected normally.

Week 2-3: First prevention candidates appear. Enable 2-3 high-confidence issues.

Month 1: 3-5 active preventions. Issue count drops 30-40%. Quality grades improve from B/C to B+/A-.

Month 2: 5-8 active preventions. Issue count drops 50-60%. Quality grades consistently A-/A.

Month 3+: 8-12 active preventions. Issue count drops 60%+. Quality grades consistently A/A+.

Total improvement: 60%+ fewer issues within two to three months for a typical tenant.

Related Features

Content Quality Grading - Issues detected during grading feed into learning system

Issue Registry - 48 issue subcategories with default prevention instructions

Content Generation - Prevention instructions injected into prompts automatically

Preference Learning - Separate system that learns content topic preferences, not quality patterns


Questions? Email support@illixis.io or ask Maya (bottom-right chat icon).
