Part 6 - The Learning System: How Dwij's Feedback Loop Adapts to Every User Action
A deep dive into Dwij's event-driven feedback loop, exploring the signals we capture and the mechanisms we use to continuously update our user models, making every recommendation smarter than the last.
A static intelligence is a contradiction in terms. Any system designated as "AI" must be capable of learning and evolving; otherwise, it's merely a complex set of fixed rules, destined for obsolescence the moment it's deployed. In the dynamic world of student preparation, where motivation, fatigue, and knowledge fluctuate daily, a static system is not just ineffective—it's counterproductive.
This is why we architected Dwij not just to make recommendations, but to learn from their outcomes. Every user interaction—a completed test, a skipped question, a drop in accuracy during a long session, even a moment of hesitation—is treated as a valuable signal. This stream of signals feeds our **Feedback Adaptation Loop**, a closed-circuit system that continuously refines our understanding of each learner. This post explores the architecture of that loop: the signals we capture, the models we update, and how this process transforms Dwij from a static tool into an evolving coaching partner.
Beyond Analytics: Feedback as an Engine for Change
Most platforms collect user data for analytics dashboards, offering rearview-mirror insights. Our philosophy is different: feedback is not for passive analysis; it's the fuel for real-time change. A student's journey is not a predictable, linear path. Confidence gained from mastering one topic can be shattered by a difficult new one. The goal of our feedback loop is to capture this dynamic state and immediately translate it into smarter future decisions.
The Principle: Every Action is a Signal
In our ecosystem, there is no wasted user action. A student skipping a Chemistry test for the third time is sending a signal as powerful as a student acing a Math quiz. The former might indicate a confidence crisis or topic aversion; the latter signals mastery and an opportunity for a new challenge. The Feedback Loop's primary function is to listen to both explicit signals (test scores) and implicit signals (behavioral patterns) and update the student's core `UserContext` object accordingly.
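To make this concrete, here is a minimal TypeScript sketch of what such a `UserContext` might contain. Only `performanceMap`, `fatigueScore`, `retention`, and `persona` are named in this series; every other field and type name is an illustrative assumption, not Dwij's actual schema.

```ts
// Hypothetical shape of the per-user state the feedback loop maintains.
// Field names beyond those discussed in this series are illustrative.
interface TopicPerformance {
  accuracy: number;        // rolling accuracy in [0, 1]
  attempts: number;        // total attempts on this topic
  lastAttemptedAt: number; // epoch millis of the most recent attempt
  retention: number;       // estimated recall strength in [0, 1]
}

interface UserContext {
  userId: string;
  performanceMap: Record<string, TopicPerformance>; // keyed by topicId
  fatigueScore: number;                             // 0 = fresh, 1 = exhausted
  persona: string;                                  // e.g. "Avoider" or "Grinder"
  streaks: { consecutiveDays: number };             // pattern-level signals
}
```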
[The feedback loop makes the final decision from our optimizer even smarter. Read about it here: "Strategic Selection: A Deep Dive into Dwij's Multi-Objective Optimizer"]
The Anatomy of a Signal: What We Capture and Why
Our Feedback Adapter microservice listens for dozens of event types, which we group into four high-impact categories. Each category provides a different lens through which to understand the user's current state.
1. Performance Events (The "What")
These are the most direct signals of academic progress. They include question-level correctness, time taken per question, and the overall score. We also track the delta between first-attempt accuracy and re-attempt accuracy, which provides a strong signal about learning effectiveness.
2. Engagement Events (The "How")
This category captures user intent and motivation. A skipped test, an early exit from a quiz, or repeatedly retrying a test in a short window are powerful behavioral indicators. They help us understand if a student is feeling overwhelmed, frustrated, or determined.
3. Fatigue & Effort Signals (The "When" and "How Much")
We continuously monitor for signs of cognitive load. A noticeable drop in accuracy after 40 minutes of study, a session length that is significantly longer than the user's historical average, or a sudden increase in the time taken to answer simple questions are all signals that feed our `fatigueScore`.
4. Streak & Pattern Models (The "Tendencies")
This layer looks for higher-level patterns over time. This includes positive signals like consecutive day completion streaks, and negative signals like a multi-day avoidance pattern for a specific topic. We also model time-of-day profiles to see if a student is consistently less accurate during late-night sessions.
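To tie the four categories together, here is one way such events might be modeled on the wire, as a TypeScript sketch; the event names and payload fields are assumptions for illustration, not Dwij's actual event schema.

```ts
// Illustrative discriminated union over the four signal categories.
type FeedbackEvent =
  | { kind: "performance"; topicId: string; correct: boolean;
      timeTakenMs: number; isReattempt: boolean }
  | { kind: "engagement"; topicId: string;
      action: "skip" | "early_exit" | "retry" }
  | { kind: "fatigue"; sessionMinutes: number;
      accuracyDelta: number }  // e.g. -0.30 for a 30-point drop
  | { kind: "pattern"; streakDays: number;
      avoidedTopicIds: string[]; hourOfDay: number };

// A single handler can then branch on `kind` with full type safety.
function describe(e: FeedbackEvent): string {
  switch (e.kind) {
    case "performance": return `answered ${e.topicId}, correct=${e.correct}`;
    case "engagement":  return `${e.action} on ${e.topicId}`;
    case "fatigue":     return `accuracy moved ${e.accuracyDelta} this session`;
    case "pattern":     return `${e.streakDays}-day streak`;
  }
}
```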
The Update Mechanism: Translating Signals into Intelligence
Capturing signals is only the first step. The magic happens when these signals trigger real-time updates to the core `UserContext` object, recalibrating our understanding of the student.
```ts
// Example: updating the Performance Map after a test
const topic = performanceMap[topicId];
// Blend the newest result into the rolling average instead of overwriting it
topic.accuracy = rollingAvg(topic.accuracy, latestAccuracy);
topic.attempts += 1;
topic.lastAttemptedAt = Date.now();
```
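A common choice for `rollingAvg` is an exponentially weighted moving average, which keeps each update O(1) and lets recent tests outweigh older ones. A minimal sketch, with an assumed (not tuned) smoothing factor:

```ts
// Exponentially weighted moving average: recent tests count more than old ones.
// ALPHA is an illustrative smoothing factor, not a tuned production value.
const ALPHA = 0.3;

function rollingAvg(previous: number, latest: number): number {
  return (1 - ALPHA) * previous + ALPHA * latest;
}
```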
Beyond simple performance updates, the feedback loop continuously tunes the more nuanced models:
- The Retention Model: When a user correctly answers a question on a topic, the decay of that topic's "forgetting curve" is slowed. Conversely, long periods of inactivity accelerate the decay, eroding the `retention` score and making the topic more likely to surface in a future revision quiz (a minimal sketch of this update follows this list).
- The Fatigue Score: A long, difficult session will immediately increase the user's `fatigueScore`. This has a direct, instantaneous impact on the next recommendation cycle, causing the RCP Generator to filter out high-effort tests and the Scoring Engine to apply a heavy `fatiguePenalty`.
- The Persona Tag: A user who consistently attempts challenging tests and works through their weak areas might see their `persona` tag evolve from 'Avoider' to 'Grinder'. This change adjusts the weights in the Multi-Objective Optimizer (MOO), leading to a different set of recommendations better suited to their new study habits.
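As promised above, here is a minimal sketch of such a forgetting-curve update, assuming a simple exponential decay whose half-life stretches with each successful recall; the constants and function names are illustrative, not Dwij's production model.

```ts
// Exponential forgetting curve: retention decays toward 0 with inactivity,
// and each correct answer stretches the half-life, slowing future decay.
// All constants here are illustrative assumptions.
const BASE_HALF_LIFE_DAYS = 3;

function decayedRetention(retention: number, daysIdle: number, halfLifeDays: number): number {
  return retention * Math.pow(0.5, daysIdle / halfLifeDays);
}

function onCorrectAnswer(halfLifeDays: number): number {
  // Successful recall makes the memory more durable: future decay slows.
  return halfLifeDays * 1.5;
}

// Example: 10 idle days erode retention; a correct answer then firms it up.
let halfLife = BASE_HALF_LIFE_DAYS;
let retention = decayedRetention(1.0, 10, halfLife); // ≈ 0.099
halfLife = onCorrectAnswer(halfLife);                // next decay is slower
```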
Architecture & Real-World Impact: The Case of "Raj"
Our feedback architecture is built for scale and real-time responsiveness. A stateless **Feedback Adapter** microservice ingests user events and passes them to a **Partial Update Engine**, which efficiently modifies only the relevant fields in the User Context. This update triggers a **Cache Invalidator** for that user's profile in our Redis layer, ensuring the next recommendation request fetches the very latest data.
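Sketched in TypeScript, and reusing the `UserContext` and `FeedbackEvent` types from the earlier sketches, the ingest path might look like the following; the interfaces, method names, and Redis key format are assumptions for illustration, not the service's real API.

```ts
// Illustrative wiring of the three components described above.
interface Cache {
  del(key: string): Promise<void>; // minimal stand-in for a Redis client
}

interface PartialUpdateEngine {
  // Applies only the fields touched by this event to the stored UserContext.
  apply(userId: string, patch: Partial<UserContext>): Promise<void>;
}

class FeedbackAdapter {
  constructor(private updates: PartialUpdateEngine, private cache: Cache) {}

  async ingest(userId: string, event: FeedbackEvent): Promise<void> {
    const patch = this.toPatch(event);              // translate signal -> field updates
    await this.updates.apply(userId, patch);        // partial write, not a full rewrite
    await this.cache.del(`user-context:${userId}`); // invalidate the cached profile
  }

  private toPatch(event: FeedbackEvent): Partial<UserContext> {
    // e.g. a fatigue event bumps fatigueScore; the value here is illustrative
    // and the other event kinds are elided for brevity.
    return event.kind === "fatigue" ? { fatigueScore: 0.8 } : {};
  }
}
```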
Let’s see how this impacts a student named Raj.
- Initial State: Raj is studying for the SSC CGL exam. He is strong in English but has been skipping Quantitative Aptitude ('Quants') for three days.
- Action & Signal: On the fourth day, he attempts a long Quants session. His accuracy starts at 70% but drops to 40% by the end. The system captures `(topic: 'Quants', accuracy_delta: -30%)` and `(session_length: 'high')`.
- Context Update: The Feedback Loop processes these signals. Raj's `fatigueScore` spikes. His `performanceMap` for Quants is updated, but the system notes the accuracy drop. His `persona` tag might temporarily gain an 'avoider' trait for complex Quant problems.
- System Response: On his next visit, the RCP Generator immediately filters out all full-length mocks. The MOO, seeing the high fatigue and recent struggle, selects a short, easy English revision quiz to rebuild confidence, plus a very low-stakes, 5-question Quant quiz targeting the specific sub-topic where his accuracy first started to drop.
Raj doesn't need to manually diagnose his burnout. The system detected it, understood its context, and adapted its strategy for him automatically.
What’s Next: From Intelligence to Interface
We have now journeyed through the entire intelligent core of the Dwij engine: modeling the user, generating candidates, scoring them, selecting a final set, and learning from the outcome. But how do we present this intelligence to the student in a way that is clear, motivating, and transparent? In our next article, we explore the final piece of the user experience: the **Planner and Explanation Layer**, where system intelligence becomes an actionable and trustworthy daily roadmap.
[Read the blog series finale here: "The Planner and Explanation Layer"]