Part 2: The Brain of the Coach: A Deep Dive into Dwij's User Context Layer

CodeClowns Editorial Team · July 7, 2025 · 11 min read

An engineering breakdown of how Dwij models students with a real-time User Context Layer, moving beyond stateless personalization to enable truly adaptive, strategic coaching based on performance, fatigue, and learning personas.

Imagine a world-class coach training an athlete. Do they use the same training plan every day? Do they forget a crucial injury from last week? Of course not. A great coach’s value comes from their memory—their deep, evolving understanding of the athlete’s history, fatigue, confidence, and goals. This is the fundamental gap in most educational technology today. Traditional platforms suffer from a form of digital amnesia, treating each student interaction as a largely isolated event.

At Dwij, we engineered our system to have a memory. More than that, we gave it a central nervous system. This is our User Context Layer—the first and most foundational piece of intelligence in our recommendation pipeline. It’s the "brain of the coach," a living, breathing profile of each learner that makes true, moment-to-moment personalization possible. In this second installment of our engineering series, we will dissect the architecture, data models, and real-time engines that form this critical foundation.

The Critical Flaw of Stateless EdTech: Digital Amnesia

A system that cannot remember cannot truly personalize. When a platform operates statelessly, its recommendations are shallow, based only on a few recent actions. This approach fails to capture the nuances of a student's long-term journey.

When "Personalization" Isn't Truly Personal

Simple rule-based logic such as "If a student fails a topic, show them more of that topic" is not intelligent personalization. It lacks context. Why did they fail? Were they fatigued after a long session? Is it a consistent weakness or a one-time slip? A stateless system doesn't know, so it often recommends a difficult test to a tired student, leading to a vicious cycle of failure and demotivation.

The Vicious Cycle of Cognitive Overload

Without a memory of a student's cognitive load, a platform cannot distinguish between a productive challenge and a frustrating overload. It will keep pushing content, oblivious to the student's diminishing returns. This is why many students feel "stuck," putting in hours of work without seeing improvement. They are fighting not just the exam syllabus, but also a platform that is actively working against their mental state.

[Check out the first part of this blog series here: "The Architect's Blueprint for Dwij's AI Engine"]

Architecting the "Living Profile": Dwij's User Context Object

To solve this, we architected the User Context as a first-class citizen in our system. Each student is represented by a rich, JSON-based `UserContext` object that serves as their real-time "digital twin." It's not a static profile but a dynamic entity, continuously updated by every interaction—or lack thereof.

Unpacking the UserContext Schema

This object is the single source of truth about a learner's state. Here are some of its key components from the data interface:

  • performanceMap: This is the core academic memory. For every granular topic (e.g., 'Trigonometry - Compound Angles'), it stores accuracy, attempt count, and speed metrics. Crucially, it also tracks a localized fatigue score from when the topic was last attempted.
  • retentionModel: This field powers our Spaced Repetition logic. It tracks a "forgetting curve" score for each learned concept, allowing the system to reintroduce topics at the optimal time to maximize long-term retention.
  • persona: This is our psychometric layer. It classifies a user's learning style—like 'Grinder', 'Explorer', or 'Avoider'—based on how they interact with difficult, new, or repetitive content. This is a key input for our Multi-Armed Bandit algorithms.
  • goal: This attribute aligns practice with ambition. It tells the entire recommendation pipeline whether to prioritize `syllabus_coverage`, deep `mastery` of topics, or `simulation_ready` performance in full-length mocks.
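The fields above can be sketched as a TypeScript interface. The top-level field names follow the article; the inner metric names, score ranges, and the sample values are illustrative assumptions, not Dwij's production schema.

```typescript
// Illustrative sketch of the UserContext shape described above.
// Top-level names follow the article; inner metrics are assumptions.
interface TopicPerformance {
  accuracy: number;            // fraction correct, 0..1
  attemptCount: number;
  avgSecondsPerQuestion: number;
  lastAttemptFatigue: number;  // localized fatigue at last attempt, 0..1
}

interface UserContext {
  performanceMap: Record<string, TopicPerformance>; // keyed by granular topic
  retentionModel: Record<string, number>;           // forgetting-curve score per concept, 0..1
  persona: "Grinder" | "Explorer" | "Avoider";
  goal: "syllabus_coverage" | "mastery" | "simulation_ready";
  fatigueScore: number;                             // real-time fatigue, 0..1
}

// A minimal example context for a hypothetical learner:
const exampleContext: UserContext = {
  performanceMap: {
    "Trigonometry - Compound Angles": {
      accuracy: 0.42,
      attemptCount: 9,
      avgSecondsPerQuestion: 88,
      lastAttemptFatigue: 0.3,
    },
  },
  retentionModel: { "Trigonometry - Compound Angles": 0.65 },
  persona: "Explorer",
  goal: "syllabus_coverage",
  fatigueScore: 0.1,
};
```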

The Engine Room: How Context is Forged in Real-Time

The `UserContext` object is not static; it's updated via an event-driven architecture. Every user action—completing a test, skipping a question, taking a long pause—emits an event. These events are processed by specialized engines that update specific parts of the context object.

The FatigueEngine: Your Personal Energy Guardian

This engine acts as a cognitive load monitor. It listens for signals of mental fatigue, such as a drop in accuracy over a long session, unusually slow response times, or even late-night study patterns. From these signals, it calculates a real-time fatigue score (a float between 0 and 1). This score naturally decays over periods of inactivity, simulating rest. The downstream optimizer uses this score to decide if today is a day for a sprint or a marathon.
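A minimal version of this logic might look like the following sketch. The signal weighting and the decay half-life are invented for illustration; the article only specifies that the score is a float between 0 and 1 and decays during inactivity.

```typescript
// Hypothetical fatigue scoring: bump the score on fatigue signals,
// decay it exponentially over idle time. All constants are assumptions.
const DECAY_HALF_LIFE_HOURS = 8; // score halves after 8 idle hours (assumed)

function decayFatigue(score: number, idleHours: number): number {
  return score * Math.pow(0.5, idleHours / DECAY_HALF_LIFE_HOURS);
}

function bumpFatigue(score: number, signalStrength: number): number {
  // signalStrength in 0..1, e.g. derived from an accuracy drop or
  // unusually slow responses. Moving a fraction of the remaining
  // headroom keeps the score bounded below 1.
  return score + (1 - score) * 0.3 * signalStrength;
}
```

With these assumed constants, a score of 0.8 decays to 0.4 after eight idle hours, which is one way to simulate a night's rest.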

The PersonaModel: Understanding Your Learning Style

This model observes behavior to classify a user's learning tendencies. A 'Grinder' might willingly repeat difficult topics, while an 'Explorer' prefers variety and gets demotivated by repetition. An 'Avoider' consistently skips certain topics, signaling a potential confidence issue. This persona tag is not a permanent label but a dynamic state that helps our optimizer fine-tune the balance between challenging a user (exploitation) and letting them try new things (exploration).
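One way to derive such a tag from behavior is a simple classifier over interaction counters. This is purely a sketch: the counters, thresholds, and cold-start default are assumptions, and a production model would be probabilistic rather than rule-based.

```typescript
// Hypothetical persona classifier from simple behavior counters.
// Thresholds are illustrative; the article does not specify them.
interface BehaviorStats {
  hardTopicRepeats: number; // times the user re-attempted difficult topics
  newTopicStarts: number;   // times the user opened unseen topics
  topicSkips: number;       // times the user skipped recommended topics
}

type Persona = "Grinder" | "Explorer" | "Avoider";

function classifyPersona(s: BehaviorStats): Persona {
  const total = s.hardTopicRepeats + s.newTopicStarts + s.topicSkips;
  if (total === 0) return "Explorer"; // cold-start default (assumed)
  if (s.topicSkips / total > 0.5) return "Avoider";
  if (s.hardTopicRepeats >= s.newTopicStarts) return "Grinder";
  return "Explorer";
}
```

Because the classification is recomputed as the counters change, the persona stays a dynamic state rather than a permanent label, matching the behavior described above.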

Engineering for Speed & Scale: Design Rationale

Building a real-time context layer for thousands of concurrent users presents significant engineering challenges. Our architecture is built on four key design principles to ensure performance and reliability.

  • Partial State Refresh: To minimize write-load and prevent data races, an incoming event only triggers updates to the relevant fields in the context object. For example, a 'test_completed' event updates the `performanceMap`, but not necessarily the `persona`.
  • Latency-Optimized Caching: User contexts are the hot data in our system. They are stored in an in-memory cache layer (like Redis) for millisecond-level lookups. This "hot" context is periodically synced with a durable cold storage database (like MongoDB) for persistence.
  • Context Mutation Isolation: To ensure system sanity and debuggability, each engine (Fatigue, Persona, etc.) has an exclusive write scope. One engine cannot directly modify a state managed by another, preventing unpredictable cascading effects.
  • Read-Only Downstream Consumption: All other layers of the recommendation pipeline (RCP Generator, Scoring Engine, etc.) consume the User Context in a read-only manner. This enforces a clean, one-way data flow, making the system's behavior consistent and observable.
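The first and third principles, partial state refresh and mutation isolation, can be combined in one sketch: each engine declares the field it owns and the event types it consumes, and a dispatcher runs only the subscribed engines. The event names, engine registry, and `writes` convention below are assumptions for illustration.

```typescript
// Sketch: each engine owns an exclusive slice of the context, and only
// engines subscribed to an event type run (partial state refresh).
type ContextField = "performanceMap" | "persona" | "fatigueScore";

interface Engine {
  name: string;
  writes: ContextField;   // exclusive write scope (mutation isolation)
  handles: Set<string>;   // event types this engine consumes
  apply(ctx: Record<string, unknown>, event: { type: string }): void;
}

function dispatch(
  engines: Engine[],
  ctx: Record<string, unknown>,
  event: { type: string }
): string[] {
  const ran: string[] = [];
  for (const e of engines) {
    if (e.handles.has(event.type)) {
      e.apply(ctx, event); // by convention, only touches e.writes
      ran.push(e.name);
    }
  }
  return ran;
}

// Example: a 'test_completed' event updates performance but not persona.
const performanceEngine: Engine = {
  name: "PerformanceEngine",
  writes: "performanceMap",
  handles: new Set(["test_completed", "question_answered"]),
  apply: () => { /* update accuracy, attempt count, speed */ },
};
const personaEngine: Engine = {
  name: "PersonaModel",
  writes: "persona",
  handles: new Set(["topic_skipped"]),
  apply: () => { /* update persona counters */ },
};
```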

Case Study: Sanya's First Week

Let’s see how this works in practice. Sanya is preparing for the CAT exam. Her initial diagnostic reveals her `UserContext`:

  • Performance: Strong in Verbal Ability, weak in Quantitative Aptitude (especially 'Geometry').
  • Persona: Initially classified as an 'Avoider' for Quant.
  • Fatigue: Low (0.1).
  • Goal: `syllabus_coverage`.

Day 1: Dwij recommends a short, easy 'Geometry' quiz. The goal is not to master it, but to get a baseline and build a sliver of confidence. Sanya completes it, and her `performanceMap` for Geometry updates slightly. Her `fatigueScore` inches up.

Day 2: Recognizing her strength, the system recommends a slightly challenging Verbal Ability test to maintain her streak and morale. This is a confidence-building move.

Day 4: After a rest day, her `fatigueScore` has decayed. The `retentionModel` flags that her initial 'Geometry' knowledge is fading. The optimizer now chooses a medium-difficulty 'Geometry' test—a strategic push. She performs better this time, and her `persona` score begins to shift away from 'Avoider'. This is the system in action: a dynamic, responsive coaching strategy based entirely on her evolving context.
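The retention check on Day 4 can be illustrated with a classic exponential forgetting curve. The half-life value and the review threshold below are assumptions; the article only says the `retentionModel` tracks a forgetting-curve score per concept.

```typescript
// Sketch of a forgetting-curve check: retention decays exponentially
// since the last review; below a threshold, the topic is due again.
function retention(daysSinceReview: number, halfLifeDays: number): number {
  return Math.pow(0.5, daysSinceReview / halfLifeDays);
}

function isDueForReview(
  daysSinceReview: number,
  halfLifeDays: number,
  threshold = 0.5 // assumed cutoff
): boolean {
  return retention(daysSinceReview, halfLifeDays) < threshold;
}
```

With an assumed three-day half-life, a topic last touched on Day 1 falls below the threshold by Day 4, which is exactly when a system like this would reintroduce it.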



Up Next: Generating the Playbook

The User Context Layer provides the deep understanding—the "who" and "why." But what about the "what"? Now that we have a rich profile of the learner, how do we generate a set of strategically viable tests for them to take? In our next article, we’ll dissect the Recommendation Candidate Pool (RCP) Generator, the layer responsible for creating the daily playbook of personalized practice options.

[Check out the next part of this blog series here.]

Preparing for CAT, SSC, CUET or IELTS? Dwij gives you mock tests, AI-generated questions, and personalized planners — everything you need to practice smarter and get exam-ready.

Tags: engineering blog, dwij, ai, system design, personalization, edtech, data modeling, recommendation systems