How 759,000 Students Use AI to Learn
A behavioural analysis of 759,000 AI tutoring sessions across 213,000 students in Bangladesh.
In Bangladesh, 40 million students share classrooms with 60+ peers and one teacher. Private tutoring costs more than many families earn in a day. When Shikho launched an AI tutor for grades 5–12, we didn’t just want students to use it. We wanted to understand how they used it, and what that revealed about learning itself.
Over 15 months, we collected and analyzed every session on the platform. Not to report impressive numbers, but to understand the real behavioural patterns that would shape what we built next.
These numbers tell a story that surprised us in almost every dimension.
It Started Small, Then Exploded
The product launched quietly in early 2025. For the first six months, a few thousand students used it each month, mostly trying it out, figuring out what it could do.
Then in July 2025, something happened. Sessions jumped 5x in a single month.
Monthly Activity — Jan 2025 to Feb 2026
But here’s what the chart doesn’t show on its own: that July spike was a different kind of student. They wrote in English, they asked deeper questions, and they had lower rates of off-task behavior. By August, they were gone. Sessions nearly doubled again, but English usage collapsed from 28% to 5%. Banglish became the dominant language.
This matters more than the raw growth number. Any product decisions made on the July cohort would have been calibrated to the wrong user. The language shift told us exactly who we were actually building for.
What Students Actually Do
When you have a patient tutor available at 3 a.m. who never gets frustrated, what do you ask it?
Mostly: quick questions.
How students spend their sessions
Nearly 44% of sessions were students getting a fast answer: a formula, a definition, a step in a problem. This is AI as a smarter, faster textbook. That’s not a bad thing. In a classroom of 60 students, raising your hand to ask “wait, what’s the formula for kinetic energy again?” carries social risk. With an AI, it’s frictionless.
The more interesting 25% are the students doing sustained problem drilling: working through practice exercises, getting feedback, trying again. These sessions are longer, more conversational, and show the kind of repeated-attempt learning that actually builds skill.
Around 9% of sessions were students testing what the AI would and wouldn’t do: asking it to cheat, write essays for them, or go off-topic. We’ll come back to this number, because it almost doubled over the course of the year.
Are They Really Learning Deeply?
There’s a framework in education research called Bloom’s Taxonomy: a ladder from “remembering facts” at the bottom to “making judgements and creating original ideas” at the top. We classified every session by where it sat on this ladder.
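To make the idea concrete, here is a minimal sketch of how a session could be bucketed onto Bloom's ladder using keyword cues. This is purely illustrative: the cue lists, function name, and default are assumptions for this example, not the classifier the research actually used.

```python
# Hypothetical sketch: bucket a student query onto Bloom's Taxonomy
# using simple keyword cues. The real classifier is more sophisticated;
# this only illustrates the idea of mapping a session to a rung.
BLOOM_CUES = [
    ("create",     ["design", "invent", "compose", "propose"]),
    ("evaluate",   ["judge", "justify", "critique", "which is better"]),
    ("analyse",    ["compare", "contrast", "why does", "what causes"]),
    ("apply",      ["solve", "calculate", "use the formula"]),
    ("understand", ["explain", "how does", "summarise"]),
    ("remember",   ["what is", "define", "formula for", "list"]),
]

def bloom_level(query: str) -> str:
    q = query.lower()
    # Check from the top of the ladder down, so deeper cues win.
    for level, cues in BLOOM_CUES:
        if any(cue in q for cue in cues):
            return level
    return "remember"  # default: treat unmatched queries as fact lookup

print(bloom_level("What is the formula for kinetic energy?"))  # remember
print(bloom_level("Why does ice float on water?"))             # analyse
```

Even a crude heuristic like this makes the distribution below measurable month over month, which is what matters for tracking depth over time.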
The result was honest, if uncomfortable.
Depth of thinking — what students are really doing
70% of sessions are at the two lowest rungs: looking up facts or understanding how something works. Only 3.8% reach the kind of thinking where students are analysing, connecting ideas, or forming their own conclusions.
The uncomfortable truth: this isn’t the AI’s fault, and it’s not the students’ fault. When every exam mark comes from reproducing facts and formulas, that’s what students optimise for. The AI becomes a faster, more patient textbook. It was always going to be used this way given how the assessments are designed.
The encouraging sign: higher-order thinking grew from 2.9% to 4.3% over eight months, a 48% increase. Slow, but real. As students got more comfortable with the tool, they started asking it harder questions.
The Dhaka-Centric Divide That Wasn’t
Before we launched, our assumption, like most edtech assumptions, was that digital tools benefit urban students most. Better devices, more English fluency, parents who understand online education.
The data disagreed.
Sessions by division — who's actually using the AI?
3.3 percentage point gap, Dhaka vs Barishal
Engagement quality is nearly equal across all divisions
Dhaka leads at 30.1% high engagement. Barishal is at 26.8%. A 3.3 percentage point spread across the entire country.
Access wasn’t the problem: students in every division were reaching the platform and learning at nearly identical rates. The actual gap isn’t about geography or technology. It’s about curriculum: students outside major cities are disproportionately in humanities and business tracks, not science. And science, as we’ll see, is where the AI helps most.
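The gap itself is a simple computation once sessions are grouped by division. The sketch below shows the shape of that calculation; the records, field names, and figures are illustrative, not the real dataset.

```python
# Hypothetical sketch: compute the high-engagement rate per division
# and the spread between best and worst, from per-session records.
# The sample data below is made up for illustration.
from collections import defaultdict

sessions = [
    {"division": "Dhaka",    "high_engagement": True},
    {"division": "Dhaka",    "high_engagement": False},
    {"division": "Barishal", "high_engagement": True},
    {"division": "Barishal", "high_engagement": False},
    {"division": "Barishal", "high_engagement": False},
]

def engagement_by_division(records):
    totals, highs = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["division"]] += 1
        highs[r["division"]] += r["high_engagement"]
    return {d: highs[d] / totals[d] for d in totals}

rates = engagement_by_division(sessions)
spread_pp = (max(rates.values()) - min(rates.values())) * 100
print(rates, f"spread: {spread_pp:.1f} pp")
```

On the real data, this spread came out to 3.3 percentage points, small enough to rule out geography as the driver.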
Why Science Students Use AI Better Than Everyone Else
This is the central finding of the entire research project.
It has nothing to do with intelligence. Nothing to do with effort. It’s about whether the curriculum gives students a reason to use AI productively.
How each academic track uses the AI differently
Science students, working through physics equations, chemistry problems, and biology diagrams, have structured problems with clear right answers. The AI can help them work through each step, check their reasoning, and explain what went wrong. The tool is genuinely useful.
Humanities and Business students are asked to write essays, form arguments, interpret texts, discuss social issues. These tasks require original thought, and the AI’s output can’t be used directly without it becoming academic dishonesty. So what happens? Students try to use the tool anyway, get frustrated when it won’t just write their essay, and start probing its limits.
This is a curriculum design problem, not a behaviour problem. Students aren’t worse in humanities. They just have a tool that was never designed for what their curriculum demands.
The Pressure Map: Why Grade Matters
Not all students test AI limits for the same reason. The data revealed a pattern that, once you see it, is impossible to unsee.
Students testing AI limits — by school grade
Young students in grades 5 and 6 have high off-task rates because they’re curious. They’ve never encountered a system like this before. They poke it, ask it weird questions, try to befriend it. This is benign exploration.
Grade 10 students are the most focused of any group. They’re preparing for the SSC national exam, the single most important test in their school career up to that point. The exam gives them a clear target, and the AI becomes a drilling tool. Off-task behaviour drops to 6.5%.
Then grade 12 spikes back up to 13.2%. These students are preparing for the HSC: the exam that determines university admission. The curriculum is broader, the stakes are higher, and when students can’t find what they need through legitimate questions, they push harder. This isn’t rebellion. It’s desperation.
The product implication: grade 12 students don’t need to be blocked or restricted. They need better exam-specific support: topic summaries, practice question banks, mark scheme guidance.
A Growing Warning Sign
One metric we tracked every month told us more about platform health than any other: the percentage of sessions where students were primarily testing AI limits rather than learning.
Off-task behaviour — growing over 12 months
It doubled in a year.
This mirrors a pattern documented in general AI usage research: as users get comfortable with a tool, they use it for more off-task purposes. The question is whether the product responds proactively or reactively.
The most important nuance: among the students who did this the most, the majority came back with legitimate questions afterwards. Testing AI limits appears to be a phase in a user’s learning curve, not a permanent behaviour. Students who find that the AI responds helpfully even when they’re probing its limits build stronger long-term engagement. Hard blocks produce frustration and churn.
The reaction data confirms this: boundary testing had the second-lowest “thumbs up” rate of any session type. Students know, at some level, that it’s not giving them what they actually want.
Who Are These 213,000 Students?
When you aggregate session patterns at the user level, eight distinct types of students emerge.
213,000 students — eight distinct patterns
The Drive-by pattern at 74.9% isn’t failure. These students had a specific problem, found an answer, and moved on. Exam-season studying produces exactly this pattern. The Night Owl at 16.7% is the defining study culture of Bangladeshi students: peak activity from 10pm to 5am, after tuition, after dinner, when the house is quiet. The Consistent Learner at 3.6% shows the clearest evidence of cognitive growth over time: question depth increases, off-task rate falls.
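One way to see how archetypes like these fall out of session data is a rule-based pass over per-user aggregates. The thresholds, field names, and labels below are assumptions chosen for illustration; the real segmentation used more features than these three.

```python
# Hypothetical sketch: assign a user archetype from aggregated
# session stats. Thresholds and labels are illustrative only.
from dataclasses import dataclass

@dataclass
class UserStats:
    session_count: int
    night_fraction: float  # share of sessions between 22:00 and 05:00
    active_weeks: int      # distinct weeks with at least one session

def archetype(u: UserStats) -> str:
    if u.session_count <= 2:
        return "Drive-by"           # came once or twice, got an answer, left
    if u.night_fraction >= 0.6:
        return "Night Owl"          # studies after tuition, late at night
    if u.active_weeks >= 8:
        return "Consistent Learner" # keeps coming back, week after week
    return "Other"

print(archetype(UserStats(1, 0.0, 1)))    # Drive-by
print(archetype(UserStats(40, 0.8, 5)))   # Night Owl
print(archetype(UserStats(30, 0.2, 12)))  # Consistent Learner
```

The value of a segmentation like this isn’t the labels themselves but that each segment points to a different product response, which is what the next section draws out.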
What This Research Revealed
The data told a story that cut against almost every assumption we started with. Rural students aren’t disadvantaged users. Humanities students aren’t disengaged learners. Grade 12 boundary-testers aren’t bad actors. Each pattern, once understood, pointed to a specific product intervention.
Build for Humanities
Science was well-served. The next design priority is structured tools for humanities: argument scaffolds, Socratic prompts, essay planning flows that push students to think rather than outsource.
Make it easier to go deeper
Quick answers are frictionless; deep exploration is accidental. Follow-up prompts that shift students from “what is X” to “why does X happen” measurably increase higher-order engagement.
Grade 12 needs exam scaffolding
The HSC pressure spike is a design opportunity. Topic prioritisation, predicted question formats, and worked-example breakdowns channel that energy into productive revision.