*This essay is a modified version of Chapter 1 of my book, “Teaching Math With Examples.”*

Some of the dullest teaching on the planet comes courtesy of worked example abusers. These are the math classes that consist of a steady march of definitions, explanations and examples, one after the next. Practice (and learning) happen out of the classroom hours later, while students work on their homework. With few exceptions, we’ve all had these kinds of teachers at one point or another. Many of us, at times, have *been* this kind of teacher.

This is not good teaching. Some students will learn from examples presented in this way, but not nearly enough. This raises the question: what’s the difference between worked examples done well and done poorly?

There is a paper by research psychologist Bethany Rittle-Johnson that offers an answer to this question. Rittle-Johnson (2006) first notes that cognitive science and psychology researchers have amassed evidence supporting the efficacy of worked examples when compared to asking students to invent their own strategies:

In support of direct instruction, a large number of studies have shown an advantage for learning and transfer if people study worked examples (a form of direct instruction) rather than solve problems unaided (a form of discovery learning). [1]

But, unfortunately, directly presenting examples doesn’t always lead to learning. Of course, other teaching techniques studied by researchers fail as well. (If they didn’t, experiments wouldn’t find advantages for worked examples.) Still, there must be something *deeper* that can explain why instructional techniques so often succeed or fail.

It seems simple, but the point is lost too often: what matters is what is going on inside a student’s head. This is true no matter what pedagogical technique you use. Rittle-Johnson continues:

*The potential benefits of discovery learning may be due to actively engaging the learner in manipulating, linking, and evaluating information – in other words, self-explanation – rather than the discovery of the procedure itself. Successful uses of direct instruction may emerge when learners are engaged in active cognitive processes like self-explanation.*[2] [emphasis mine]

Teaching only works when it provokes this sort of active, deeper thought: “manipulating, linking, and evaluating information.” Performing a series of worked examples while students drool into their notebooks doesn’t meet this standard. Neither does discovering a solution if students discover it by guessing. Either approach can, and often does, fail to produce learning. Educational techniques are not magic. If they don’t provoke thinking, they don’t work. (Cognitive scientist Daniel Willingham puts it like this: “memory is the residue of thought.”[3])

You won’t be surprised to hear that I find the research supporting worked examples compelling. But it’s important to take Rittle-Johnson’s point to heart. Students do not learn *from* a worked example; students learn when they think actively and deeply *about* a worked example. To make the case for teaching with examples, we need to explain what active and deep engagement with them looks like.

There are two major ways that students can fail to learn from a worked example:

- Students don’t think actively and deeply about the worked-out solution
- Students aren’t ready to understand the problem and its worked-out solution

As we’ll see, there are ways of designing activities that avoid these pitfalls.

**Analyze, Explain and Apply**

At first, I was skeptical of research supporting learning from worked examples. I assumed what research would recommend was what I knew not to be effective – a teacher, standing at the board for long periods of time, working out the solutions to problems step by painful step, pausing every so often to ask the class a question. (“And why do we subtract three from both sides? Anybody? Somebody?”) This is, after all, the most common mode of math instruction that students experience in the United States (Stigler, 1999).[4] If *that’s* what research was recommending, some enormous mistake must have been made.

The Algebra by Example project helped change my mind.[5] This project represents the fruit of a partnership between a team of researchers and a school district. The school district wanted to help students learn algebra; the researchers wanted to help. The researchers used the research literature to create a collection of worked examples. They then tested the examples in the classroom while performing an experiment that showed the materials actually worked as intended (Booth et al., 2015).[6] After testing, they posted the materials for free online so that any teacher could use them. They’ve since created and tested materials for elementary school students (“Math by Example”) and are currently working on middle school mathematics.[7] This seemed to me a best-case scenario for research that is relevant to the needs of classroom teachers.

Once I saw their materials, it became clear that I had completely misunderstood what the research recommended. I thought research called on teachers to simply present worked examples without worrying about student engagement. What I saw instead were activities that guided students towards deep thinking. A complete mathematical strategy was presented through the worked example, and students were prompted to explain *why* it worked. Then they were tasked with using the strategy to solve a related problem on their own. This was an active and interesting set of activities – my students would love them.

Algebra by Example materials always ask students to do three things:

- **Analyze** a solution
- **Explain** why it works
- **Apply** it to a new problem

You can see these three steps at play in any of their materials. Here is an example activity of my own in the Algebra by Example style:
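As a rough sketch of the format – the specific equation below is my own illustration, not an item from the actual Algebra by Example materials – such an activity might look like:

```latex
% Hypothetical activity in the analyze/explain/apply format (illustrative only).
\textbf{Analyze:} Study this worked-out solution.
\[
2x + 6 = 20 \quad\Rightarrow\quad 2x = 14 \quad\Rightarrow\quad x = 7
\]
\textbf{Explain:} Why is 6 subtracted from both sides before dividing by 2?

\textbf{Apply:} Use the same strategy to solve $3x + 5 = 26$.
```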

This is “direct instruction” by any definition – students are tasked with learning from a worked-out solution presented by the teacher. But it doesn’t fit the stereotype of boring, passive learning. Students are given something mathematically valuable and asked to think about it.

In the **“analyze”** stage, students begin by carefully reading a worked-out solution, an activity that puts students in direct contact with new and challenging mathematical ideas. As students closely analyze the procedure, they may ask: *does this make sense? do I understand what the solution did? why did it do this? could I do this on my own?* Students learn from asking and answering these questions, engaging in what Rittle-Johnson and other researchers call “self-explanation.”

Not always, though. Stretching back nearly to the beginning of worked-examples research is an awareness that not all students ask themselves these probing questions. This research was pioneered by Michelene T.H. Chi, who also suggested that the propensity to self-explain is responsible for some of the differences between stronger and weaker students (e.g. Chi et al., 1989).[8] In her studies, students who successfully learned from worked examples were more likely to explain the examples to themselves as they read them. Other students studied the examples superficially, reading each line but failing to engage in self-explanation. These students were less likely to learn from the examples, and often failed to apply the strategies they studied to new problems. Superficial engagement is perhaps the most common obstacle in the way of learning from a worked example. It’s what happens in passive, lecture-heavy classrooms across the planet: students listen, but they aren’t thinking.

A solution to this problem is straightforward: after reading the example, prompt students to **explain** it themselves. This draws attention to aspects of a solution students might have skimmed over. If artfully chosen, these prompts direct students to consider ideas in the solution they might not have even realized they didn’t understand. Booth writes that this may “facilitate integration of new information with prior knowledge and force learners to make their new knowledge explicit.”[9] In other words, when students realize there are things they don’t easily understand, it highlights what is new. Prompts are a sort of “safety net” for students who read the solution superficially at first. When students realize they can’t explain something, they go back to step one and analyze the example with more zest.

In the third stage, students are asked to **apply** what they know to a new problem. If the problem has been chosen well, this pushes students to form generalizations – they compare their solution to this problem to the solution they just read. Solving the problem sends many students right back into opportunities for self-explanation: *does the solution I just studied work here? did I really understand the solution? which parts of the solution will look different, and which will remain the same?* The teacher can prompt even more of this reflection during a follow-up discussion.

It should be clear that this is *not* the passive direct instruction so often critiqued in educational debates. This is something altogether different – an active, engaged direct instruction. It should hardly be surprising that this form of teaching benefits students.

**Notice and Remember**

Even if we could guarantee that students analyzed every example with care – making sure they engaged in deep self-explanation – there would be times that learning would fail. That’s because students frequently aren’t prepared to learn from the example.

The research concerning this question is somewhat confusing, with claims and counterclaims tossed back and forth. Some researchers have argued that working on the problem *before *studying a worked example is important for learning. Manu Kapur, for instance, has argued that this has benefits even if the student fails to correctly solve the problem – he calls this “productive failure.”[11] Slava Kalyuga and Anne-Marie Singh, on the other hand, suggest that solving a problem before an example might be useful for helping a student understand what a problem is even asking and what its solution may look like.[12] Likewise, Schwartz and Martin suggest that inventing an inefficient procedure prepares students to more quickly and effectively learn from examples and explanations.[13]

Meanwhile, other researchers have performed experiments showing advantages to example-first teaching, and still others have found no differences between example-problem and problem-example approaches.[14] It’s all very confusing, and researchers are currently trying to design studies that could explain these conflicting results.

Just as we did before, to make sense of this debate I think it’s helpful to dig deeper. There are at least two mental processes that are crucial prerequisites for learning from a solution:

- Students must **notice** everything in the problem that will be involved in its solution
- Students must **remember** previous knowledge and strategies that are taken for granted in the solution

Consider, for instance, this worked example for finding the length of a trapezoid base, given its area and other lengths. It’s a lot to take in all at once! Noticing and remembering certain information is a prerequisite for learning from the solution:
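The numbers below are my own illustration rather than the original figure, but a worked example of this kind – using the standard trapezoid area formula – might run:

```latex
% Hypothetical worked example: find the missing base of a trapezoid.
% Given: area A = 36, height h = 4, one base b_1 = 7. Find b_2.
\[
A = \frac{h\,(b_1 + b_2)}{2}
\quad\Rightarrow\quad
36 = \frac{4\,(7 + b_2)}{2} = 2\,(7 + b_2)
\quad\Rightarrow\quad
18 = 7 + b_2
\quad\Rightarrow\quad
b_2 = 11
\]
```

To follow even this short solution, a student has to notice which lengths are given and remember the area formula being rearranged.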

If students don’t notice everything about the problem or if they don’t remember necessary material, they aren’t likely to gain much from analyzing a solution.

I often precede a worked example with a short problem to solve. These problems tend to be brief, as I want my students to save their energy for working through a solution. When these problems work, I think it’s because they help my students notice and remember information crucial for the example.

Suppose, for instance, that I’m teaching a class to write linear equations from graphs of lines. They’ll need to **remember** how to find the slope of a line for the solution to make any sense. They’ve studied this already, of course, but I would like to remind them. I’ll give them a quick problem before the example:
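As an illustration – this is my own sketch of such a warm-up, not the original handout – the reminder problem might be:

```latex
% Hypothetical warm-up: remind students how to compute slope from two points.
\textbf{Warm-up:} Find the slope of the line through $(0, 0)$ and $(6, 4)$.
\[
m = \frac{4 - 0}{6 - 0} = \frac{2}{3}
\]
```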

There is also a ton of information contained in a graph of a line, all of which I would like students to **notice** before attempting to understand the solution. I might share that same diagram and say, “I’d like everyone to notice something about this diagram – the more specific, the better.” I’d then ask students to share what they see. They might say any of the following:

- There is a line
- The numbers go by 2
- The line passes through 6 and 4 (“It’s (6, 4),” I’ll remind them.)
- It also passes through (0,0).

Each of these observations will be useful when I present an example showing how to find the equation of a line.
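Assuming the line passes through exactly the points students named, $(0, 0)$ and $(6, 4)$, the worked example’s key steps would assemble those observations into an equation – a sketch, not the actual materials:

```latex
% Sketch of the worked example's key computation (illustrative).
\[
m = \frac{4 - 0}{6 - 0} = \frac{2}{3},
\qquad
b = 0 \ \text{ since the line passes through } (0, 0),
\]
\[
\text{so the equation of the line is } \ y = \tfrac{2}{3}x.
\]
```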

This is how I make sense of the narrower debate about whether solving problems enhances learning from examples. Students need help noticing and remembering details that will appear in the solution; problems can help students do this. There are ways to do this without asking students to solve problems, but I find it useful to launch class with a brief warm-up problem anyway. I might as well use it to support their upcoming learning.

I also suspect that these warm-up problems provide a motivational boost to students as they head into a worked example. It shows them that class is going to be focused on extending that which they already know and understand. A quick problem allows me to clearly articulate the purpose of the day’s class – we’ve already studied this, but today we’re taking it further. I have rarely experienced issues motivating students to study worked examples compared to other classroom activities; I attribute this to my use of notice/remember problems, along with the analyze/explain/apply structure I have adopted for teaching with examples.

**In Conclusion**

Researchers and educators have sometimes fiercely debated worked examples. Do students learn better when they study an example or when they discover an idea on their own?

My approach to making sense of these disagreements is to dig deeper. The key is to get students thinking about the new mathematics in as direct a way as possible – that means they can learn from **analyzing** a solution. To make sure that analysis has been careful, we encourage students to **explain** crucial aspects of the solution. To help students generalize the solution (and to further ensure they’re thinking carefully) we can ask them to **apply** it to a new problem.

To learn from a solution, students need to **notice** crucial aspects of the problem. They also need to **remember** some things they have already learned. A problem that comes before an example can help students notice and remember this information, and so enhances their learning from the example.

Is this the only way to learn? Is it the only technique you need for learning math? Of course not. But worked examples don’t deserve to be seen as a tool of passive, boring instruction. What research on worked examples actually points to is a set of learning activities that provoke lively, active thinking in our students. Using these techniques, we put our students in direct contact with some new and interesting ideas.

[1] Rittle‐Johnson, B. (2006). Promoting transfer: Effects of self‐explanation and direct instruction. *Child development*, *77*(1), 1-15.

[2] Ibid.

[3] Willingham, D. T. (2008). What will improve a student’s memory. *American Educator*, *32*(4), 17-25.

[4] Stigler, J. W., Gonzales, P., Kawanaka, T., Knoll, S., & Serrano, A. (1999). The TIMSS videotape classroom study: Methods and findings from an exploratory research project on eighth-grade mathematics instruction in Germany, Japan, and the United States. *Education Statistics Quarterly*, *1*(2), 109-112.

[5] Algebra by Example – https://www.serpinstitute.org/algebra-by-example

[6] Booth, J. L., Oyer, M. H., Paré-Blagoev, E. J., Elliot, A. J., Barbieri, C., Augustine, A., & Koedinger, K. R. (2015). Learning algebra by example in real-world classrooms. *Journal of Research on Educational Effectiveness*, *8*(4), 530-551.

[7] Math by Example – https://www.serpinstitute.org/math-by-example

[8] Chi, M. T., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. *Cognitive science*, *13*(2), 145-182.

[9] Booth, J. L., Oyer, M. H., Paré-Blagoev, E. J., Elliot, A. J., Barbieri, C., Augustine, A., & Koedinger, K. R. (2015). Learning algebra by example in real-world classrooms. *Journal of Research on Educational Effectiveness*, *8*(4), 530-551.

[10] Rittle-Johnson, B., Loehr, A. M., & Durkin, K. (2017). Promoting self-explanation to improve mathematics learning: A meta-analysis and instructional design principles. *ZDM*, *49*(4), 599-611.

Renkl, A., & Eitel, A. (2019). Self-explaining: learning about principles and their application. *J. Dunlosky & K. Rawson (Eds.), Cambridge Handbook of Cognition and Education*, 528-549.

[11] Kapur, M. (2008). Productive failure. *Cognition and instruction*, *26*(3), 379-424.

[12] Kalyuga, S., & Singh, A. M. (2016). Rethinking the boundaries of cognitive load theory in complex learning. *Educational Psychology Review*, *28*(4), 831-852.

[13] Schwartz, D. L., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. *Cognition and Instruction*, *22*(2), 129-184.

[14] Ashman, G., Kalyuga, S., & Sweller, J. (2020). Problem-solving or Explicit Instruction: Which Should Go First When Element Interactivity Is High?. *Educational Psychology Review*, *32*(1), 229-247.

Likourezos, V., & Kalyuga, S. (2017). Instruction-first and problem-solving-first approaches: alternative pathways to learning complex tasks. *Instructional Science*, *45*(2), 195-219.