What's the evidence?
Writelike draws on several areas of educational research, theory, and professional practice, so there are good reasons to expect it to work. Whether it actually does remains to be seen: it's new and as yet unproven.
Below is a summary of the models which have informed its development, anecdotal feedback to date, a list of open questions, and an invitation to participate in a controlled trial if you are interested.
Cognitive load theory
Cognitive load theory is an information processing model backed by over 40 years of experimental results. It's content agnostic—it has no particular opinion on language teaching or writing—and it has little to say about student motivation. But in terms of raw instructional design, it is very powerful.
The recommendations of cognitive load theory will sound straightforward to most teachers: don’t overwhelm students with too much instruction at once, give them time to practice, provide examples, beginners need more scaffolding, and so on. But the real question is a matter of degree: how much instruction vs practice? How many worked examples? And so on.
If you take the theory seriously, the resulting instructional designs can look quite different to conventional teaching in English (although less so in performance-oriented fields such as art, sport and music).
You’ll see the CLT influence on Writelike in its emphasis on minimalist instruction, learning through worked examples, massed practice, part-whole structures, and explicit scaffolding for novice learners.
Further reading: NSW Government Centre for Education Statistics and Evaluation
Social learning theory
A big hole in cognitive load theory is motivation and meaning. CLT-informed instruction helps students develop fluency, which can be motivating in and of itself, but the theory has almost nothing to say about why we teach anything, or why students should be bothered to learn.
Social learning theory fills this gap. Learning is never done in isolation (even when done alone). Content is culture and mastering relevant skills has social value. We derive motivation and meaning from our relationships with other people, both directly as individuals and more broadly as participants in society.
Writelike is particularly informed by the apprenticeship and community-of-practice work of Jean Lave and Etienne Wenger. In the purest sense, that would suggest having young students work as proof-readers, researchers and layout assistants, but Writelike takes a more watered-down, school-friendly approach.
First, even though Writelike is an online tool, it is designed to be situated in a classroom, with a teacher and a group of students. The instruction, guidance, elaboration and feedback from the teacher is crucial, as is discussion among students. Second, the peer review system provides for social reinforcement and value. Third, by scaffolding from authentic texts, we are keeping students close to the craft of writing, with all of its real-world complexities and values.
Genre-based pedagogy
Where cognitive load theory and social learning theory are content-agnostic, genre-based pedagogy is explicitly about teaching reading and writing. The genre-based approach emphasises the social role of different text types and attempts to develop student mastery of genre-specific text features.
Genre-based pedagogy places a heavy emphasis on writing as a means of learning to read, analyse and understand texts of all types. Similar to social learning theory, it tries to induct students into a cultural heritage of meaning-making and exchange through texts, and similar to the suggestions of cognitive load theory, it teaches complex skills by starting small and providing lots of modelling.
Writelike aligns with genre-based pedagogy because it has a similar functional and craft-oriented point of view, with a focus on developing fluency in genre conventions. Writelike has a similar approach to the deconstruction of text, joint construction (whether through worked examples or teacher-led rewrites) and then individual construction at multiple levels of complexity.
Open questions
There’s a lot of theory behind Writelike, and it’s been shaped by classroom practice, but there are many open questions. For instance:
- What is the relative value of different approaches to highlighting? For instance, how valuable is highlighting grammatical patterns (e.g. prepositional phrase–nominal group–verbal group–prepositional phrase) vs semantic patterns (e.g. a small event–indicates–something much larger)?
- How many worked examples do students need on a page?
- How many responses should students write on a page?
- How much strategic instruction is necessary?
- How much support in terms of images, vocab support and writing prompts is necessary?
- What is the relative value of overview lessons vs project lessons?
- What is the actual scale of impact on writing quality of using Writelike?
- How long does it take to see results? What kind of volume and duration of practice is required?
- Is there a threshold where students can move from structured lessons to more open-ended drills?
To answer these questions, we need your feedback.
Help validate or invalidate the Writelike approach
We hope to one day perform a proper publication-quality controlled trial, but that’s a complex undertaking, so in the meantime we are gathering more anecdotal feedback.
If you want to help us prove or disprove that Writelike works, we would love to help you run an informal experiment in your school. There are a couple of ways this can work, depending on your school’s appetite for rigour.
Super informal: Just pre and post assessment
- Get a pre-intervention writing sample
- Use Writelike for a period of weeks or months in one group
- Get a post-intervention writing sample
- Grade both samples and record the degree of improvement
This is helpful, but susceptible to confirmation bias.
Semi-formal: Anonymised and obscured
- Get a pre-intervention writing sample, but ask students to identify it using a code instead of their name
- Use Writelike for a period of weeks or months in one group
- Get a post-intervention writing sample, and again ask students to identify their work using a code instead of their name—a different code this time
- Combine the two sets of samples, mix them up, then grade them
- Have students identify their work and separate the samples back into the original pre and post intervention sets
- Tabulate student names with pre and post intervention grades, then take a closer look at the degree of improvement
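For anyone curious, the shuffle-and-regroup logic above can be sketched in a few lines of Python (purely illustrative: the codes, samples and placeholder grading function are all hypothetical, and in a real classroom this would be done on paper):

```python
import random

# Hypothetical writing samples, keyed by the codes students chose
# instead of their names (different codes for pre and post).
pre_samples = {"RED7": "pre text A", "BLU3": "pre text B"}
post_samples = {"OWL9": "post text A", "FOX2": "post text B"}

# Combine and shuffle so the grader can't tell pre from post,
# or whose work is whose.
combined = list(pre_samples.items()) + list(post_samples.items())
random.shuffle(combined)

# Grade blind (word count stands in for a real grading rubric).
grades = {code: len(text.split()) for code, text in combined}

# Afterwards, students reveal which codes are theirs, letting us
# separate grades back into pre and post sets per student.
students = {"Student 1": ("RED7", "OWL9"), "Student 2": ("BLU3", "FOX2")}
improvement = {
    name: grades[post_code] - grades[pre_code]
    for name, (pre_code, post_code) in students.items()
}
print(improvement)
```

The key point the sketch captures is that grading happens before the code-to-student mapping is revealed.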
This helps eliminate bias around student identity and pre/post grouping, but it doesn’t control for the learning students would get from their existing instruction.
More formal: Controlled
As above, but divide students into control and trial groups. (For instance, one class uses existing instruction, one class uses Writelike.)
Do pre and post tests, mix everything up and grade blind as above, then identify the tests, collate them back into pre and post assessment sample sets within the treatment and control groups, and finally tabulate the results and compare.
This approach is obviously the most involved, but by comparing Writelike and non-Writelike groups, it helps discount improvements that are purely a result of the particular subject matter. ("Everyone did better this term because everyone loves pirates.")
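The final comparison step can also be sketched (again hypothetical: assume each student ends up with a numeric pre and post grade and a group label):

```python
# Hypothetical tabulated results after blind grading and re-identification:
# student -> (group, pre_grade, post_grade)
results = {
    "Student 1": ("control", 11, 12),
    "Student 2": ("control", 9, 10),
    "Student 3": ("trial", 10, 13),
    "Student 4": ("trial", 12, 14),
}

def mean_improvement(group):
    """Average grade gain (post minus pre) for one group."""
    deltas = [post - pre for g, pre, post in results.values() if g == group]
    return sum(deltas) / len(deltas)

print("control:", mean_improvement("control"))
print("trial:", mean_improvement("trial"))
```

If the trial group's mean improvement clearly exceeds the control group's, that's (informal) evidence the gain wasn't just down to the shared subject matter.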
If you're keen to run a trial
If you’d like to run an experiment at your school, we’d love to support you in any way we can, so get in touch.