Help us prove whether or not Writelike works

[Image: retro sci-fi utopian scene of school life]

We created Writelike with one goal in mind: to make a profound impact on student writing. Meaning:

  • Can we move the needle on national standardised scores? (Get everyone meeting age targets? Or substantially higher?)
  • Can we substantially lift classroom grades? (C to B? C to A?)
  • Can we change identities and attitudes towards writing? (So students who don’t like writing begin to enjoy it, find it powerful and rewarding, and change their entire sense of self as a result?)
  • Can we develop a culture in which everyone can express themselves with fluency and power, no matter what their circumstances and needs?
  • And can we do this with a realistic level of efficiency? (Without consuming all the teaching time and all the education funding?)

All very grand, but what’s the answer? Can we do it or not?

As yet, we don’t know; we don’t have the data.

We have some data, but they're informal and anecdotal. For instance, we have teachers who’ve said they can see students adopting text patterns, using them a year later, and beginning to adapt them to novel purposes.

That's great! However, the sample size is small and the teachers are talking about an incremental change, at best. Our ambitions are bigger.

We’ve always had confidence that we should be able to create a profound impact because we’ve based Writelike on a strong body of research which we know has produced fantastic results in application. We’ve outlined some of this theory on our evidence page.

However, working theory does not guarantee working practice. We still need to know:

  • Is the Writelike-specific expression of these ideas effective in its own right? (It could be a bad application of theory!)
  • If Writelike is effective, is it more effective than other expressions of the same ideas? (For instance, is there any added value in a digital platform, or is all the value in CLT-informed classroom teaching?)
  • If Writelike is a multiplier in its own right, then how do you use it to get maximum impact? And is that maximum impact profound enough to make it all worthwhile?

It’s perfectly reasonable at this stage of product development not to know the answers to these questions. Certainly we know more than we did when we began: while Writelike might look obvious, you should see all the ideas we’ve tried and abandoned.

But it’s time to get some proper answers.

While we would love to participate in a university-level research project, a more realistic starting point is for ordinary teachers with an appetite for experimentation to trial Writelike and send us the results.

So we're asking for your help.

We have written some basic instructions on how to run trials of varying levels of rigour, but a nice and easy end-of-year experiment would be:

  1. Choose some Grade 6-9 classes that are learning about narrative and creative writing.
  2. Nominate a two-week period (such as the two weeks before Christmas holidays—not long now!).
  3. Get students to write pre-test narrative pieces.
  4. Run one of the Writelike narrative courses (with a couple of classroom sessions but mostly as homework).
  5. Get the students to write post-test narrative pieces.
  6. Remove any indication of which samples belong to which group, and send them to us for grading (or grade them yourself; we're just trying to save you time).
  7. We send the graded samples back; you sort them into their original groups, let us know the scores, and we look at the difference.

If you want to do something like this, feel free to get in touch—we’ll do our best to take on some of the logistical or grading load, or support you any other way we can.
