Does your technology make learning better? Does it make assessment better? Does it make learning more enjoyable? These are the key questions asked by Professor Richard Kimbell from Goldsmiths when he's looking at technology, and he found a problem with all three in e-portfolios. They need to change.
Currently, performance portfolios are created as an end result of project work. With teachers increasingly aware of, and communicating, what will gain a good grade, we end up with a project, and therefore a portfolio, that is not real: a work of fiction with no real substance. It is, says Kimbell, one of the reasons girls do better than boys: girls have more patience and creativity when it comes to presenting the results in a well-finished manner.
Cue Project E-Scape: this project was about generating real-time performance portfolios and finding new ways of assessing them. Initially, the idea began on paper.
A change in pedagogy
The tasks are real: repackaging lightbulbs to make the packaging reusable and multifunctional. The results: the box should be hexagonal, with a taper for the narrow end of the bulb. If you get enough of them, you end up with a sphere surrounding the lightbulb. You can cut the ends to create lettering or animals, which are then projected around the wall. The projects are entitled "Your name in lights" or "Jack-In-A-Box light". You can see an example of a project in this video.
Students, in their projects, are handed a script by the teacher, which choreographs their activity but does not dictate it. It's a scaffold for some improv. These students end up working like engineers, with the teacher in a technician role: "you could do it this way, or that way, or this way. It's your call". Teachers hate it, seeing their role reduced from sage on the stage to very much the guide on the side.
The need to make assessment digital
The project became digital as a result of an argument, an argument between two students about where their project should go. If only the teacher could capture that discussion it would make such a difference to the final assessment, providing a way to fill a gap in the learning process which is rarely assessed, if at all.
E-Portfolios, though, have three core problems. Firstly, they are generally works of fiction, created in a sterile ICT suite or on a laptop in a student's bedroom, not in the workshop or art room where the action (and learning) was happening. Secondly, it's a secondhand activity, digitally constructed as an afterthought to the learning itself. Finally, what kids tell you they're learning is different from what they write down in a portfolio.
So, E-Scape asked if they could capture, in a portfolio, the learning that was happening in typical, messy, complex classrooms. They answered with handheld learning devices and collaborative co-creation of ideas: ideas are created, swapped around and extended by team-mates. As work is done, step by step, it is uploaded dynamically to the e-portfolio website. Each stage of the learning 'build' can be accessed in a browse mode, or examined in greater detail. It's real-time, so the teacher can see and hear everything, all of the time, and can act on the spot or react later. You can see more of the process in this video.
How can this be assessed?
One potential methodology is based upon the law of comparative judgement. Think of an eye test, where we are asked which spot is sharper, the one on the left or the one on the right. With only two options, we can say which one is better without considering, or even knowing, why. Taking this further, the E-Scape team's approach to their especially hard-to-judge, non-identical projects is to use a comparative pairs methodology (pdf). At its simplest, seven judges assess pairs of projects at a time, each judge marking 17 pieces of work. Each judge decides which of the pair is better, then moves on to the next pair for the first round.
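To make the idea concrete, here is a minimal sketch of how pairwise "this one is better" judgements can be turned into quality scores, using a simple Bradley-Terry style iteration. This is my own illustration of the general comparative judgement principle, not the E-Scape team's actual algorithm; the function name and data are hypothetical.

```python
from collections import defaultdict

def bradley_terry(comparisons, iters=100):
    """Estimate a quality score per item from pairwise judgements.

    comparisons: list of (winner, loser) tuples, one per judgement.
    Returns a dict mapping each item to a positive score; a higher
    score means the item won its comparisons more convincingly.
    """
    items = {i for pair in comparisons for i in pair}
    wins = defaultdict(int)
    for winner, _ in comparisons:
        wins[winner] += 1

    # Start every item at the same score, then iterate the standard
    # Bradley-Terry update: score = wins / sum(1 / (own + opponent)).
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new_p = {}
        for i in items:
            denom = 0.0
            for winner, loser in comparisons:
                if i in (winner, loser):
                    opponent = loser if i == winner else winner
                    denom += 1.0 / (p[i] + p[opponent])
            new_p[i] = wins[i] / denom if denom else p[i]
        # Normalise so scores stay on a comparable scale.
        total = sum(new_p.values())
        p = {i: v * len(items) / total for i, v in new_p.items()}
    return p

# Hypothetical judgements: A beat B, A beat C, B beat C.
scores = bradley_terry([("A", "B"), ("A", "C"), ("B", "C")])
ranking = sorted(scores, key=scores.get, reverse=True)  # ["A", "B", "C"]
```

Sorting the resulting scores gives exactly the rank order the E-Scape process needs, without any judge ever having to say *why* one portfolio is better than another.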
In a second round, the 'core' of median performances are taken and worked on further to create a rank order of evenly spaced performances. Using the resulting curve of performance, grade boundaries can be created retrospectively to award a grade, and the margin of error between the highest and lowest judge opinions becomes plainly visible. Large margins of error come down to judges disagreeing, so those portfolios are pulled out and examined further. We can also look at the judges themselves, and how consensual each one is with the rest of the judging team (the principle of moderation, which Scottish schools already practise). Those who are too harsh or too lenient can stimulate discussion as to why a project might be stronger or weaker. In this way, the formative assessment informs the judges and teachers.
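The judge-consensus check described above can be sketched very simply: compare each judge's decisions against the panel majority on the pairs they saw. Again, this is an illustrative toy, not the published E-Scape procedure; the function and the sample judgements are invented.

```python
from collections import Counter

def judge_agreement(judgements):
    """Rate each judge's agreement with the panel majority.

    judgements: dict mapping judge name to a list of
    (pair, chosen_winner) tuples, where pair is a tuple of the
    two portfolios compared. Returns a dict of agreement rates
    (1.0 = always with the majority, 0.0 = never).
    """
    # Tally every judge's vote for each pair, then take the majority.
    votes = {}
    for decisions in judgements.values():
        for pair, winner in decisions:
            votes.setdefault(pair, Counter())[winner] += 1
    majority = {pair: c.most_common(1)[0][0] for pair, c in votes.items()}

    rates = {}
    for judge, decisions in judgements.items():
        agree = sum(1 for pair, w in decisions if majority[pair] == w)
        rates[judge] = agree / len(decisions)
    return rates

# Hypothetical panel: two judges prefer A, one prefers B.
panel = {
    "judge1": [(("A", "B"), "A")],
    "judge2": [(("A", "B"), "A")],
    "judge3": [(("A", "B"), "B")],
}
rates = judge_agreement(panel)  # judge3 stands out with rate 0.0
```

A judge whose rate sits well below the others is exactly the "too harsh or too lenient" case worth discussing, which is the moderation principle in miniature.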
The reliability coefficient of all this? 0.93. It's virtually faultless, and no other assessment system comes close to this level of reliability in its outcomes. The team are now working on a third phase in which pairs are selected automagically after each judgement is made, making the process as efficient as possible.
If you want to take more away from this model, and from the innovation in teaching, learning and assessment it represents, I cannot recommend the interim reports on the TERU website highly enough: Phase 1 and Phase 2. You might also want to watch this 30-minute programme on new e-assessment ideas, in which the E-Scape project is featured, and follow Professor Kimbell discussing the assessment element of the project in this programme.
Pic: Moleskin PDA