Stephen Hale at the FCO has an excellent, interesting and important post about measuring the success of the London G20 Summit site.
With wonderful openness and transparency, Stephen has set out some of the factors by which the site’s success could be measured, along with the results. It’s fascinating reading, and provides lots of lessons for anyone approaching an engagement project like this.
Indeed, this ties in with Steph’s recent (and overly-modest) post about the achievements of the engagement bods at DIUS over the last year or so. He wrote:
We still haven’t nailed some of the basics like evaluation, [or] the business case
Figuring out whether or not something has actually worked is terrifically important, and the long-term efficacy of online engagement relies on this nut being cracked.
Stephen’s post highlighted some really good practice here: outline what your project aims to do, and come up with some measures around those aims so you can tell whether or not it succeeded.
As Steph mentions, having an up-front business case is really important – a written-down formulation of what the project actually is and what it ought to achieve.
Now, business cases and evaluation criteria can be developed in isolation and on a project-by-project basis. I wonder, though, how much more value could be created by developing a ‘package’ of evaluation which could be used as a foundation by everyone involved in government online engagement?
Of course, each project has its own unique things that will need to be measured and tested, but surely there are some basic things that every evaluation exercise would need to look at?
How about creating some common evaluation documents, so that every project records the basic, common stuff as well as its own unique bits? That way, some kind of comparative analysis would be possible, especially if everyone submitted their results into a common database.
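Purely to make that concrete, here’s a minimal sketch (in Python, and with entirely hypothetical field names – this isn’t any agreed standard) of the kind of basic, common record each project might submit alongside its project-specific measures:

```python
# A hypothetical sketch of the "basic, common stuff" every engagement
# project might record alongside its project-specific measures.
# Field names are illustrative assumptions, not an agreed framework.

from dataclasses import dataclass, field

@dataclass
class EngagementEvaluation:
    project_name: str             # e.g. the London Summit site
    department: str               # owning department, e.g. "FCO"
    stated_aims: list[str]        # the up-front business case objectives
    audience_reached: int         # unique visitors / participants
    contributions_received: int   # comments, responses, submissions
    outcomes: list[str]           # what actually changed as a result
    lessons_learned: str = ""     # free text for comparative analysis
    project_specific: dict = field(default_factory=dict)  # the unique bits
```

If every project filled in something like that and dropped it into a shared store, the comparative analysis would come almost for free, with the unique bits kept to one side in their own bag.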
Just how hard would it be to come up with a common framework for online engagement projects? I think it is worth a shot.
{ 5 comments }
I was really impressed by the transparency of this. Others would do well to watch and learn.
As budgets get squeezed, as they will, the need for provable results is only going one way. Motto: Be a Boy Scout and be prepared.
I’m wary of people using the London Summit as a role model, though. Too big, too global.
I’ve never felt comfortable with short-term, fixed-term projects like this. The examples of true ‘engagement’ which I’ve experienced have been when people open themselves up to the wider world, not for one specific day, but as an integral part of their work and/or life.
Let’s think about the word ‘engagement’. When two lovers get engaged, they aren’t doing so – in theory, at least! – just to organise a large one-off event, namely the wedding. It’s the first step in a much longer process leading to ’til death us do part’.
Must – get – evaluation – post – written…
Re: business case – it may just be a point of terminology, but I see the basic task of project definition (what it is you’re doing, why, how, with what and to whom) as slightly separate from the business case, which to me is about having an evidence base justifying why the project will achieve its aims, based on prior examples. Maybe I just have rather high hopes for my business cases.
@Simon: Agreed, it’s an unusual example. But it’s not so much the success or otherwise of the engagement that I find really positive as the fact that it had clearly defined goals, some proper welly behind it, and is being thoroughly evaluated – at least partially in public. Sure, we’ve seen evaluations like the Hansard Society’s Digital Dialogues before, but defining, implementing and evaluating your own digital engagement is still unusual and rather impressive.
Thanks for pointing this out, Mr Briggs. I’ve forwarded it on to the team. It’s very interesting to see some of the info around big events and how different departments are measuring their success.
There appear to be several things being discussed here:
1) Evaluation of online activity (the website around the G20 summit, online consultation etc.); and
2) Digital engagement which is about open and transparent government practice that begins with a conversation and builds into a meaningful ongoing relationship.
The types of evaluation metrics needed for these two distinct activities are very different. It becomes a question of volume versus quality of interaction (in its most basic form).
At the moment there are plenty of guides and frameworks that outline how to evaluate consultation and engagement activities. However, there isn’t anything that outlines how integrated (on- and offline) approaches should be measured, nor how relationships are built and maintained over time.
Steph’s and Stephen’s work are great starting points, but, as Simon quite rightly points out, we need to move away from one-off events/projects that engage citizens and towards involving citizens in the policy/decision-making process. This, of course, would require a different set of metrics.
A common framework, though, is difficult to achieve. When I was working @ DIUS with Steph (hi!), I invited people to contribute to an evaluation framework, but only a couple of people responded – probably because it’s an emerging area.