by Tobias Escher
Although e-participation – the use of the internet to engage citizens in dialogue and policy consultation – is high on the political agenda of many Western governments, there is surprisingly little well-founded knowledge as to whether the use of such technologies and techniques really makes much of a difference.
While the most important question concerning e-participation projects is whether they have any impact on democracy, it is also the most difficult to answer, since ‘impact’ in this context is tough to define, and even tougher to measure. For example, how do you gauge the impact of an individual’s contribution to an online consultation, and what kind of impact are we trying to measure – on the citizen, on the government, or on the outcome of the decision?
To date, there are no standard measures of the effectiveness of e-participation projects. Many evaluations settle for measuring how representative the user base of a particular service is, the rationale being that online tools should aim to overcome the traditionally biased pattern of participation in which citizens who are older, better educated and have higher incomes are the most politically active. Studies of diverse e-participation projects such as online petitions and consultations consistently show that these are prone to similar biases. Particularly problematic is the digital divide in access, which compounds existing disadvantages and often leads to even stronger biases in gender distribution and education.
In contrast, targeted efforts which aim to leverage the expertise of particular groups of citizens can be very successful in engaging people, as the Hansard Society has demonstrated with its parliamentary consultations and its Digital Dialogues project.
Other criteria that have been used to evaluate the wider impact of particular e-participation efforts include an assessment of their visibility (e.g. press coverage); whether all stakeholders are represented; how well they are integrated into existing political processes; participants’ subjective sense of efficacy; and the quality of the discourse.
Data is usually gathered in the form of surveys, interviews and observations, as well as by analysing web statistics and project documentation. However, most of the collected measures are of limited interpretive value unless comparative data is available – for example, how does one decide whether spending £100 per user is a lot or very little? Comparative data can often be hard to come by, however, as some parties may be hesitant to share it or simply unable to provide it.
Apart from the problems of finding reliable indicators, the lack of evaluation of e-participation projects can also be ascribed to a lack of resources, a reluctance to carry out evaluations because practitioners want to get things done rather than look backwards, and a fear of having to admit failure.
Where evaluation has taken place, it has mainly focused on publicly-funded projects (where both the resources available and the demand for accountability are higher) rather than on bottom-up efforts or distributed campaigns. Systematic research also suffers from the lack of large-scale projects, having to focus instead on small, experimental exercises with few users.
Despite all these pitfalls, evaluation is crucial to justify the resources invested, to improve projects, and to establish what works and what does not. In acknowledgment of both the necessity and the difficulty of evaluating e-participation projects, researchers across Europe are joining up their efforts in initiatives such as DEMO-net (http://www.demo-net.org/) and the Pan European e-participation Network PEP-NET (http://pep-net.eu). However, given the many challenges outlined above, it remains to be seen whether these will eventually deliver a consistent framework for evaluation.
NOTE: Tobias Escher is a research assistant and doctoral student at the Oxford Internet Institute. On behalf of UK Citizens Online Democracy he is currently leading an effort to evaluate the major mySociety projects and will present some of the findings from this work at Headstar’s forthcoming conference e-Democracy ’08 (http://www.headstar-events.com/edemocracy08).