The recent elections across the UK included a number of e-voting
and e-counting pilots. And for the first time, official observers were
allowed to attend.
The Open Rights Group
called for volunteer observers in February and has now released a report of their
observations. You can guess the overall summary: no confidence in
the results.
I’ve skimmed the report; it makes scary reading.
It seems that few places were geared up for observers; in at least one
case an official observer was granted less access than the media. The
Electoral Commission stepped in more than once to guide the election
administrators.
In many places the software vendors appeared more in control than the
returning officer. There were unguarded PCs lying around with open
ports. There was no certification of voting equipment. A hodge-podge
of software was used, including programs with known unpatched
vulnerabilities.
In one e-voting pilot voters received a two-part receipt containing a
‘voting receipt’ – which seems to be a sixteen-character hex number –
and a ‘ballot signature’, which looks like a cryptographic hash. The
purpose of the receipt is to allow the voter to verify that their vote
was counted. But one pilot gave no instructions on how to do that.
Another pilot allowed people to check their receipt by downloading a
69-page PDF file which – I kid you not – appears to have been produced
by opening an XML file (with no stylesheet) in Firefox and printing to
PDF. The voter must search this PDF file for a line containing their
sixteen-character ‘voting receipt’ – something like this:
<ballot_id value="123456789abcdef0" index="123" />
This is, of course, mad.
There appears to be no way to check the ‘ballot signature’ hash, and
no clue as to why that even exists. And the file does not tell you
anything else: the location of the election, for example. It certainly
gives you no confidence that your vote was counted correctly.
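Mechanically, the check the pilot expected of voters is trivial to script. Here's a minimal sketch, assuming the raw XML behind that PDF were published instead, and that every ballot appears as a ballot_id element like the one quoted above – the element and attribute names are inferred from that single line, so treat them as guesses:

    # receipt_check.py - look up a 'voting receipt' in the published ballot file.
    # Assumes the raw XML, with one <ballot_id value="..." index="..."/> element
    # per ballot; the element and attribute names are guessed from one example.
    import sys
    import xml.etree.ElementTree as ET

    def find_receipt(xml_path, receipt):
        """Return the ballot's index if the sixteen-character hex receipt
        appears in the file, or None if it does not."""
        for ballot in ET.parse(xml_path).iter("ballot_id"):
            if ballot.get("value", "").lower() == receipt.lower():
                return ballot.get("index")
        return None

    if __name__ == "__main__":
        index = find_receipt(sys.argv[1], sys.argv[2])
        print(f"found at index {index}" if index else "receipt not found")

A check like this would at least be usable; it still wouldn't tell you that the vote behind the receipt was counted as cast.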
Most publicity at the time focused on the problems with the Scottish
Parliamentary elections, in particular the large number of spoiled
ballots (which in 16 of the 73 constituencies exceeded the winning
candidate's majority). The report is unsurprisingly
harsh here. Voters were given misleading and contradictory
instructions. The layout of the ballot papers didn’t match user
expectations (the regions appeared on the left, the constituencies on
the right – most people thought the constituencies more important, and
assumed they were on the left).
And despite advice by usability professionals, they didn’t perform any
valid usability tests on the ballot paper. Instead they presented a
set of sample ballots to a number of focus groups and asked for
opinions. This isn’t a valid usability test. And in any case, none of
the sample ballots had the constituencies on the left where people
expected them.
This was doomed to failure. As anyone with any usability experience
could tell you from a glance at the ballot, many people saw the large
text saying ‘You have two votes’, ignored the tiny text saying ‘vote
once in this column’ for each of the two columns – constituency and
region – and believed they could vote twice in the same column. And
that’s what many of them did.
A simple fix – two pieces of paper instead of one, with each one
saying ‘vote once’ – would have solved that problem. Still, it’s only
an election, usability doesn’t matter…
The election result in Scotland was close: the SNP emerged with 47
seats, Labour 46. But without a last-minute objection by an SNP
candidate at one count, Labour would have won. The reason? The
resolution of someone’s monitor.
It was the final set of results to declare: the regional seats for the
Highlands and Islands. The SNP were then two seats ahead, with seven
undeclared. One of the SNP candidates had been keeping an eye on the
count, and reckoned the SNP had about 35% of the vote. But when the
returning officer showed the calculated results to the candidates
before the official declaration, it showed Labour with four seats and
the SNP with zero – unlikely if the SNP had anywhere near 35% of the vote.
This would give Labour overall victory in the
national election.
As the returning officer headed to the podium, the candidate
officially challenged the result. After some resistance the returning
officer agreed to show the workings (the Scottish regional seats are
allocated proportionally by the d'Hondt method, not by a
one-member-one-seat winner-takes-all count).
It emerged that the SNP’s votes hadn’t been included: the large number
of parties contesting the election meant that the SNP had scrolled off
the right of the Excel spreadsheet window (yes, that’s right). The
true result gave Labour three seats and the SNP two, making the SNP
the largest party in the Scottish Parliament.
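For readers unfamiliar with d'Hondt counting: each party's regional vote is repeatedly divided by one more than the number of seats it already holds (constituency seats included), and each of the seven regional seats goes to the highest remaining quotient. Here's a toy sketch – the party names, vote totals and constituency seats are invented, not the real Highlands and Islands figures – showing how dropping one party's column quietly flips seats to everyone else:

    # dhondt_demo.py - toy d'Hondt allocation for a Scottish-style region.
    # All vote totals and constituency seat counts below are invented for
    # illustration; they are NOT the real Highlands and Islands 2007 figures.

    def dhondt(votes, constituency_seats, regional_seats=7):
        """Award each regional seat to the party with the highest quotient
        votes / (seats already held + 1), counting constituency seats."""
        held = dict(constituency_seats)        # seats feeding the divisor
        won = {party: 0 for party in votes}    # regional seats awarded here
        for _ in range(regional_seats):
            best = max(votes, key=lambda p: votes[p] / (held.get(p, 0) + 1))
            won[best] += 1
            held[best] = held.get(best, 0) + 1
        return won

    votes = {"Labour": 41_000, "SNP": 35_000, "Tory": 30_000, "LibDem": 20_000}
    constituency = {"LibDem": 3}               # constituency seats already won

    print(dhondt(votes, constituency))  # {'Labour': 3, 'SNP': 2, 'Tory': 2, 'LibDem': 0}
    votes.pop("SNP")                    # the column that scrolled off the sheet
    print(dhondt(votes, constituency))  # {'Labour': 4, 'Tory': 3, 'LibDem': 0}

Omit a party and its quotients never enter the comparison at all: its seats are silently shared out among everyone else, with no error, no warning, and nothing obviously wrong on the screen.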
The returning officer was deeply apologetic. I bet.
The Open Rights Group report makes the point that many computer
scientists and related geeks and nerds, despite traditionally being
early adopters, are concerned about voting technologies. It recommends
that further e-voting and e-counting trials be suspended until more
research has been performed (and, unsaid, until politicians get a
clue).
Sadly I suspect that the only way to prevent a headlong rush into
e-voting hell is to engineer a major hack: an election apparently won
by someone who wasn’t even standing, with 110% of the vote.
But would even that work? The politicians would probably prosecute the
messenger and carry on regardless. As usual.
ORG report on Scottish Elections May 2007 – comment 1
The ORG report on the recent elections is extremely valuable, but so far as the Scottish Parliament elections are concerned, it contains some small but significant mistakes and omits a great deal of background information that is extremely relevant to the main point of failure and the main point of criticism. I have no connection with any of the organisations responsible for these elections, but I was an Accredited Observer (independent, private individual) and attended the counts at the Edinburgh Counting Centre.
The recommendation to present the two ballot papers for the Scottish Parliament elections on a combined “ballot sheet” came from the Arbuthnott Commission because research had shown that voters did not understand the function of the two AMS votes and did not appreciate the importance of the regional vote in determining the outcome. The Arbuthnott Commission also recommended that the regional ballot paper should be placed first on the sheet, i.e. on the left, to emphasise its importance. This is the design used for AMS elections in New Zealand, where it was adopted for the same reasons. Because this was the priority consideration, it should be no surprise that only designs of this kind (regional ballot paper on the left) were compared with two separate ballot papers in the qualitative research.
Repeated mention is made of the use of “focus groups” and criticism made of the use of such groups for evaluating alternative designs. But so far as I can see, NO focus groups were involved. The Cragg Ross Dawson report describes the selection of the “test voters” in these terms:
“2. Methodology
• 100 short qualitative interviews were conducted in four locations in Scotland.
• interviews were semi-structured and lasted about 20 minutes.
• respondents were recruited on the street and interviewed immediately, with no prior warning.
• they were asked to use each of the five proposed ballot papers in polling booths as if taking part in a real election, and then discussed them with the interviewer”.
This indicates to me that the 100 voters participated individually, not in any kind of focus group.
It should be noted that the most significant finding to emerge from that qualitative research was the overwhelming preference for a combined ballot sheet (83/100) rather than two separate ballot papers (17/100). No other finding showed such a clear margin of difference.
James Gilmour
ORG report on Scottish Elections May 2007 – comment 2
The report states: “ORG notes that, in general, media coverage tended to place constituencies before regions when reporting on the Parliamentary elections.” I saw no examples of this. All the media illustrations and descriptions of the AMS ballot sheet I saw had the regional ballot paper on the left. If ORG has any examples of incorrect media coverage of the kind they suggest, it is essential that that evidence is given to the Independent Inquiry now being conducted by Ron Gould.
The ORG report mostly refers (correctly) to “rejected ballot papers”, but occasionally refers (incorrectly) to “spoilt ballot papers”. These two terms have very clear and very different meanings in electoral legislation. The problems in the Scottish Parliament elections were with “rejected ballot papers”. The word “spoilt” also conveys the implication that the voters whose ballot papers were rejected had, necessarily, made mistakes. In some cases this was certainly true, most commonly by placing two Xs in the regional vote column and consequently making no mark in the constituency vote column.
Constituency ballot papers that were blank were correctly “rejected”, but it would be quite wrong to assume that all such rejected papers had been left blank by mistake. The differences in the numbers of rejected regional ballot papers and rejected constituency ballot papers show that, in many constituencies, the voters had cast a valid regional vote (one X) but did not vote in the constituency. I know that this was done deliberately in some cases because I have been consulted about it by voters since the election and have read in newspapers and blogs about voters who also did exactly that and did it intentionally. The interpretation of the numbers and proportions of “rejected ballot papers” is thus more complex than suggested in the ORG report. I have done a great deal of analysis of the data on rejected ballot papers and there are clearly very significant differences between electoral regions and among constituencies within most of the electoral regions.
There were obvious design mistakes in the AMS ballot papers, in particular the omission of the “directional” arrows above the voting columns on the ballot sheets used in the Glasgow and Lothians electoral regions. However, the real puzzle is why so many Scottish voters made genuine mistakes in completing the combined ballot sheets. Such combined ballot sheets are used for AMS elections in both New Zealand and Germany. In both countries the proportions of rejected ballot papers are very much lower than we saw in Scotland on 3-4 May.
James Gilmour
Re: ORG report on Scottish Elections May 2007 – comment 2
There is no doubt that we had access to less information overall when examining the Scottish elections compared with the English pilots. This was in part because we started later in Scotland, because we'd been advised that we wouldn't be able to observe in Scotland, and also because the Scotland Office and Scottish Executive were not willing to share even a smidgeon of information with us, as the DCA did in England (though they could have done much more).
In drafting the report ORG tried to avoid pulling in too much deep background, instead focussing on what we observed in and around the elections themselves. The Arbuthnott Commission’s recommendations, whilst obviously of some importance to the process, should not have created a situation where the logical alternative of presenting the columns the other way around was not tested at all. This is poor usability practice. The testing process was a marketing-based, qualitative process which was referred to in parliamentary statements by Douglas Alexander as ‘focus groups’. Whatever the process is called, it was not usability testing.
I think you have misunderstood our point regarding media coverage. We do not claim that the ballot sheets were misrepresented, but general media reporting always discussed the constituency before the regional. I have screenshots to support this. This year and previously, print and broadcast media, the Scottish Parliament and local authorities all reported constituency results before regional, and laid them out on screen with the constituency either above or to the left of regional results. It is ORG's view that this played an important part in what voters expected from the ballot paper. As the Arbuthnott Commission argued, putting the regional column on the left would have highlighted its importance – which in ORG's view was the opposite of how voters perceived the elections: they thought of constituencies first. We do not question combining the two ballot papers into a single sheet; we question the design and the process used to support the decisions made in producing that design. A proper process might have shown that separate ballot papers were better, or that a workable combined paper could be produced. We don't know, as those tests weren't done in the correct way.
While I'm sure the report isn't perfect, I have yet to receive details of any factual errors in it. We may agree to differ on why so many ballots were rejected in Scotland, but that will come down to opinion because the papers themselves are not available for inspection.
One point that James raises is the distinction between rejected and spoilt ballots. The report does interchange those terms occasionally; we should have been clearer on that. However, it is worth noting that across Scotland and England the terms 'rejected' and 'spoilt' are used in woolly and inconsistent ways by Returning Officers and others in the election world. So perhaps getting better at our terminology – along the US lines of overvotes, undervotes, intentionally spoilt ballots and so on – would be helpful for the discussions we are sure to be continuing as the Electoral Commission starts to issue its reports.
Next week we'll be publishing the video and audio of last night's launch event, and we look forward to contributing to this ongoing debate.
Jason Kitcat
e-voting coordinator, Open Rights Group