Blessed election

Not since the year of someone else’s lord 1847 has there been a contested election for Chancellor of Cambridge University. That year Prince Albert, winner of Britain’s Got Moustache and celebrity Saxon, beat off god-bothering fusspot the Earl of Powis by about 120 votes. The last time a vote of any kind took place for the position was in 1950, when Indian fashion icon Jawaharlal Nehru withdrew from the contest at the last moment to leave pipe-smoking chocks-awayer Lord Tedder as the unopposed winner. Just 200 people bothered to vote, despite Neighbours not being on telly then.

From 1976 to 2011 the university’s Chancellor was the Duke of Edinburgh. This year he decided that, aged 90, he’d take early retirement, and subsequently the university’s Nomination Board – a cross between Hogwarts’ Sorting Hat and a Ouija Board – settled on Lord Sainsbury as its preferred candidate to replace him. The Board was not, apparently, expecting a contest; but a contest there has been.

The candidates: Lord Sainsbury of Bagging Area (the Chequebook party); Abdul Arain, Mill Road shopkeeper (the Stop Sainsbury party); Michael Mansfield QC (the Establishment Law-Snore party); and Brian Blessed (the Energetic, Loud, Peri-Marrying, Bonkers party).

The election took place yesterday and today and the turnout was tremendous, well into the thousands. As the holder of a Cambridge MA I was entitled to a vote and today I gladly scaled the ivory tower for an hour or so. The voting was scandalously well-organised: marquees, chairs for the doddery, free alumni pins with the university crest, porters in top hats hustling you everywhere, and huge piles of gowns to borrow since you can’t scratch your bum in the university without spending at least half an hour in Ede and Ravenscroft.

I haven’t worn a gown in anger for several years. I still haven’t even returned to college for the termly free nosh, and the undergraduate gown I bought for a tenner on my first day in nineteen-umpty-ump now serves only as an emergency fancy dress cape, dusted off for Darth Vader impersonations and little else.

Today I slipped on the borrowed robes and briefly rejoined the tribe. I thanked the porters, because I know my place; they, meanwhile, gossiped like old queens about toffs in top hats. I queued dutifully at the side door of the Senate House waiting for the appropriate desk to clear; immediately behind me was former Labour MP and cabinet minister Chris Smith, now Baron Smith of Finsbury Park.

What do you call two gay Smiths in the Senate House? Punchlines to the usual address.

At the desk they looked up my details using an app on university-issued iPads and— no, don’t be silly. They looked up my name in the Cambridge University Big Book of Names, no doubt printed specially for the occasion. I was then given a ballot paper and directed to a polling booth. This election uses the Single Transferable Vote system; oddly, rather than print the candidates’ names on the ballot paper and ask you to number them in order, they printed the numbers one to four and asked you to write in the names.

I voted, thanked the closest porter, shrugged off my gown into grateful hands for recycling into the queue, and escaped the Senate House bubble back into the real world.

Then I went to The Anchor, where Brian Blessed held court for a couple of hours and was kind enough to pose for photographs. Nice man. Totally bonkers, obviously.

Yes, of course I voted for him.

Update: Lord Sainsbury won in the first round of voting, meaning he had more than 50% of the first preferences of those who voted.
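For the mechanically minded: a first-round win under single-winner STV simply means that one candidate's pile of first preferences exceeds half the valid ballots, so no eliminations or transfers are needed. A minimal Python sketch of that check, with entirely made-up ballots:

    from collections import Counter

    # Each ballot lists candidate names in preference order (made-up data).
    ballots = [
        ["Blessed", "Arain"],
        ["Sainsbury"],
        ["Sainsbury", "Mansfield"],
        ["Mansfield", "Blessed"],
        ["Sainsbury"],
    ]

    # Count first preferences only.
    first_prefs = Counter(ballot[0] for ballot in ballots if ballot)
    winner, votes = first_prefs.most_common(1)[0]

    if votes > len(ballots) / 2:
        print(f"{winner} wins in the first round with {votes} first preferences")
    else:
        print("No first-round winner: eliminate last place and transfer their ballots")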


Market research versus usability testing

Yesterday’s post about the ORG report earned me two interesting comments from James Gilmour. They spurred me into more research, in particular the Cragg Ross Dawson report he referred to in the first comment. I decided another post on the subject was in order; normal nonsense will resume in due course.

I’ve found two reports by Cragg Ross Dawson; I believe James referred to the ballot paper design research report, but the later STV ballot paper report also makes interesting reading.

As he said, these might not have been “focus groups” in the sense of a bunch of people sitting round a table munching on biscuits, but neither were they proper usability tests.

Usability testing means asking people to perform tasks in as realistic a scenario as possible (no prompting, no helping, no detailed instructions in advance) and observing what they do and whether they succeed or fail. It gives you objective results, rather than the subjective impressions that fill the Cragg Ross Dawson reports.

Cragg Ross Dawson aren’t usability professionals; they’re a market research company. There’s a huge difference.

Some of my problems with their approach:

  • The ‘Topic Guide’ in the first report suggests that test users, after trying out a ballot, were asked questions such as “is it clear to them who and what they were voting for?” and “how clearly does it explain how to use the ballot paper?”. A true usability test observes the test users to answer those questions – watch, don’t ask. People are very bad at explaining this kind of thing, often to the point of self-delusion. They’ll say things were easy when observation showed they had significant problems. When asked why they did something, they’ll invent entirely spurious explanations (not maliciously, but because they were asked and a plausible answer just pops into their head).
  • It appears in this case that every test user tried every design of ballot, and then explained which one they preferred and why. This was a bad idea: from the second ballot onwards they were more familiar with the process, and thus biased. To get a fair view of which ballot design was easiest to use, each user should have tried only one design; the success rates of each design could then be compared once the test was complete (see the sketch after this list). (And then the best design could be modified and the test performed again with new test users, to verify that the new design was better and not worse.)
  • Look at section C, ‘Outcome’. In a true usability test this section would summarise the success rates for each design of ballot. It doesn’t; it just reports ‘preferences’ for one design over another. It’s full of phrases like ‘regarded as’, ‘felt that’, ‘thought that’. Which design was most successful – helped most people vote for the candidate(s) they wanted to vote for? It doesn’t say!
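Here's the sketch promised above: a hypothetical between-subjects test in which each participant uses exactly one design, scored by comparing success rates afterwards. The designs, counts and the simple two-proportion z-test are all my own invention for illustration; none of this comes from the Cragg Ross Dawson reports.

    from math import sqrt

    # Invented results: each participant used exactly one ballot design.
    results = {
        "Design A": {"successes": 42, "participants": 50},
        "Design B": {"successes": 31, "participants": 50},
    }

    for name, r in results.items():
        rate = r["successes"] / r["participants"]
        print(f"{name}: {r['successes']}/{r['participants']} ({rate:.0%} success)")

    # Two-proportion z-test: is the difference bigger than chance would explain?
    a, b = results["Design A"], results["Design B"]
    p1 = a["successes"] / a["participants"]
    p2 = b["successes"] / b["participants"]
    pooled = (a["successes"] + b["successes"]) / (a["participants"] + b["participants"])
    se = sqrt(pooled * (1 - pooled) * (1 / a["participants"] + 1 / b["participants"]))
    z = (p1 - p2) / se
    print(f"z = {z:.2f} (|z| > 1.96 suggests a real difference at the 5% level)")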

I did dig out some actual usability data from the reports:

  • First report, section 2.1, “Initial impressions”: “on first sight of the ballot papers most voters looked initially at the list of parties and candidates; on the basis of observation by the moderators, few seemed to start at the top and read the instructions”. And that’s exactly what I would expect to see. It’s been proven time and time again: people don’t read instructions (there are always exceptions, but they are exceptions).
  • Second report, Chapter 3: "despite the view that the designs were straightforward, some respondents made mistakes; 13 out of 100 ballot papers were unintentionally spoiled". Followed by "it is worth noting that of the 13 respondents who spoilt their initial ballot papers, 9 realised their mistakes and corrected subsequent papers – many admitted they had voted before reading instructions carefully". (A rough error margin for that 13% figure is sketched just below.)
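And the promised error margin: 13 spoiled papers out of 100 is a small sample, so the true spoil rate could plausibly lie in quite a wide range. A quick sketch using the standard normal approximation (only the 13-out-of-100 figure comes from the report; the rest is textbook arithmetic):

    from math import sqrt

    spoiled, total = 13, 100                     # figures from the second report
    p = spoiled / total
    margin = 1.96 * sqrt(p * (1 - p) / total)    # 95% confidence interval
    print(f"Spoil rate {p:.0%}, 95% CI roughly {p - margin:.1%} to {p + margin:.1%}")

In other words, somewhere between about 6% and 20% of voters might be expected to spoil such a ballot at the first attempt.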

That second point is damning. People said that the designs were straightforward, but the reality was different. That’s why true usability tests are so important. The fact that people corrected subsequent papers just confirms my point above: from the second ballot design, they’re biased. Not to mention that in a real election they don’t get a second chance to vote.

The goal of a ballot paper design is to allow voters to vote for the candidate(s) of their choice, and for that vote to be counted, as efficiently as possible. This is easy to test objectively, and to retest with improved designs, until there is sufficient confidence in the results. This wasn’t done. Market research isn’t usability testing.

In the actual election, we know that voters made marks on the ballot paper that were mostly, but not always, valid. How many people successfully voted for the candidate(s) of their choice? We have no idea.


How monitor resolution nearly cost the SNP victory in Scotland, and other stories

The recent elections across the UK included a number of e-voting and e-counting pilots. And for the first time, official observers were allowed to attend.

The Open Rights Group called for volunteer observers in February and has now released a report of their observations. You can guess the overall summary: no confidence in the results.

I’ve skimmed the report; it makes scary reading.

It seems that few places were geared up for observers; in at least one case an official observer was granted less access than the media. The Electoral Commission stepped in more than once to guide the election administrators.

In many places the software vendors appeared more in control than the returning officer. There were unguarded PCs lying around with open ports. There was no certification of voting equipment. A hodge-podge of software was used, including programs with known unpatched vulnerabilities.

In one e-voting pilot voters received a two-part receipt containing a ‘voting receipt’ – which seems to be a sixteen-character hex number – and a ‘ballot signature’, which looks like a cryptographic hash. The purpose of the receipt is to allow the voter to verify that their vote was counted. But one pilot gave no instructions on how to do that. Another pilot allowed people to check their receipt by downloading a 69-page PDF file which – I kid you not – appears to have been produced by opening an XML file (with no stylesheet) in Firefox and printing to PDF. The voter must search this PDF file for a line containing their sixteen-character ‘voting receipt’ – something like this:

<ballot_id value="123456789abcdef0" index="123" />

This is, of course, mad.
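Mad partly because a usable check would have been so little work. Here's a minimal sketch of a lookup tool for a file like the one quoted above; the file name, and the assumption that each entry is a ballot_id element with a value attribute, are guesses based on that single fragment:

    import xml.etree.ElementTree as ET

    def receipt_was_counted(xml_path: str, receipt: str) -> bool:
        """Return True if a <ballot_id> element with the given value exists.

        The element structure is a guess from the one fragment quoted
        above; the real pilot's schema may well differ.
        """
        tree = ET.parse(xml_path)
        return any(elem.get("value") == receipt for elem in tree.iter("ballot_id"))

    # Hypothetical usage:
    if receipt_was_counted("ballots.xml", "123456789abcdef0"):
        print("Receipt found: your ballot appears in the published list")
    else:
        print("Receipt not found")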

There appears to be no way to check the ‘ballot signature’ hash, and no clue as to why that even exists. And the file does not tell you anything else: the location of the election, for example. It certainly gives you no confidence that your vote was counted correctly.

Most publicity at the time focused on the problems with the Scottish Parliamentary elections, in particular the large number of spoiled ballots (which in 16 of the 73 constituencies was greater than the majority of the winning candidate). The report is unsurprisingly harsh here. Voters were given misleading and contradictory instructions. The layout of the ballot papers didn't match user expectations: the regions appeared on the left and the constituencies on the right, but most people thought the constituencies more important and assumed they were on the left.

And despite advice from usability professionals, they didn't perform any valid usability tests on the ballot paper. Instead they presented a set of sample ballots to a number of focus groups and asked for opinions. This isn't a valid usability test. And in any case, none of the sample ballots had the constituencies on the left, where people expected them.

This was doomed to failure. As anyone with any usability experience could tell you from a glance at the ballot, many people saw the large text saying ‘You have two votes’, ignored the tiny text saying ‘vote once in this column’ for each of the two columns – constituency and region – and believed they could vote twice in the same column. And that's what many of them did.

A simple fix – two pieces of paper instead of one, with each one saying ‘vote once’ – would have solved that problem. Still, it's only an election, usability doesn't matter…

The election result in Scotland was close: the SNP emerged with 47 seats, Labour 46. But without a last-minute objection by an SNP candidate at one count, Labour would have won. The reason? The resolution of someone's monitor.

It was the final set of results to declare: the regional seats for the Highlands and Islands. The SNP were then two seats ahead nationally, with seven undeclared. One of the SNP candidates had been keeping an eye on the count, and reckoned the SNP had about 35% of the vote. But when the returning officer showed the calculated results to the candidates before the official declaration, it showed Labour with four seats and the SNP with none – unlikely if the SNP had anywhere near 35% of the vote. That would have given Labour overall victory in the national election.

As the returning officer headed to the podium, the candidate officially challenged the result. After some resistance the returning officer agreed to show the workings (in the Scottish regional elections it's not a one-member-one-seat winner-takes-all system).
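For context, those workings are the d'Hondt method: each party's regional vote is repeatedly divided by one more than its total seats so far in the region (constituency seats included), and each of the seven regional seats goes in turn to the party with the highest quotient. A minimal sketch with entirely invented figures:

    def dhondt(regional_votes, constituency_seats, num_seats=7):
        """Allocate regional seats by d'Hondt; each party's divisor starts
        at (constituency seats already won in the region) + 1."""
        total = dict(constituency_seats)
        regional = {party: 0 for party in regional_votes}
        for _ in range(num_seats):
            # The highest current quotient wins the next seat.
            winner = max(regional_votes,
                         key=lambda p: regional_votes[p] / (total.get(p, 0) + 1))
            total[winner] = total.get(winner, 0) + 1
            regional[winner] += 1
        return regional

    # Invented vote counts and constituency results, for illustration only.
    votes = {"SNP": 85000, "Labour": 60000, "LibDem": 30000, "Tory": 25000}
    won = {"SNP": 2, "Labour": 5, "LibDem": 1, "Tory": 0}
    print(dhondt(votes, won))

Leave one party's votes out of the regional_votes dictionary – the spreadsheet-column equivalent of scrolling off-screen – and the method will happily allocate that party zero seats without complaint.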

It emerged that the SNP's votes hadn't been included: the large number of parties contesting the election meant that the SNP's column had scrolled off the right of the Excel spreadsheet window (yes, that's right). The true result gave Labour three seats and the SNP two, and the SNP became the largest party in the Scottish Parliament.

The returning officer was deeply apologetic. I bet.

The Open Rights Group report makes the point that many computer scientists and related geeks and nerds, despite traditionally being early adopters, are concerned about voting technologies. It recommends that further e-voting and e-counting trials be suspended until more research has been performed (and, unsaid, until politicians get a clue).

Sadly I suspect that the only way to prevent a headlong rush into e-voting hell is to engineer a major hack: an election apparently won by someone who wasn't even standing, with 110% of the vote.

But would even that work? The politicians would probably prosecute the messenger and carry on regardless. As usual.
