Tag Archives: rant

The invisible revolution

People generally notice that they’re taking part in a revolution. Barbarians-at-the-gates revolutions with Bolshevik oiks toppling Romanov nobs and their imperialist haemophiliac ways are self-evident thanks to the bodies in the streets and the widespread clampdown on interesting haberdashery. But we’re in the middle of a revolution now, a revolution most people are barely even aware of.

Two skirmishes in this revolution have taken place in the last week. They’re not the first and won’t be the last, but they’re a classic demonstration of cluelessness from the Old Guard.

The first can be summarised in one word: Trafigura. You, like me, had probably not heard the name before this week. The first inkling I had of a percolating story was a tweet from Ben Goldacre suggesting that the Guardian had been gagged from reporting Parliament. The bare facts emerged pretty quickly: this tweet revealed all to me a short time later. What followed, and the story behind it, is well documented so I shan’t bother here. The salient point to make is: welcome to a different world. In this world, sufficient eyeballs route around censorship. Maybe not immediately, but ultimately.

The second skirmish involves the Daily Mail and is ongoing. One of its columnists, Jan Moir, wrote a hateful story that appeared on Friday morning entitled Why there was nothing “natural” about Stephen Gately’s death. As with Trafigura an immediate twitstorm ensured that the bigotry was well publicised. Comments on a Daily Mail article are usually of the string-em-up, ship-em-back variety, but not on this one: the writer’s views were soundly condemned. The Daily Mail changed the story’s headline (but not its content) in an attempt to paper over the cracks, and the article’s author has issued a non-apology apology. But more importantly for the paper, companies have pulled their adverts from the story.

People power again, yes; big deal, nothing new. But it’s yet another demonstration of the crucial difference between the bolshies and the nobs. What Trafigura’s legal team Carter-Ruck and the Daily Mail’s journalists don’t get is that people – more people every day – now realise that power, real power, is bottom-up not top-down. That’s at the core of this revolution.

Jan Moir complains in her non-apology that there is “clearly a heavily orchestrated internet campaign” to accuse her of homophobia. Excuse me while I point and laugh at the deluded woman. The Daily Mail is itself massively guilty of orchestrating campaigns in a traditional top-down approach: it was the Daily Mail that hyped up the Russell Brand/Jonathan Ross story, encouraging its readers to complain en masse to the relevant authorities about something they hadn’t themselves heard. That’s orchestration. Top-down.

Moir’s story about Gately offended individuals, who commented or tweeted or blogged to make their opinions known to others. Those others read the article themselves, made up their own minds, and communicated likewise. The network effect ensured that, pretty soon, word spread to connectors (to use Gladwell’s term from The Tipping Point) like Stephen Fry, Graham Linehan, Charlie Brooker and Derren Brown who have thousands of followers. Bottom-up. (I’m not using the word viral because that makes me think of marketing, and this is more fundamental.)

The same effect a few days earlier ensured everyone knew about Trafigura and its “super-injunction” gagging order on the media, even if they hadn’t read the Guardian and put two-and-two together. People also soon learned that Wikileaks held a copy of the Minton report, which says that Trafigura’s oil waste, dumped in west Africa, was potentially toxic. Meanwhile traditional media couldn’t even mention the report’s existence. Last night Trafigura caved again, since the ants had well and truly unstitched the bag to let out the potentially toxic pussy, and the Guardian became free to publish the report. Trafigura and Carter-Ruck bodged this up in as bodgy a way as it is possible to bodge, and questions are now being asked about how on earth a judge could issue such a super-injunction in the first place. And why do we have super-injunctions anyway?

Publicity about Moir’s article ensured the Press Complaints Commission web site was hammered out of existence for a time. But the PCC won’t do anything of consequence: it’s a toothless body, controlled by the newspapers themselves, that exists as a sop to politicians afraid to regulate an industry that knows all about their cupboard-based body parts. In any case its policy is to “normally accept complaints only from those who are directly affected by the matters about which they are complaining.” Which is handy.

The way to deal with the Daily Mail is, I hope by now, obvious: bottom-up. Continue to publicise its bigotry and hatred. Make its advertisers pull out.

The two incidents I’ve highlighted aren’t isolated cases. Earlier in the year a single tweet by Graham Linehan started off a “we love the NHS” campaign on Twitter to fight back against uninformed or deceitful comments from those on the side of private health insurers in the US healthcare debate. Many right-wingers in the UK proved they Just Didn’t Get It by claiming this was a Labour party campaign: nope. Bottom-up, not top-down.

Perhaps I’m being idealistic. Perhaps this is merely a Prague Spring of freedom before the tanks roll in. But I don’t think so. People may not be brandishing pitchforks but change is afoot and the world will be a very different place in ten years or so. At the moment we’re still clanking our way to the summit of the rollercoaster, and don’t have the faintest idea what’ll happen on the way down.

3 Comments

Filed under Random

Why (some) Open Source projects suck, part 94

I hate PHP. PHP is the bunny in the road having a scratch, oblivious to impending, blood-streaked, gut-strewn death. It is the Saudi bar that serves everything except what you want. It is the mildly fragrant old man leaning too far through your passenger window giving you directions round the corner via the most perverse, circuitous route a human can devise.

In my job I work with PHP constantly. There’s no doubt that PHP is great for some projects: it’s easy to do simple stuff. But, my god, on substantial projects it’s a pain to work with compared to Python. All languages have their good points and their bad points, yes, blah, religious issue, you can write bad programs in any language, etc, but you can’t deny that Python has an elegance, a philosophy, that PHP has never had and never will.

PHP is an overflowing slop bucket of poorly named, arbitrarily parameterised, ill-considered functions encrusted with half-arsed OO bolt-ons and it doesn’t care two hoots about clarity, brevity or maintainability.

But all this is an aside, albeit a ranty, spit-flecked one, to the main point of this blog.

Ladies and gentlemen, let me introduce you to this PHP error:

Fatal error: Exception thrown without a stack frame in Unknown on line 0

You don’t need to be a PHP guru to realise that “Unknown on line 0” looks, smells and tastes like a PHP bug. There is no stack trace; no other assistance to help you track down the cause. That’s yer lot.

I’ve been seeing this error randomly on one particular cron job but, since I could find no resulting damage and my head was deeply entwined in some other code, I’ve left it alone until now. Today I investigated further. It turns out that PHP has an annoying limitation: it explodes with the entirely unhelpful fatal error above if there’s an uncaught exception within a destructor. It’s up to you to figure out which of your possibly large collection of objects is committing this particular mortal sin.
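For the curious, here’s a minimal sketch of the trap – my own reconstruction, not code from the bug report – along with the only real defence: never let an exception escape a destructor.

```php
<?php
// Reconstruction of the failure mode: an exception escaping __destruct()
// during script shutdown dies with "Fatal error: Exception thrown without
// a stack frame in Unknown on line 0" -- no class name, no stack trace.
class Leaky {
    public function __destruct() {
        throw new Exception("boom");    // uncaught: untraceable fatal error
    }
}

// The defensive pattern: catch and log inside the destructor itself.
class Careful {
    public function __destruct() {
        try {
            $this->cleanup();           // hypothetical tidy-up work
        } catch (Exception $e) {
            error_log("cleanup failed: " . $e->getMessage());
        }
    }
    private function cleanup() {
        throw new Exception("boom");
    }
}

$leaky = new Leaky();   // the fatal error fires at shutdown, far from here
```

Until every destructor in every object you ever instantiate follows the Careful pattern, you get to play guess-the-object.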

I discovered this via Google, which pointed me at PHP’s own bug tracker. The developer who reported the bug included everything a good developer should, including details of the stack trace he expected to see and the nonsense he received instead. There’s no doubt that this is a bug in PHP and the error message is entirely useless, no better than saying “bye!” and falling over.

The response:

Throwing exceptions in __desctruct() is not allowed.
Should be documented..

And so they treated it as a documentation problem, added a note to the doc, and closed the bug. See for yourself.

I cannot begin to describe how much this annoys me. Yes, document the problem. No, the problem has not been fixed. Adding a one-line note does not absolve PHP’s developers of any and all consequences, and PHP’s users are not helped in any way by this tiny documentation change. If you think they are, you are probably the type of person who mistakenly believes that engineers memorise every last wrinkle of every API they use.

But it gets worse. As in all bug trackers, especially those open to the public, there are duplicates. Duplicate reports of this particular bug get the response “Thank you for taking the time to write to us, but this is not a bug.” It’s not a bug because it’s mentioned in the documentation, of course! A magic wand is waved and all shall have presents!

This is just profoundly wrong. The only possible response in any professional development process is to reply “Thanks for the report, and sorry you hit this bug. As this has already been reported as bug XXX, we’re closing this report as a duplicate.” Bug XXX, of course, would be open and – I’d say – reasonably high priority since it can be so hard to debug.

I see similar things all the time in PHP’s bug tracker. In one case someone claims “That’s a gcc bug not a PHP bug” and promptly closes the bug, disregarding the fact that PHP is often built from source and its build process happily builds using this supposedly buggy gcc without warning the user that their code will break in undocumented ways. It’s really, really simple: just because there’s a bug in an underlying tool it doesn’t mean it’s not your problem: your users see a problem in your software, and see that you’re not helping – they don’t care about the underlying tool. “I’m sorry sir, I don’t see why we should fit a suspension system to our vehicles: the problem is with the roads.”

I’m picking on PHP but it’s by no means the only culprit, sadly. I’ve seen similar problems with ExtJS. Are there in fact any open source projects that get this right?

1 Comment

Filed under Random

A for ‘orses

I’ve been awarded the first A-level in Twitter Studies. Grade A, naturally. It was a tough course: modules in Signing Up, Following Stephen Fry, Publishing Tedious Tweets About Your Life, Tweeting and Retweeting Just The More Interesting Things, and finally the most advanced module, Getting On Channel 4 News.

It’s a relatively new subject, I’d be surprised if you’d heard of it. The only accredited qualifications agency is Avaragado’s A-levels and Argentinian Aardvark Acupuncture Analysis and Associates, more commonly known as A7. It’s based somewhere between Edinburgh and Carlisle. Frankly I suspect most of its business currently comes from the aardvark acupuncture side, which is very big in South America – outpacing the much lamer Lima llama loom industry.

Like thousands of 18-year-olds across the nation, I waited in front of TV cameras and local newspaper reporters for the letter telling me my grades. But they just took pictures of screeching girls called Jocasta, as usual. I screeched alone, hugging myself and sending myself excited texts. I didn’t tweet myself; what do you think I am, some kind of nerd?

One whiskered-and-whiskeyed old hack belched me a question: did I think A-levels were getting easier? I threw the question back at him: did he think A-levels were easier? Yes, he said. Congratulations, I replied: have an A-level. 98% of students can’t be wrong. Apparently.

It’s no surprise I received an A in Twitter Studies: one in four entries gets an A. And grades are up for the 27th glorious year in a row! That proves students are getting more intelligent. Don’t listen to the doom-mongers and wishy-washy so-called “scientists” at Durham University’s Centre for Evaluation and Monitoring who have spent the last twenty years looking at this question and have so-called “data” to indicate that D-grade students of the late 80s would now get Bs, and probably As in Maths subjects. Don’t let this so-called “evidence” stop you piling more and more students into universities.

Next year some clever clogs will do especially well and get one of the new-fangled A* grades, and no doubt more students will get As overall. And in a few years I imagine there’ll be an A**, then an A***, and then everyone will receive an A for every exam and Her Majesty’s Media will be overjoyed at how successful our students are. Meanwhile the universities will cross out A*** and write A, cross out A** and write B, cross out A* and write C, and cross out A and write D, and we can start all over again.

1 Comment

Filed under Random

The legitimacy of Speaker Beckett

Today, MPs vote (in a secret ballot for the first time, under new procedures) to elect a new Speaker for the House of Commons.

Of course, since the expenses scandal that brought down Michael Martin and effectively ended the political careers of several of them, MPs are taking excruciating care over this new election. There is complete transparency, there are no hidden agendas, and everything is happening in a new-broom, bipartisan spirit.

Not a bit of it.

Avaragado’s first rule of politics:

Wherever three or more people are gathered together, there shall be politics

This is natural, since we’re social animals. Gossip, bitching and backstabbing have been going on since we had the ability to communicate.

Avaragado’s second rule of politics:

Wherever three or more politicians are gathered together, there shall be corruption

The early favourite was Conservative John Bercow, seen as a reformist – ie, what the public seems to want, and what the Commons needs. But he also has some wacky ideas like taking Parliament round the country, which reminds me of that recurring sketch from The Day Today of the Bureau de Change on the back of a lorry.

But now the smart money’s on Margaret Beckett. Why? Because it seems the government whips are “encouraging” Labour MPs to back her. She’s the “Stop Bercow” candidate. So much for the new broom, for transparency, for reform.

Would she be a good Speaker? Possibly. But that won’t be why she’ll be elected, if she gets the job.

And I’m sure I’m not the only one who feels that someone who until only last month was Housing Minister (and who occasionally sat in Cabinet) – and, let’s not forget, who was Tony Blair’s last Foreign Secretary, and who stood in as Labour leader after John Smith’s death – is not going to be seen as entirely independent.

One of the reasons many MPs didn’t like Michael Martin was the perception that he favoured Labour. How is that going to be fixed by the Labour whips installing a recently ex-minister as his replacement? There is always going to be a whiff of distrust.

The Westminster bubble indeed. Entirely clueless, the lot of them.

Leave a comment

Filed under Random

Spot the difference

I am a great fan of the BBC, as you know. Institution, example to the world, etc, etc.

The fuss over the Ross/Brand broadcast was, of course, manufactured outrage by the Daily Mail that spurred Middle England into action (despite very few complaints at the time of broadcast). And the fallout from that affair is still settling – not least within the BBC, which has (according to some insiders) become noticeably more risk-averse in the last few months.

But the BBC does itself no favours with Chris Moyles. He can, it seems, broadcast blatantly homophobic material mocking Will Young, be found in breach of the broadcasting code by Ofcom for doing so (despite just eight complaints), but continue broadcasting without any apology. Oh, he has been “spoken to” by the Radio 1 controller, but that’s it. (And it’s not his first offence either.)

Contrast with Clare Balding. Interviewing the winning Grand National jockey she made a joke about his poor teeth. I saw it and laughed, as did those around her. The jockey was mildly embarrassed but it was all in good humour. And yet the BBC received nearly 1500 complaints and Balding has now had to apologise.

I just do not understand this.

1 Comment

Filed under Random

Vast media conspiracy, etc

So the US military shot down the satellite (apparently). But why is nobody talking about the real reason?

It’s not because of worries about contamination, that’s just standard military cover story #27.

It’s not a cover for testing anti-satellite technology either, as the Russians are claiming.

It’s because the satellite contains/contained tech the US military doesn’t want China or Russia to recover from any bits that make it through the atmosphere.

I can understand why the US aren’t giving the real reason (Muslims under the beds etc), and why Russia and China are making fake claims (better to express outrage at an administration the world generally despises than to stoke more fears about an arms race).

But why aren’t the media talking about it? I know they generally just reflect the spin the various parties put on any story, but surely someone understands what’s actually going on. I’ve seen a couple of Newsnight stories on this and was expecting a full-on Paxman blast at various sweaty officials, but nothing.

1 Comment

Filed under Random

That 79p British Gas bill

I finally got round to trying to pay the 79p British Gas bill online. Note the word “trying”.

The site let me fill in all my credit card details and helpfully put 79p in the payment field automatically. And then, when I submitted the form, it said:

There are problems in the form you submitted
The minimum amount that can be paid is £2.00.

Then why did you ask me to pay? Don’t make me fill in all my credit card details first! Just put up a page telling me not to worry about such a low amount, it’ll appear on the next bill. That would make me feel good about your site. And saying that “there are problems in the form you submitted” makes it sound like it was all my fault!
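For the record, the fix is a one-line check before the card form is ever rendered. A sketch – the function and page names here are my invention, not British Gas’s:

```python
MIN_PAYMENT_PENCE = 200  # the £2.00 minimum the site only mentions afterwards

def page_for_balance(balance_pence: int) -> str:
    """Pick which page to show *before* asking for card details."""
    if balance_pence < MIN_PAYMENT_PENCE:
        # "Don't worry about amounts this small; it'll go on your next bill."
        return "defer_notice"
    return "card_details_form"

print(page_for_balance(79))   # -> defer_notice: no card form, no scolding
```

One comparison, performed up front, and nobody types in sixteen digits for nothing.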

Interestingly, a couple of days ago I received an email from British Gas telling me that they’d launched a new web site. Shame they didn’t send it two weeks earlier. Unlike previous emails, apparently from the Brand Marketing Director, this one came from the Director of Customer Service, and from an (apparently) real, if generic, email address. Shame it was addressed “Dear Smith”.

1 Comment

Filed under Random

Free web rant with every bill

Older viewers may remember me remarking on the general rubbishness of the British Gas web site house.co.uk, which included picklists for credit card start and expiry dates allowing any year from 1900 to 2999.

They’ve just completely redesigned (and reimplemented, by the look of the URLs) their site. And what a grand job they’ve done (sarcasm).

I shall list the sins, in order of discovery, after the jump. If you don’t want to know the result, look away now…


1 Comment

Filed under Random

Market research versus usability testing

Yesterday’s post about the ORG report earned me two interesting comments from James Gilmour. They spurred me into more research, in particular the Cragg Ross Dawson report he referred to in the first comment. I decided another post on the subject was in order; normal nonsense will resume in due course.

I’ve found two reports by Cragg Ross Dawson; I believe James referred to the ballot paper design research report, but the later STV ballot paper report also makes interesting reading.

As he said, these might not have been “focus groups” in the sense of a bunch of people sitting round a table munching on biscuits, but neither were they proper usability tests.

Usability testing is about asking people to perform tasks in as close as possible to a realistic scenario (no prompting, no helping, no detailed instructions in advance) and observing what they do, and their success (or failure). It gives you objective results rather than the subjective feelings of the Cragg Ross Dawson reports.

Cragg Ross Dawson aren’t usability professionals; they’re a market research company. There’s a huge difference.

Some of my problems with their approach:

  • The ‘Topic Guide’ in the first report suggests that test users, after trying out a ballot, were asked questions such as “is it clear to them who and what they were voting for?” and “how clearly does it explain how to use the ballot paper?”. A true usability test observes the test users to answer those questions – watch, don’t ask. People are very bad at explaining this kind of thing, often to the point of self-delusion. They’ll say things were easy when observation showed they had significant problems. When asked why they did something, they’ll invent entirely spurious explanations (not maliciously, but because they were asked and a plausible answer just pops into their head).
  • It appears in this case that every test user tried every design of ballot, and then explained which one they preferred and why. This was a bad idea: from the second ballot, they were more familiar with the process and thus biased. To get a fair view of which ballot design was easiest to use, each user should have tried only one design; the success rates of each design could be compared after the test was complete. (And then the best design could be modified and the test performed again with new test users to verify that the new design was better and not worse.)
  • Look at section C, ‘Outcome’. In a true usability test this section would summarise the success rates for each design of ballot. It doesn’t; it just reports ‘preferences’ for one design over another. It’s full of phrases like ‘regarded as’, ‘felt that’, ‘thought that’. Which design was most successful – helped most people vote for the candidate(s) they wanted to vote for? It doesn’t say!
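To make the contrast concrete, the analysis of a proper between-subjects test boils down to comparing per-design success rates. A sketch with invented numbers, since the reports don’t publish per-design success counts:

```python
from collections import Counter

# Each trial: (design tested, did the voter's intended vote count?)
# One design per participant, so the results are unbiased by practice.
# These figures are made up for illustration.
trials = [
    ("design_a", True), ("design_a", True), ("design_a", False),
    ("design_b", True), ("design_b", True), ("design_b", True),
]

def success_rates(trials):
    attempts, successes = Counter(), Counter()
    for design, ok in trials:
        attempts[design] += 1
        if ok:
            successes[design] += 1
    return {d: successes[d] / attempts[d] for d in attempts}

print(success_rates(trials))  # design_b beats design_a on observed success
```

No “felt that”, no “regarded as”: just which design actually helped the most people cast the vote they intended.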

I did dig out some actual usability data from the reports:

  • First report, section 2.1, “Initial impressions”: “on first sight of the ballot papers most voters looked initially at the list of parties and candidates; on the basis of observation by the moderators, few seemed to start at the top and read the instructions”. And that’s exactly what I would expect to see. It’s been proven time and time again: people don’t read instructions (there are always exceptions, but they are exceptions).
  • Second report, Chapter 3: “despite the view that the designs were straightforward, some respondents made mistakes; 13 out of 100 ballot papers were unintentionally spoiled”. Followed by “it is worth noting that of the 13 respondents who spoilt their initial ballot papers, 9 realised their mistakes and corrected subsequent papers – many admitted they had voted before reading instructions carefully”.

That second point is damning. People said that the designs were straightforward, but the reality was different. That’s why true usability tests are so important. The fact that people corrected subsequent papers just confirms my point above: from the second ballot design, they’re biased. Not to mention that in a real election they don’t get a second chance to vote.

The goal of a ballot paper design is to allow voters to vote for the candidate(s) of their choice, and for that vote to be counted, as efficiently as possible. This is easy to test objectively, and to retest with improved designs, until there is sufficient confidence in the results. This wasn’t done. Market research isn’t usability testing.

In the actual election, we know that voters made marks on the ballot paper that were mostly, but not always, valid. How many people successfully voted for the candidate(s) of their choice? We have no idea.

Leave a comment

Filed under Random

On the unsolving of problems

Oh, it’s supposed to be so easy. And it is, until it doesn’t work properly. At which point it becomes a Living Hell.

Yes, I know that’s not narrowing it down in the slightest. Let me elaborate.

I decided it was about time to migrate my videos to the bandwagon that is YouTube. Their Internet tubes are gallons more voluminous than mine, and everyone’s got a Flash player these days. Plus people can rate and comment and make video responses and make it a favourite and embed it and do all those wondrous things that don’t actually make anyone more productive or useful but ooh, isn’t it exciting! and please blog me and make me famous and so on and so forth.

First problem: you’re limited to 10 minutes and 100 MB per video upload. Hmm… OK. I don’t have many videos longer than that; I can split those. I’m sure by tickling the codecs I can limbo my way under the file size restriction. (By “I’m sure”, I mean “I think/hope”.)

Wearing my bestest geek hat I perform a test run. I’m not going to start with one of the manky WMVs currently available under the Avaragado Pictures banner, though – I’m returning to the source material, or as close as I can get without installing video editing software from the dark ages or resucking gigs of raw footage from DV tape. The test run is with the Alpe D’Huez 2001 trailer: 2 min 30, for which I have a DVD-quality MPEG.

YouTube recommends you upload at 320×240, MPEG4 (DivX or Xvid video, MP3 audio). Note that this has already confused 95% of the world, for whom that reads “wah wah wah wah MP3 wah”. But anyway.

I am equipped with: sundry codecs (Xvid, DivX, etc); VirtualDub-MPEG2 (for transcoding the DVD MPEGs into AVIs with the codecs of your choice); and Adobe Premiere Pro (cos I is a professional amateur, innit).

Right. Raw MPEG into VirtualDub, deinterlaced, resized, DivX, MP3, save. OK, it’s an AVI not YouTube’s recommended MPEG4, but let’s try it. I mean, what could possibly go wrong?
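(An aside for anyone retracing my steps: a modern ffmpeg can do that whole VirtualDub pipeline in one command. A sketch only, assuming a build with the libxvid and libmp3lame encoders – the flag names are today’s ffmpeg syntax, not the tools I was actually using:)

```shell
# Deinterlace, resize to YouTube's recommended 320x240, and encode
# Xvid video plus MP3 audio into an AVI container.
ffmpeg -i trailer.mpg \
       -vf "yadif,scale=320:240" \
       -c:v libxvid -qscale:v 4 \
       -c:a libmp3lame -b:a 128k \
       trailer.avi
```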

I upload via the crummy Flash-based uploader (it could really do with a makeover, but it works). Meh, metadata stupidity: I must enter a description and at least three tags, and it doesn’t grok multi-word tags. It then processes the uploaded video, slicing and dicing into the correct format for playback, giving me no indication of progress but at least letting me do other things. Some time later…

I play it. The sound and video are out of sync. Not by much, but by enough: the video leads the audio by five or six frames. Well, this won’t do. The file I uploaded is fine.

Casting the runes, Google leads me to a page that suggests manually shifting the audio track by my best guess and re-uploading. You’ve got to be kidding me. Nope, not doing that. Other sites tell me that this is a common problem, and suggest various combinations of codecs and container formats. (“It worked for me!”)

This is nonsense, right? I’ve got free software here, there and in the other room that will happily transcode anything into anything inside any container format and preserve synchronisation between the audio and video. I’ve got a Quicktime player, a Real player and a Windows Media player that’ll stream live feeds over the Internet to my desktop and preserve synchronisation.

Why can’t YouTube do it? Is it the player? Their back-end? Their conversion process? Flash itself?

I know that most YouTube videos are in sync. But mine isn’t. Anyway, this shouldn’t be guesswork, or semi-random. And given the size of the videos I’ll be uploading, trial and error is simply not practical.

For this test run, I try various combinations. The best I find – but still not perfectly in sync – is an AVI with Xvid and MP3. Bah.

I proceed to the smaller collection of videos – mostly short clips, some with a bit of editing. For these, the synchronisation doesn’t matter much and it’s often hard to notice when a video’s out of sync anyway. Some of them I transcode as AVI/Xvid/MP3. In some cases I’ve got the original source videos handy – from my digital camera at the time – and the Premiere project file. So I generate bog-standard MPEG2 files from these, and they end up perfectly synchronised in YouTube. In one case I just upload the WMV I have handy, as it would take too much effort to recreate from source; again, this is acceptable quality.

A mental model begins to form. Maybe MPEG2 is the way to go. Tried, tested, etc.

On to the first “proper” video: one of the Ireland ones. With music and that (and therefore sensitive to synchronisation issues). AVI/Xvid/MP3: 45 MB upload, out of sync. No good.

I find and download a program called SUPER, with the UI sensibilities of a deformed cabbage (you pick something from a menu and the window moves around the screen) but with the ability to generate MPEG container files and much else besides, unlike VirtualDub. It’s a hideous front-end to ffmpeg, MEncoder, etc, but at least it works.

I spend the best part of a day trying different combinations. MPEG2. MOV/H.264/AAC. WMV/MPEG4-v2/MP3. All no less than 30 MB. I even transcode to AVI/DV/WAV (880 MB), load into Premiere (it loves that combination) and get it to spit out MPEG2 (as the MPEG2 files I built from Premiere for the shorter clips worked fine). They’re all out of sync. (I say all: at the time of writing, one upload is still processing. It has been for several hours now; I’ve given up on it.)

All out of sync, that is, except one. One magical upload works. Which one? The crummy low-quality WMV sitting on my web site.

My mental model now adjusts. Maybe I just need to lower the quality. Maybe, maybe, maybe.

Maybe when I wake up in the morning the sky will be blue, the birds will be a-twitter and the kid currently walloping his football noisily against a wire fence every five seconds will have been given a clip round the ear and told to pack it in.

Maybe I’ll try Google Video or somewhere else.

Aha. Hang on. That WMV/MPEG4-v2/MP3 combo that’s been processing for about four hours has finally finished. And guess what? It’s in sync.

Mental model #3: WMV? WMV?

MENTAL MODEL DOES NOT COMPUTE. [Emits smoke, sparks, explodes despite containing no explosives]

Leave a comment

Filed under Random