I’m often asked: “What were computers like when you were young, grandad?” The answer is: big and clunky, even the ones that weren’t big and clunky. The past always looks like that. Apollo got to the moon with little more than a heavily disguised abacus, intravenous nicotine and a mountain of nerds. But at the time, it was the future.
That was forty years ago. Thirty years ago, the future was the home computer. I remember my parents asking what we would use it for. I pointed to the moustachioed twonk in a glossy magazine advert, inexplicably engrossed in a green-on-black low-res bar chart. Finance! I implausibly claimed. Or recipes, or something. I’m not sure they bought either argument, but eventually they bought the computer. Christmas 1980: the Tandy TRS-80.
That means I’ve been coding, on and off, amateurishly and professionally, for fun and profit, for thirty years. I’ve seen more religious wars than the Vatican correspondent of the Jewish Chronicle. I have a list this long of creators of programming languages, libraries and other technological ephemera who I would, gladly and vigorously, throttle.
I started programming with TRS-80 BASIC and a very much mistaken belief that I couldn’t reuse variable names. Multi-dimensional arrays took a while to grasp, before I stopped worrying about real-world metaphors of rows of columns on pages inside folders inside shelves inside filing cabinets inside rooms on floors inside buildings in streets in cities in states in countries in continents on planets in solar systems in galaxies in universes and realised that if I needed an 18-dimensional array then I had bigger problems.
The first “proper program” I remember writing was called Data Draw. It used what might be the world’s least efficient way of encoding a graphic: arrays of ones and zeroes (TRS-80 display: 128×48 pixels, one-bit colour).
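For the under-forties, it went roughly like this, reconstructed in Level II BASIC – the eight-by-three “graphic” here is invented, but the technique (one array element per pixel) is the genuine, gloriously wasteful article:

```
10 REM Data Draw, roughly: one array element per pixel
20 DIM A(2,7)
30 FOR Y = 0 TO 2 : FOR X = 0 TO 7 : READ A(Y,X) : NEXT X : NEXT Y
40 FOR Y = 0 TO 2 : FOR X = 0 TO 7
50 IF A(Y,X) = 1 THEN SET(X,Y)
60 NEXT X : NEXT Y
70 DATA 0,1,1,1,1,1,1,0
80 DATA 1,0,0,0,0,0,0,1
90 DATA 0,1,1,1,1,1,1,0
```

Twenty-four numbers to describe twenty-four pixels. Picasso it wasn’t.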
My brother wrote a much more interesting Space Invaders variant: it had a single invader, which crawled along the top of the screen while you crawled along the bottom firing lasers (slow-moving pixels) or photon torpedoes (slower-moving pixels) at it. We enhanced it regularly with ever more ingenious and slow-to-render alien designs.
It was a roaring hit. With us.
In those days, magazines like Personal Computer World printed programs contributed by readers. We didn’t submit any of our own masterpieces, but we occasionally typed in ones that looked interesting. They rarely worked – far too many vectors for bug transmission between the mind of the author and our own fingers.
Those we did get working, or bought from the Tandy store in Cheshunt, or borrowed, live strong in the memory. Various Scott Adams text adventures that I’ve written about before, full of plot and ingenuity in no more than about twenty locations. The trading game Taipan! by the excitingly named Art Canfil, with its moneylender Wu and sea battles regularly sending us to Davy Jones’ locker.
And then there was Dancing Demon. This was a genre-defining choreograph-em-up in which you gave a highly green creature a set of dance moves encoded in a long string of characters and watched him twirl and skip and tap along to tinny tunes. Yeah, that was pretty much it. And by the same author, Android Nim: a simple game, but full of character.
The most interesting part: although these games had sound, the TRS-80 had no sound chip. But it saved programs in audio form to standard cassette tape. If you threw correctly shaped bits speedily enough at the tape interface, and gave the user sufficiently detailed instructions on setting up their tape recorder, games could make sounds.
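The heart of the trick, as best I can reconstruct it, was the cassette output latch on port 255: flip it back and forth fast enough and the recorder’s circuitry became a crude loudspeaker. BASIC could flip it too – just far too slowly for anything musical, which is why the games did it in machine code. A sketch, assuming a Model I, where 1 and 2 are the two non-zero output levels:

```
10 REM Buzz the cassette output by toggling the latch on port 255
20 REM (far too slow in BASIC for real music - hence machine code)
30 FOR N = 1 TO 500
40 OUT 255,1
50 OUT 255,2
60 NEXT N
```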
And if you could do that, all it took was a sprinkling of magic hacker dust and you could synthesise speech. And they did: one game we played, Robot Attack, would say phrases like “Game over, player two. Great score, player one.”
Not bad for 16K RAM.
I remember the same kind of astonishment a few years later when I first heard the single-channel audio ZX Spectrum play two-channel music. The game was Zombie Zombie, the isometric 3D follow-up to the isometric 3D Ant Attack. The two-channel effect was achieved by playing short bursts of each note in rapid succession and hoping the brain would fuzz them together. I can still hear the music in my head.
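The recipe, if you fancy a slow-motion demonstration in Spectrum BASIC (pitches and slice lengths invented; the real games did this in machine code with far finer slicing):

```
10 REM Two "channels" from one beeper: alternate notes rapidly
20 FOR N = 1 TO 40
30 BEEP .02,0 : REM middle C
40 BEEP .02,7 : REM the G above it
50 NEXT N
```

Squint with your ears and it’s a chord.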
Many of the early TRS-80 games were written in BASIC and it was trivial to see the source code. I learned little coding technique but got a very strong taste for hacking. Even games written in the mysterious and arcane “machine code” weren’t immune. Later I’d progress to disassembly and the hunt for infinite lives and other treats, PEEKing and POKEing my way through my teens almost as if I were a real boy.
In fact I wrote my first machine code program for the TRS-80. It looped through every non-space character on screen – one ASCII-encoded byte per character – decreasing the value by one. Then it repeated, until the screen was entirely spaces. It was a neat clear-screen effect – possible in BASIC, but BASIC was too slow for it even at the paltry 64×16 character resolution and its 1K of video RAM.
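In BASIC the whole effect is a few lines – which is precisely the problem. A sketch, assuming a Model I with video RAM at 15360–16383 and ASCII 32 for space:

```
10 REM Fade every character on screen down to a space
20 D = 0
30 FOR I = 15360 TO 16383
40 C = PEEK(I)
50 IF C > 32 THEN POKE I,C-1 : D = 1
60 NEXT I
70 IF D = 1 THEN 20
```

Watching it grind through those sixteen lines of text made the case for machine code all by itself.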
I learned several lessons writing that first machine code. Not least: save your work before you run it, you idiot. I had no assembler. I wrote on paper, hand-assembled using a Z80 reference guide at the back of a book, and used a BASIC program to POKE the values in sequence from an array. “Error prone” hardly seems to cover it. I eventually got it working. I even got the code to exit properly. I think.
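The loader was the easy half. Something in this vein – the address and the bytes are placeholders (201 is Z80’s RET, the world’s shortest machine code program), and on the Model I you aimed the USR vector at your code before calling it:

```
10 REM POKE hand-assembled Z80 bytes into RAM, then call them
20 REM (assumes MEMORY SIZE was set to protect RAM from 32000 up)
30 A = 32000
40 READ B : IF B < 0 THEN 70
50 POKE A,B : A = A + 1
60 GOTO 40
70 POKE 16526,0 : POKE 16527,125 : REM USR vector -> 32000
80 X = USR(0)
90 DATA 201,-1 : REM 201 = RET; the real bytes went here
```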
I know I used to be able to recite Z80 opcodes from memory, in hex and in decimal. What a saddo. Er, I mean, real programmer.
The early 80s home computing landscape was the equivalent of turn-of-the-20th-century film-making. Thousands of unpleasantly aromatic amateurs making things up as they went along, trying everything, making the hardware do things its inventors hadn’t thought possible. Audio on a system with no audio. Two-channel sound on a system with only one channel. The BBC Micro had several distinct video modes with different colour depths and pixel resolutions; Bell and Braben’s classic Elite changed the video mode halfway down the screen.
Thirty years on from the launch of the Sinclair ZX81 on 5 March 1981, we’ve reached the 1930s of film. The talkies have arrived in the form of the internet as a platform for applications and data (some people said the talkies were a short-lived fad, incidentally). Computer games now, like movies then, are becoming a recognised art form – despite much huffing and puffing by those who should know better.
On the desktop, Charlie Chaplin no longer makes his programs single-handed. But the new frontier of mobile and tablet apps still has that back-bedroom feel to it. It still seems possible to make bags of cash developing on your own, at least for a short time. But it’s not a repeat of the early 80s; not another sequel. The difference, the key to success with apps on the new devices, is one simple realisation: it’s not a computer. Numerous geeks and geek-hags groan that their lives depend on unfettered access to a filesystem and the ability to place a thousand and one customisable pointless gizmos on their desktop-equivalents. But real people just want to Do Stuff. Computers are the means, not the ends.
In another thirty years, modulo the usual underwater caveats, the question “what were computers like when you were young?” will likely sound as odd to people’s ears as “what were motors like when you were young?” does to ours. In the early 20th century people could buy motors and add attachments to make them useful: the software to their motor hardware. Then motors became cheap and ubiquitous and disappeared inside the machines.
This transition has now started with computers. Big and clunky; big and clunky, but smaller; invisible.
Now take this Werther’s Original and clear off, I need a nap.
http://www2.b3ta.com/heyhey16k/ Ah, I remember it all so well.
Most programming back then was about constraints. On seeing my first spreadsheet, for example, I wondered how I would implement it. The hard part wasn’t the expression evaluation, but fitting the functionality into 16K. Almost all implementation work was about taking what you wanted to do, and your data, splitting both into smaller pieces, operating on them piece by piece, swapping things in and out, and constantly worrying about efficiency.
For a recent project I directed the team: “assume we have infinite CPU, memory, storage and network”. That is how we did things, and it worked out great. The bottlenecks, inefficiencies, timing cycles and limitations were all about the people involved, not the machines, as they once were.
The best thing about my space invader program was that everything stopped while the laser beam crept up the screen.