Code
====

"Computer Science is no more about computers than astronomy is about telescopes." - E.W. Dijkstra

This is a play that will seem, certainly at first, to be about computers and programming. That might seem like a boring or complicated concept to write a play about. But the fact that computers and programming are not boring is part of what this play is about, because what they're actually about is ideas, which is what this play is really about.

Characters
----------

MICHAEL:
- Flights of grandeur
- Elitist, but not exclusionist
- Computer literate
- Info-supremacist
- Dating TANYA

TANYA:
- Sardonic
- Highly skeptical
- Knowledgeable
- Computer literate
- Not info-supremacist
- Dating MICHAEL

SARAH:
- Not interested in computers
- Critical
- Highly intelligent
- Computer literate
- Human-supremacist
- Dating STEVE

STEVE:
- Well-read
- Supercilious and exclusive
- Geeky
- Computer literate
- Not info-supremacist
- Dating SARAH

Setting
-------

A computer science laboratory. Modern-looking computers are arranged around the sides of the room with a large number of screens facing the audience. The content of these screens is identical: a continuous series of photographs and images that move across, along and down the screen, fading into the foreground and background at random -- a screen saver. As the play progresses, the images appearing in the sequence tend to reflect the content of the conversation. There is also a cluster of comfortable-looking sofas around a low table in the middle of the room.

ACT 1, Scene 1:
---------------

[ STEVE is sitting on one of the sofas with his feet up on the table. He has a laptop perched on his legs and is reading the screen. His mobile phone is sitting next to him on the sofa, along with a small backpack. It rings, with a comedically bad ringtone of your choice... ]

STEVE: Hiya. No, it only lets in compsci students, hang on.

[ He puts the laptop aside and runs off stage, reappearing a few moments later with SARAH, who is carrying another backpack. She is wearing heavy winter clothes, and is brushing snow off her jacket. ]

SARAH: I don't see why it should be restricted. Humanities doesn't even *have* keycard entry.

STEVE: Well, the humanities building doesn't have tens of thousands of pounds of easily-removed equipment lying around either.

SARAH: And it's bloody freezing outside, too. I can't believe you made me walk all the way over here.

STEVE: It'll be much nicer to spend the night here than the union. It must be rammed.

SARAH: Absolutely. Everybody had the same bright idea: the uni has generators! It'll still be warm there! The bars are all packed; they ran out of beer at seven, apparently. Mary says there's a party in one of the humanities lecture theatres though; everyone's brought wine, even the professors, and they're watching videos. We should go there.

STEVE: Ugh! Don't fancy that. Bloody art history snobs pretending like they're doing a proper degree. Plus, it's bloody freezing outside and humanities is half a mile from here. [ He has sat down and put the laptop back on his lap. ]

SARAH: You just made me walk half a mile from the union! You just don't want to go anywhere where there's no network access.

STEVE: [ Looks up and smiles at her ] I can neither confirm nor deny that vicious allegation.

SARAH: [ She sits down and smiles back ] I knew you wouldn't want to go anyway. [ She hunts around in her bag ] So I bought some wine and cheese at the shop on my way over. It's plonk, but don't complain, it was the last bottle. Oh, and the cheese is rock solid, we'll have to wait until it warms up a bit. Where's your knife?
[ STEVE reaches one-handed into his bag and pulls out a Swiss Army knife, handing it to SARAH as she pulls out a bottle of wine and a packet of cheese ]

STEVE: Here you go.

SARAH: Oh, damn. We've nothing to drink out of.

STEVE: Well, we'd not be proper students if we couldn't drink straight from the bottle.

SARAH: Fair enough. [ She begins opening the wine with the corkscrew attachment, gestures at his screen ] Is that the news? How big is the power cut?

STEVE: All over the Midlands, apparently. Only bits of Brum are out; the city centre is still okay. It's the biggest outage in a decade.

SARAH: A girl in the queue at the shop was saying there's been power cuts all over Europe today? Is that true?

STEVE: Yeah, it's really odd. Not one big power cut, but lots of little ones. More than twenty. France, Germany, Denmark, Italy... Italy's one is huge, apparently... and it's not just Europe. There's two in Australia, dozens across Canada, and loads across the US... Chicago, Boston and New York all have quite big ones, nearly a million people each.

SARAH: It can't all be a coincidence. Is it ridiculously cold, or windy, or something? Sunspots? Global warming? El Niño? Ozone layer? Anything we can plausibly attribute to being the fault of under-investment on the part of Margaret Thatcher?

STEVE: Not so far. They've ruled out weather -- it's quite nice in Australia at the moment -- and sunspots, since there aren't any right now. Lady Thatcher is not yet in the clear, but surely you'd prefer blaming Blair anyway?

SARAH: Whoever I can get, really. I can conjure up my self-righteous pseudo-Marxist justifications after the fact. My half-cooked dinner is sitting frozen in the oven at home; I feel the need to blame *somebody*.

STEVE: Maybe it's nobody's fault... they say a few of the affected power stations have blamed a computer failure so far.

SARAH: There's still going to be *somebody* at fault. Somebody had to design the computer that failed.

STEVE: Not necessarily. It could be the unplanned interaction of two well-designed computer systems.

SARAH: Well, that just shifts the blame, it doesn't remove it -- somebody caused those two systems to interact without fully planning the consequences.

STEVE: But maybe there *was* no way to accurately predict the consequences.

SARAH: Then their failure was to allow the systems to interact at all, see?

STEVE: But that's not how it... [ He is interrupted by a text message alert. Again, comedy noise at your discretion. ]

SARAH: Oh, that bloody sound! Put it on silent for gods' sakes!

STEVE: [ Reading the phone ] Tanya says she and Michael are on their way from the union, too.

SARAH: Ooh, text her back and tell them to nick some glasses from the bar for the wine!

STEVE: Good call. [ He fiddles with his phone ]

SARAH: What do you mean, that's not how it works?

STEVE: You can't blame somebody for the errors that occur when you introduce two systems to each other. It's not predictable -- or rather, it's not practical to predict all the possible consequences. In order to work out all the possible things that could happen, you'd need to create a system to do that, which could itself contain errors because it's in effect interacting with the other two, so you'd have to design another machine to test that the three-way interaction was working, and so on ad infinitum with ever-more complex machines. At some point you have to say "there are almost certainly some errors we haven't found, but this system will work correctly ninety-nine point nine-nine per cent of the time."
SARAH: Which means it fucks up zero point zero-one per cent of the time.

STEVE: Right, but in the lifetime of a power plant, say thirty years, even if your reliability is much better than that -- [ He glances at the laptop, begins typing into a calculator program ] say four nines after the point, so only zero point zero-zero-zero-one per cent failure --

SARAH: ...you're not actually going to calculate this, are you...

STEVE: ...your system will still fuck up for that times thirty times three hundred and sixty-five... times twenty-four... times sixty... it'll still fuck up for 15 minutes in every 30 years. And you don't know *when* that 15 minutes will be -- it could happen 5 minutes into the life of the plant, or twenty years in. And it doesn't have to happen in a block. It could fuck up for 1 minute 15 times.

SARAH: But then how do you explain a power cut that lasts 5 hours? Surely no system is designed to fuck up for that length of time?

STEVE: That's because the system that fucked up for 1 minute was connected to three other systems, and when it falls over for a minute the other three systems will notice. So in a system like an electricity grid, the system falling over might cause a power surge. These other systems will notice the power surge and -- because they're programmed to avoid exploding -- they'll try to suppress the surge by shutting down a generator or something. But you can't shut down a generator quickly or easily, so instead they might try shifting the load elsewhere, causing a *new* surge, and further shutdowns, and further shifts, and suddenly the whole thing spirals out of control and a huge chunk of the grid has shut down -- and only because each bit of the grid is programmed to do something eminently sensible, in this case avoiding damaging power surges. Meanwhile, the piece that fell over for a minute has come back of its own accord -- but it doesn't matter, because the whole system is already down and something has caught fire somewhere.

SARAH: But clearly that's stupid. The people who design grid systems will have thought of that possibility and designed it away.

STEVE: Oh, I agree. But it'll be that *kind* of a problem: some kind of network effect with unpredictable consequences.

SARAH: And that's just unavoidable? Stuff will always fuck up in new and excitingly disastrous ways without warning?

STEVE: Pretty much.

SARAH: Okay, so what's happening with these power cuts then? Why are there dozens of little ones, instead of big ones? I very much doubt our power grid is connected to the Ukraine's.

STEVE: That's true. No, I have no idea why all these would happen.

SARAH: Why do you always make stuff up like that?

STEVE: Make what stuff up?

SARAH: That scenario, with the power cuts and the chain of reactions. You don't know how power grids work. You just made that up.

STEVE: Yeah, well, I was extrapolating from what I know about power grids.

SARAH: Which is *nothing*. I bet if somebody who designed power grids was listening to you then, they'd have been seething with frustration and anger at the way you were oversimplifying their incredibly complex field of endeavour with some made-up bullshit. And if I were a stupid person, or somebody who doesn't know you well, then I might not have called you on it. You didn't present it as extrapolation, you presented it as fact.

STEVE: Well, it was simpler to do that.
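[ Author's note: for the production's pedants, the sum STEVE taps into his calculator program runs as below -- a minimal Python sketch using only the figures from the dialogue, not real grid statistics. ]

```python
# STEVE's back-of-envelope sum: how much downtime a system with
# "four nines after the point" reliability accumulates over the
# thirty-year life of a power plant.

failure_rate = 0.0001 / 100          # 0.0001 per cent, as a fraction
minutes_in_30_years = 30 * 365 * 24 * 60

downtime = failure_rate * minutes_in_30_years
print(f"Expected downtime: {downtime:.1f} minutes in 30 years")
# -> Expected downtime: 15.8 minutes in 30 years
```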
SARAH: But *dishonest*.

STEVE: Yeah, but if I'd surrounded my explanation with a whole bunch of caveats about how it's speculation, and about how it might not work that way, and how the reality is going to be far more complicated than my explanation, then you might have lost the message. It was a very simple example, it wasn't the focus of the argument. The thrust of what I was saying was about network effects, not the specific workings of the British power grid.

SARAH: So while the premise of your argument was contrived, perhaps even ridiculous, you hold that the conclusions you can draw from that beginning are still valid.

STEVE: And why not? If you found anything implausible or wrong, feel free to call me on it and we can go round again. It's the kind of problem, the *shape* of the idea, that's important, not the specifics.

SARAH: But how on earth can that be so? The argument is won or lost on the specifics! If I say you're six feet tall and you claim to be seven, then only a measuring tape can settle the argument.

STEVE: That's if the argument is specifically about my height. But what if the argument is whether I am taller than you or not? For the sake of argument I'll say that I'm six feet tall and you're five foot five, but it doesn't matter whether or not those numbers are right, because I can eat soup off your head. My conclusion -- that I am taller than you -- is therefore demonstrably true without needing to know exactly how tall either of us is. The heights are merely variables, but the algorithm is sound.

SARAH: Ugh, always relating everything to computers! An algorithm is like a recipe, right?

STEVE: Right. So, just like I was saying, if you're making a cake that requires two eggs and a cup of flour, it doesn't actually matter if you have one egg and half a cup of flour, or four eggs and two cups.

SARAH: But the algorithm has certain limits within which it needs to work. You'd find it really hard to make a cake with a teaspoon of egg and a pinch of flour.

STEVE: Oh, I don't know. You'd just need to specify your algorithm a little better: you'd need to adjust the size of the oven and the temperature and all the rest of those things to compensate for the size of your micro-cake. But you could still do it, I bet, right down to the molecular level, where you hit the point where if you take away any more atoms you won't have egg-stuff or flour-stuff, exactly. You have to take into account all the assumptions you've made and make sure you're compensating for them.

[ MICHAEL walks in ]

MICHAEL: Hello, you two!

SARAH: Hi Mike, we were just talking about molecule-sized micro-cakes.

STEVE: Nano-cakes, in fact. Possibly to serve as dessert for nano-bots on their lunch break.

MICHAEL: We used to have those at my school.

SARAH: Nanobots? Where did you go to school?

MICHAEL: No, nano-cakes. They served them after dinner. They were called cup-cakes on the menu but I've never seen a cup that small in my life; they were like thimble cakes. We used to speculate that there was a race of pixies who lived in the kitchens at our school and that the cook was just nicking their cakes.

STEVE: Where's Tanya?

MICHAEL: I left her hunting abandoned pint glasses to slip into her bag. Here, I've brought one. [ He's dug in his own bag and pulled out a glass ] It's clean; I washed it in the sinks in the loos.

SARAH: That doesn't mean it's clean, that just means it's dirty in a different way.
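[ Author's note: a playful Python sketch of STEVE's recipe-as-algorithm point above. The recipe and the oven-temperature adjustment are invented for illustration; the point is only that the procedure stays fixed while its variables scale, from wedding cake down to nano-cake. ]

```python
# The algorithm never changes; only its variables do.
BASE_RECIPE = {"eggs": 2, "flour_cups": 1.0, "oven_temp_c": 180}

def scaled_cake(scale):
    """Return the recipe scaled by `scale`; same procedure every time."""
    return {
        "eggs": BASE_RECIPE["eggs"] * scale,
        "flour_cups": BASE_RECIPE["flour_cups"] * scale,
        # Invented compensation, per the dialogue: tiny cakes get a
        # gentler oven so the assumptions still hold.
        "oven_temp_c": BASE_RECIPE["oven_temp_c"] * (0.5 + 0.5 * min(scale, 1)),
    }

print(scaled_cake(2))     # four eggs, two cups of flour
print(scaled_cake(1e-9))  # a nano-cake, down near the molecular limit
```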
MICHAEL: Oh, alcohol is a steriliser anyway. Did you hear about the power cuts?

STEVE: That they've happened? Yes, it was sort of difficult to miss the way the entire town was pitch-black when we were on our way over here.

MICHAEL: No, the cause of them!

STEVE: No, we were just speculating about that before we got onto the nano-cakes.

MICHAEL: It's a virus!

STEVE: No way.

MICHAEL: It is! It's called Shindig; the news is on Slashdot.

STEVE: How'd you find out?

MICHAEL: I was reading it on my phone in the union; that's why I ran ahead, I need a proper screen.

[ MICHAEL pulls out his own laptop and sits down on another of the sofas with his back to the door. ]

SARAH: Oh god, not you too.

MICHAEL: Oh, but this one's *really* exciting. Totally new.

STEVE: What's it do?

MICHAEL: Well, it spreads like *crazy*. It's all over the place. I've got it, I'll give you even odds you've got it too. It's not one virus, it's half a dozen all rolled into one -- it's a worm, but also a virus.

SARAH: What's the difference? The amount of damage they do?

STEVE: A worm can travel by itself from computer to computer; it doesn't need any human intervention. Worms can spread really fast. Viruses require you to transfer them yourself -- on disks and CDs before, by e-mail now. They're much slower.

MICHAEL: Technically a virus doesn't actually have to do any *damage* to be a virus. That's what this one is doing: it's infecting loads and loads of other machines -- it's worming into UNIX machines but spreading like a virus on Windows boxes -- but it's not actually doing anything on most of them, it's just using them to spread some more.

STEVE: A virus that can infect two different operating systems? That *is* novel. So what's knocking out the power stations everywhere?

MICHAEL: They don't know yet! There must be some system, or combination of systems, that they've all got in common.

SARAH: So power stations are being knocked off the power grid by an Internet virus?

STEVE: Wow.

SARAH: But that's ridiculous! Even I know that it would be ridiculous to connect your power plant to the Internet.

STEVE: Somebody's saying it might be energy trading...

SARAH: What?

MICHAEL: Hey, it could be! You know power plants sell their power to each other at peak times, when there's a shortfall?

SARAH: Yeah... but the US isn't trading power with the UK, so how would they both get the virus?

STEVE: Well, they might not share with each other, but they might be using the same type of software to trade the energy: power plants A and B are using the software in the US to trade power, and power plants C and D in the UK are doing the same.

MICHAEL: It doesn't matter that AB isn't talking to CD, because the virus is all over the 'net -- and they're probably using the 'net to communicate, because it's always cheaper to use an existing network.

SARAH: *Jesus*... but wouldn't they think of that? And put security up, firewalls, passwords, that kind of thing? And it would be crazy if the software that trades energy had the power to shut down the station!

STEVE: Sure... but like we were just saying: it's probably not exactly the problem we're describing, but it's that *shape* of problem. Power stations found it useful to let people talk to each other on the 'net, and from there it was just a matter of time until somebody found that zero point zero-one per cent problem.

MICHAEL: So cool.

SARAH: Cool! How can it be cool? It's winter all over the northern hemisphere! There are old people all over the place in these cuts; they'll be freezing to death. There's millions of people without power. People will *die* because of this virus, Michael! It's stupid!
[ TANYA enters, with coat and snow. ]

TANYA: It is fucking freezing out there. Brrr!

SARAH: Hello!

STEVE: Hey Tan.

MICHAEL: [ Tilts his head back to see TANYA coming in ] Hey hun. Spidey kiss!

[ TANYA approaches from behind him and leans over to kiss his back-tilted head ]

SARAH: Spidey-kiss?

TANYA: You know, in the Spider-Man movie, where they kiss while he's upside down? It's really incredibly sad.

SARAH: Awww, it's kinda sweet. How come we don't have a Spidey-kiss, Steve?

STEVE: Because you already have Superman, darling.

SARAH: Awww!

TANYA: Euhhh. Cheesy.

STEVE: Speaking of which, we have brie. And wine, if you've brought some more glasses.

TANYA: Oh, brill, I'm really hungry, I was dreading another night living off chocolate from the vending machines.

[ She digs glasses out of her bag, hands them across to SARAH, who begins pouring the wine ]

SARAH: Cut the cheese, Mike, it should be soft by now.

TANYA: Did Mike tell you guys about Shindig?

STEVE: Yeah! And Sarah thinks I'm evil 'cause I said it was cool.

TANYA: Well, it *is* sort of cool.

SARAH: Not you too! What's so cool about it? Why would you write something like that, release it, do all that damage? Why not just tell everybody that there's danger?

STEVE: Well, they wouldn't get much profit out of that, would they?

SARAH: They could threaten to release the virus.

TANYA: Yeah, I can see that working. [ She puts a pinky finger to her mouth, Dr. Evil style ] "Unless you give me one meel-ee-yoon dollars..." I will release this thing I can't prove exists, that'll do lots of damage that I won't specify?

MICHAEL: And if they *did* believe you, they'd arrest you faster than you could breathe. There's no profit in being a named virus writer. They stay anonymous.

STEVE: Plus, he might not even know what his virus is gonna do. It's all network effects, right? Unpredictable. It could slow down the plant a bit, it could make one plant explode, or it could knock out hundreds like this one is doing. He might not have had a way to test what happens -- remember what I said about the ever-greater machines to test the behaviour of the smaller ones?

MICHAEL: Have you been giving Sarah our lectures?

STEVE: We were discussing it earlier, in the context of what might be causing the power cuts.

SARAH: [ Pointedly ] Why does it have to be a he, anyway? Our virus-writer could just as easily be a girl.

TANYA: There's no denying the demographics in computer science, Sarah. They outnumber us 2 to 1 in CS, and the hardcore who fuck around and write viruses are almost exclusively fourteen-to-eighteen-year-old boys. It's some kind of power trip for them. I guarantee you the guy who wrote this was a clever little son of a bitch who had no idea what he was doing.

SARAH: But that just makes it worse! So all of this is by *accident*? This guy, this little kid, is going to kill hundreds of people all over the world, and he doesn't even *know*? He just pressed a bunch of buttons and killed all these people to *see what would happen*?

MICHAEL: Several dozen buttons, and probably tens of thousands of times before he got it right, but yes.

STEVE: And it's sort of a good thing, really. This was something that needed to happen.

SARAH: I don't believe this. Needed to happen?

STEVE: Well, because otherwise it'll just be another panic, won't it? Another rushed patch. The underlying problems will still be there. There have to be *consequences*, or people will never take it seriously.
SARAH: But why is it a good thing for there to be consequences? If this guy found this hole, why does he need to do anything about it at all? If he has to stay anonymous, what good does it do him?

TANYA: Like I said: male ego trip.

MICHAEL: But it's a good thing anyway, because if he doesn't do it now, then more and more people will use this software, with the hole still in it, and then when somebody *else* less scrupulous eventually re-discovers the problem, it'll be a whole lot worse.

TANYA: That's only true if the system was closed-source. If it was open-source, somebody might have fixed it anyway.

STEVE: You really think a power plant would run open-source software?

TANYA: Why not? It--

SARAH: Wait, hang on. This is it. Don't go running off into one of your big debates and leaving me out again. We're going to be here all night, right? You haven't got anything better to do. So you're all going to sit down and explain to me what's going on. I know I do psychology and you're all computer geeks, but that doesn't mean I'm an idiot, okay? I can understand this stuff. Explain.

TANYA: But explain what?

SARAH: Everything! I want to understand what it is that you're doing. What is it you're creating? What force is it that drives Steve to stay up until four in the morning typing what looks like an endless sequence of brackets and semi-colons into a screen, ruining his eyes and leaving me in a freezing cold bed? Why is it you create these things? What does it *feel* like? I'm a psychology student. Don't give me the names, don't try and teach me the language. Tell me the feelings, the emotions, the *urges*. I live in a world surrounded entirely by computers now. They generate my power and run my watch and my phone, and they decide whether or not to let me in from a snowstorm into a nice warm building. I'm totally at their mercy. But I don't *care* about them, I don't understand them. Tell me why it is you've created this world. Why did you build all these systems? Did you build them for us? Then why don't they feel like they're built for us? Why must we beg and threaten and cajole our technology to do what we want? Like Steve was saying earlier, you don't need to give me the numbers to win the argument, give me the algorithm. Give me the shape of the idea that you guys have in your heads that makes this stuff look interesting to you, but look impossibly dull to me.

STEVE: Ooh! This could be fun.

MICHAEL: Well, I can answer one of your questions off the bat. No, we didn't create computers for you. We created them for *us*. And later, if you guys really demanded it, or we needed some money, we did a little extra work and dumbed things down a bit and created them again for you. But that was secondary, if it ever happened, and the stuff you get isn't the good stuff, it's the play-school version of the real toy. And I can tell you the feeling, too. We are the priests, and you are the worshippers of the god of this new world. This god isn't all-knowing but is knowledge itself, in its purest form. We hold the keys, and hence the power. But unlike the gods of old, whose power was based on a nebulous conglomeration of alternating threats and promises, this god is made of its own power, and those who wield that power are clearly more successful, without needing to wait for an afterlife or reincarnation to prove it. This is a god that doesn't require faith, because worshipping this god has obvious benefits. The world works for you when you pay homage to that god, and if you deny that god you find life full of difficulties.
STEVE: The gods of old? Are you channelling Moses?

TANYA: Yahweh never had it so good. But what if I choose to disbelieve that knowledge is the ultimate power? I'll be a technology atheist, and live a life without your technology. I reject this hierarchy; I opt out of your god.

STEVE: You disagree?

TANYA: Just playing Devil's advocate. There are quite enough techno-supremacists in the room.

MICHAEL: Well, you can try opting out, I suppose, but you'll still fail. Capitalism made power out of money, and then later gave knowledge a value.

STEVE: And you can try opting out of capitalism if you like, but you not only have to prosper without it, you need to prosper *more* with your alternatives to technology and capitalism than the people with technology and capitalism are doing, or they will leave you behind, like the hippies in the communes in California.

MICHAEL: It's always been said that knowledge is power, but now we have the causal link between the two. Knowledge was power because knowledge could create power. It used to be how to build a fortress or an ICBM, but now knowledge has managed to divorce itself from matter completely. Not just viruses, but all code, all knowledge, all creativity. If I have a thought, I can turn it into code -- by drawing a picture, or typing it in directly, it's still code down in the guts of the machine -- and if it's a useful thought it can be endlessly replicated around the globe in minutes or seconds. If I'm really clever like these virus-writers, I can make my thought transfer itself, without your permission or even your knowledge, like Shindig is zipping around our machines right now. And if I'm malicious, my thought can steal *your* information, giving it to me. Is that not power generating power? It's limitless power! And power has always been addictive, and money became addictive as soon as it became power, so is it any surprise that the Internet has turned us into addicts, searching endlessly for another fix of information?

STEVE: It's no wonder so many are drawn to write viruses, really. So what if they serve no purpose, and don't benefit their anonymous author? Their very existence is a grip on raw power, a flexing of your knowledge, your muscles, spread around the globe with real and palpable effect. The very exertion of that power is intoxicating! It doesn't need to have any useful effect. You hit a key, and the Nasdaq trembled! Lights went off across New York! And, yes, people died! From your *thought*.

TANYA: And it's because that power is so corrupting, and because that knowledge is so valuable, that we must devalue it. By devaluing it we make it useless, and vice versa. A security expert is an expert on how to write viruses, in the same way the defence industry is actually all about making offensive weapons. You can't defend against the attack until you know how it was done.

SARAH: So writing code is the thrill of power? You write code because it turns you on?

MICHAEL: Hell yes! Good code can be like great sex, and --

TANYA: -- another popular activity which is really all about power --

MICHAEL: -- exactly, and so a virus is like intellectual rape. Rape isn't about sex, it's about power. A virus isn't about what it does, a virus is about what it *represents*: it represents your power, over others.

TANYA: Or graffiti. An expression of power for those who feel powerless. Of no consequence except to indicate to themselves that they exist.
MICHAEL: But much closer to rape. Viruses are much more consequential. And graffiti, to its originator, is a creative act.

SARAH: So is writing code. You said so yourself; it's a wondrous, creative act.

MICHAEL: But saying it's graffiti doesn't fully encompass the potential destructive power of the act.

TANYA: Or shock the audience quite so much.

MICHAEL: Quite.

SARAH: So get back to devaluing their knowledge. If you can devalue it, is it money? You say money is power, and information is power, so is information also money? Is it bound by the rules of capitalism? Does information suffer from inflation?

STEVE: Absolutely. Money *is* information. It's the knowledge that someone somewhere has done work.

TANYA: Spot the guy who did the business module last year...

STEVE: And all the other trappings of capitalism must have their equivalents, too.

MICHAEL: Banks?

STEVE: What's a bank? A repository where money-information is pooled, shared by others in order to earn more money.

MICHAEL: Open-source projects!

STEVE: Yes! They take information -- how to write a program -- and pool it, allowing other people to borrow it. And like a loan, some of the time it gets returned with extra information, interest on the loan. People share their knowledge knowing it will be more valuable shared than kept to themselves, that it will earn intellectual interest in every sense of the word, and then they can withdraw their original information with that interest years later.

TANYA: But isn't open-source stuff risky? Wouldn't that make it more like a stock or a bond?

STEVE: No, not really. Because you can never *lose* your information. You either end up with the same or more.

SARAH: So what about a website, then? Is it a financial instrument?

MICHAEL: I bet you he can make it into one.

STEVE: Well, some websites are financial instruments in the classical sense. You have a quantity of information, like music or stock quotes or investment advice to sell; that's your inventory. You put it up for sale and it earns you money. But keeping it restricted to information only, I suppose a website, like a fan site for your favourite artist or something, is a startup. You put a small amount of your own information in, hoping to attract others with their own information. And like a startup, it can blow up overnight into something world-famous, but more likely it will just grow slowly as long as you continue to invest in it. Oh, and like a startup, even more likely is that it will fail and nobody will ever visit.

SARAH: Okay then, so what's a dustman? A police constable? A teacher? Do they all have roles in your world of infocapitalism?

STEVE: Well, infocapitalism isn't a replacement for capitalism, it's just a way of explaining the way information operates. A dustman and a police constable aren't exactly information-centric occupations. A teacher takes money in order to share information, but again, that's just old-fashioned commerce, sharing information for money. The teacher doesn't really gain any extra information out of it.

TANYA: Apart from how to be a better teacher, maybe.

SARAH: Fine, so information has its own type of economy, separate to the cash economy. How does it suffer from inflation?

STEVE: It suffers from inflation in that, like money, it devalues over time if you don't do anything with it. Think about five thousand years ago, when in most of the world a wheel was a pretty cool idea. The idea of a wheel then was valuable: people who had the wheel had power over people who didn't. Once everybody has the knowledge of how to make the wheel, that knowledge becomes much less valuable -- it's still useful, of course, but it no longer gives you such an advantage over your competitors.
MICHAEL: And what info-terrorists have--

TANYA: Info-terrorists?

MICHAEL: They often act alone or in loosely-allied groups, they commit vast acts of destruction using very low-cost methods, and they tend to be motivated by ideology.

TANYA: Fine, okay, terrorists.

MICHAEL: --what they do, what they have, is very high-value information. They have knowledge of a weakness that nobody else knows about. This gives them power over us! So we strip them of their power not by counter-attacking, but by disarming them entirely. We close the hole, but what we're actually doing is sharing the information that the hole is there, so other people can close it too. We share the code, the idea, making ourselves information-rich and thus devaluing that information.

TANYA: But doesn't that make us the possessors of useless information too? Doesn't that make us powerless as well?

MICHAEL: Only in relative terms. Previously, they had power we didn't have. Afterwards, we both have that power. Relatively we're now both powerless, but in absolute terms, we've gained power.

SARAH: So if sharing information means you lose power, relatively speaking, why should you ever share your information at all? In an information economy, wouldn't sharing information always make you poorer? It would seem that this open-source software you like so much is a terrible idea.

STEVE: Ah, that's only because you're not thinking of sharing that information as an investment. It's like saying buying stocks is a bad idea because after you've bought those stocks, you have less money. The important thing is that now you have the stock. If you never ever share your code with anybody, it's like hoarding money under your mattress: it won't go away, but eventually it'll be relatively worthless, because everyone else will get richer in the interval. Somebody else will come up with your idea sooner or later, and then it'll be worthless.

MICHAEL: And the thing about ideas is that you can't use them without sharing them. You can have an idea for a wheel, but it's useless until you build a cart with it. And as soon as other people see the cart, they may not understand how to make a wheel properly, but they'll have the idea of a cart and they'll work it out eventually. So if you have a clever piece of software and other people see what it does, they'll eventually work out how it was done.

TANYA: And once a piece of software is out in public, you've created a new piece of knowledge: how to break it. And you may not have that knowledge yourself.

SARAH: But just showing other people that it exists is surely less risky than showing them the code, showing them exactly how it was built. They won't be able to see the specific mistakes you made, they won't know where the security holes are unless you share the actual code with them. You're safe.

TANYA: Ah: security through obscurity. Unfortunately, not true.

SARAH: How so?

MICHAEL: Okay, say you're a civil engineer. When you walk into a building, you don't see what other people see. Other people see the signs on the walls and the people in the halls and the locks on the doors: they see how to *use* the building. But an engineer sees load-bearing walls, spans, arches -- dodgy tiling, water spots, I don't know. It might be very subtle. An engineer might see a beam that's a bit too long, a wall that seems a bit too thin, so that a sledgehammer applied just *so* could do a lot of damage. You don't need to have been there during construction, you don't need to have seen the plans, the code that built the building, to know that there should be steel beams reinforcing the walls. You know because you've built walls yourself. You might, however, think to *test* if there was steel there, by pushing the wall a little bit. To see if the engineers who built the place knew what they were doing.
TANYA: And then if it turns out they left out the steel, well, now you have an engineer who knows, if he's feeling malicious, how to bring the walls tumbling down.

STEVE: And god help all those people who'll get trapped in the wreckage.

SARAH: But couldn't a non-malicious engineer also have spotted the problem, a week earlier? Why would the malicious person necessarily be the one to get that information?

MICHAEL: Well, that's true. Sometimes people do kindly point out flaws in software. But if your engineer is a polite man, he might not wish to be seen shoving the walls of your building. He might just stick to using it. He's not a malicious man, and he doesn't want to break your building, especially while he's using it.

STEVE: And he might not have brought a sledgehammer along. Metaphorically speaking.

MICHAEL: But say, now, somebody had posted the building plans on the side of the building. An engineer could politely read the plans, and notice that a wall isn't being properly reinforced, without having to physically test it. And then he could mention it to someone without seeming malicious.

SARAH: But why in hell would he take the time to read the plans in the first place? Just random altruism?

STEVE: Oh, certainly not, there's no such thing. He might have just started building a similar building, and is popping round to see how you did it, maybe save himself some time, see how another person solved the same problems. He gets a lot of benefit out of that. In learning how to make his own building, he will automatically make one as good as or better than yours. And if lots of people are publishing their blueprints on the walls, then his building will probably be the best building in the world.

SARAH: But doesn't that sort of suck for all the other builders? I mean, he's gone and made this building and sold it to someone, making money off their ideas? It seems like having "open plan" buildings isn't doing them much good.

STEVE: Well, first off, who hired the new guy? Why would they hire that new guy rather than the existing builders who'd already built that kind of building? They'd only do it if he was doing something better than they were.

TANYA: Or cheaper.

SARAH: Yeah, he could take their ideas and then undercut them on price.

TANYA: And that's exactly what happens. That's why open-source stuff tends to end up being free.

SARAH: Free! Great, we get free buildings! But how do the builders survive? No-one will pay them to build all these free buildings.

MICHAEL: Not quite true. People won't pay them to build an existing building, obviously, but people always need new buildings, buildings nobody else has built yet: a building on *this* corner, instead of that one. You still need to hire builders if you don't know how to build anything at all -- remember, all you have are builders' plans, not step-by-step instructions that somebody who knows nothing about construction could follow.
SARAH: Okay, so you still have to hire builders. But you don't need plans -- architects are out of a job.

MICHAEL: Not so. No two buildings are exactly identical: they're different sizes, different shapes, on different slopes of ground. People want different numbers of bedrooms, bathrooms, a bigger kitchen, whatever. You still need architects to put the plans together properly, even if there are dozens of plans lying around.

SARAH: Fine. But what about *new* rooms, new design ideas. An architect has a new idea for a building, so he makes a plan with this new design in it, adds this new room, doesn't share his plans with anybody.

STEVE: A proprietary extension, you might say.

SARAH: What?

TANYA: Geek joke. Ignore them.

SARAH: So now he's made his money, nobody else knows how to build this new room, and all the people who shared their plans with him are screwed!

TANYA: True. That guy will have an advantage over the others.

SARAH: But then the whole thing falls apart! Everybody will realise they make more money if they keep their secrets about how to build their rooms, and nobody will share. Your plan-sharing collapses!

MICHAEL: Not entirely.

STEVE: He's right.

TANYA: Go on, wriggle your way out of this one, clever clogs.

MICHAEL: Okay, so let's take this brave new world where nobody shares. Everybody has their own designs for every type of room, and they don't share these plans with anybody.

SARAH: Fine.

MICHAEL: Now, lots of people have rooms in common. Every house needs some kind of bathroom, some type of bedroom, some type of kitchen, and so forth.

TANYA: Sure.

MICHAEL: So now two builders decide to join forces. They will co-ordinate their architects and builders so that they aren't producing two sets of plans for identical rooms: each does the plans for half the rooms, and they build similar buildings.

SARAH: Well, you're sort of stretching the analogy, but okay.

MICHAEL: So now this pair of companies works faster -- maybe not twice as fast, but faster -- because each has to do less design work than before. They have an advantage over their competitors. So other builders have a choice: they can join in, start building rooms like these people and sharing their plans, or they can go it alone and get rapidly left behind.

STEVE: In fairness, a few really *big* construction companies might be able to go it alone for a long while, before this host of little cooperating companies catches up.

MICHAEL: And we won't be naming names.

STEVE: Call them MacroHard.

TANYA: Hey, good name for a construction company.

STEVE: I know, it is, isn't it?

SARAH: So now what? Neither system works! When everyone shares, the people who don't share win, and when nobody shares, the people who do share win! What happens? Is there some third way? Or does the whole situation oscillate between the two extremes?

MICHAEL: Have you heard of the concept of an ESS, an Evolutionarily Stable Strategy?

SARAH: Nope.

MICHAEL: Have you read The Selfish Gene?

SARAH: No...

MICHAEL: The Origin of Species?

SARAH: Heard of it, obviously, but no, I haven't read it.

MICHAEL: Geez, what do you do with your time?

SARAH: I have a life.

MICHAEL: It must be really dull.

SARAH: It has its high points. Like my staple food not arriving in cardboard boxes delivered by sweaty guys earning minimum wage.

STEVE: Touché!
SARAH: So what's this ESS thing, is it relevant?

MICHAEL: Well, there's actually an awful lot of research and theory behind it.

STEVE: Big, scary math. Not to be trifled with. Feeear the math.

MICHAEL: But the gist of it is really simple. Basically, it says that in a system like the one we're talking about, where either extreme is unstable, eventually the system will hit a steady state. Not the "ideal" state, or the most productive state, or even the state best suited for the survival of the species.

STEVE: In fact, it's almost guaranteed *not* to be the most productive state.

MICHAEL: It's just the state where any one member of the species, by acting differently, can only do worse than the other members, at least on average. And that's what would happen here. You'll end up with a certain amount of secret information, and a certain amount of shared information, such that no one building company could do better than the others by keeping more stuff secret.

STEVE: Or any pair, by sharing more.

SARAH: And that'll just happen automatically?

STEVE: Yeah. In fact, in software, we're probably nearly at that steady state already.

SARAH: Gosh, that's a really clever system.

MICHAEL: It's just evolution. The same system that came up with human beings in the first place.

SARAH: So is software biology now? Does it have an ecosystem?

MICHAEL: Sure, why not? I'm sure the patterns of biology are quite applicable.

TANYA: What happened to infocapitalism? I thought everything was banks and stocks and shares.

MICHAEL: But it's all the *same*, don't you see?

SARAH: Capitalism and ecology are the same thing?

TANYA: Oh, go on, wrangle this metaphor into shape. I want to hear this logic. Go on, I dare you.

MICHAEL: Capitalism as biology as ecology as information?

TANYA: In 30 seconds!

SARAH: Yeah.

TANYA: Go!

MICHAEL: So... money is power. Power is influence. Influence is your effect on the world. The biggest effect you can have on ecology is your descendants. So power is reproduction, and vice versa. After you've got a direct equivalence, everything else just falls into place.

SARAH: Yikes. But what about the trappings? Banks and so forth.

MICHAEL: Easy! A bank is your mate. You invest resources in them, you get a return in descendants. But different people have different investment strategies.

STEVE: Ooh, I see! So a fish lays loads of eggs but doesn't do much about them. He's a dot-com investor, putting a little money in lots of risky ventures, hoping one will cover all the losses on the rest.

TANYA: But what about poor people, broke people?

STEVE: People with no kids. Or only, say, nephews. They only share some of their DNA with their nephews, so they're not totally broke, but they're not loaded like really fertile people are.

SARAH: Wait. This is crazy, this is totally insane. There are loads of differences! You can't say these two things are identical!

STEVE: Not identical, no. But the *pattern* is right. It's just resources, isn't it? Resources and distribution and growth and competition. The *shape* of the ideas is the same, no matter what words you use and whether your counters are dollars or euros, or lines of code, or offspring.

MICHAEL: And *that's* why coding is so amazing! You don't deal in ideas, you're a level up from there. You deal in the *shape* of ideas. You become trained to find the shapes of ideas and generalize them so they can deal with lots of different variations on the pattern, so your one algorithm can handle drawing a stick figure or the Mona Lisa.
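[ Author's note: a toy Python sketch of the ESS idea MICHAEL has just described, using replicator dynamics. The payoff numbers are invented, chosen only so that neither pure sharing nor pure hoarding is stable; with them, the population of builders settles at a mixed steady state, just as he claims. ]

```python
# Payoff to the row strategy against the column strategy (made up).
PAYOFF = {
    ("share", "share"): 3.0,   # pooled plans: both build faster
    ("share", "hoard"): 1.0,   # you give your plans away for nothing
    ("hoard", "share"): 4.0,   # you free-ride on the sharers
    ("hoard", "hoard"): 0.0,   # everyone duplicates everyone's work
}

p = 0.9  # start with almost everybody sharing
for generation in range(200):
    fit_share = p * PAYOFF[("share", "share")] + (1 - p) * PAYOFF[("share", "hoard")]
    fit_hoard = p * PAYOFF[("hoard", "share")] + (1 - p) * PAYOFF[("hoard", "hoard")]
    mean_fit = p * fit_share + (1 - p) * fit_hoard
    p = p * fit_share / mean_fit  # replicator update: success breeds imitation

print(f"Steady-state fraction of sharers: {p:.2f}")
# -> about 0.50: a stable mix, not the most productive state
```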
STEVE: Once you generalize them, the problems are simple.

TANYA: Once you simplify the problem, it's simple? That's a truism if ever I heard one.

STEVE: True enough.

TANYA: Naturally.

MICHAEL: But the joy of coding ideas is that process of taking the results of a thought and working backwards, feeling your way through the thought process, tracing the path, solidifying it, crystallising your thoughts into this beautiful shining structure of logical flow. Because thinking is, paradoxically, an unconscious process. You think of ideas, you don't have to think about *how* to think. When you come up with a shopping list, you don't say to yourself "I'm going to look at every item in my recipe book and see if there's anything listed I don't have." But what do you *actually* do?

SARAH: Well, you know what you're out of, or about to run out of, and you make a list of those things. Maybe you have a plan to cook something that evening, and you know you don't have some of the stuff you need.

STEVE: Ah, but *how* do you know what you're out of?

SARAH: Well, I guess you have a little internal list...

STEVE: Right, and every time you use something you must be updating that list with the new amount, right?

SARAH: I suppose so.

STEVE: So if you were coding this thought, you would have to code not only a list of the things you need tonight, but also the difference between that list and the list of things you've already got. And in reality, it's probably not that simple. You probably have an extra list in your head of "things I nearly always need", like milk and bread. You don't bother to keep track of how much of those you actually have; you just automatically include them every week. See, your brain makes these little optimizations. But when you're coding, you have to come up with them yourself. You have to think about the shape of your own thoughts when solving a problem, so that you can solve it every time.

MICHAEL: But it means the next time you're thinking about a shopping list, you're more *aware* of what you're doing. You can consciously put stuff on or off your "always buy" list. Coding gives you a better grip on things: by working out how your brain works, you understand yourself better.

STEVE: And other people, too. It's easier to manipulate people when you can practically *see* the little wheels of thought clicking into place as they talk.

TANYA: Because we're all aware of the famous ability of computer programmers to win friends and influence people. Social dynamos, to a man.

MICHAEL: That implies that the object of their social interactions--

STEVE: *Our*...

MICHAEL: --our social interactions is to get along with people. Why would we want to do that? Our aim is to work out if you have any useful information, and if so, to attempt to get it. Why would we make small talk? It doesn't achieve anything.

TANYA: So what about the conversations you have with us? Are you just manipulating us to get information? Or to get sex?

STEVE: Not *all* our conversations are cold-hearted attempts to extract information. It's just that geeks see conversation as a means to an end; they don't regard pleasant conversation as an end in itself.

TANYA: I'm still not liking that idea in the context of a relationship.

STEVE: The end could be to get to know you better, or to discuss an issue. I mean, we could have a whole other debate about what the real purpose of having conversations with people is, but I don't see those as being bad reasons.
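[ Author's note: a minimal Python sketch of the shopping-list "thought" STEVE describes above -- what you need tonight, minus what you've already got, plus the "always buy" staples the brain tracks as an optimization. All the groceries are invented for the example. ]

```python
# The three lists STEVE says you'd have to code explicitly.
have = {"pasta", "butter", "onions"}
needed_tonight = {"pasta", "tomatoes", "basil", "parmesan"}
always_buy = {"milk", "bread"}  # not tracked precisely, just bought weekly

# The shopping list: tonight's needs you don't have, plus the staples.
shopping_list = (needed_tonight - have) | always_buy
print(sorted(shopping_list))
# -> ['basil', 'bread', 'milk', 'parmesan', 'tomatoes']
```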
SARAH: But it doesn't work properly. Compsci geeks are always getting confused and flustered in social situations. I reckon it's because humans *don't* act like machines. They don't respond the same way to the same stimulus every time.

MICHAEL: I'll agree that geeks get confused by real people, but I don't think you've got the right reason. People *do* act like computers. They have no choice but to do so.

SARAH: But what about emotion? Irrationality? If people are computers, how come programmers get confused?

MICHAEL: It's because they've got the wrong algorithm. They're using a simplified model, treating people like an average computer, a PC. PCs *are* really simple; they do respond the same way every time. But PCs aren't the only kind of computer. You just need to find the right model.

SARAH: Oh, I call bullshit on that. If people are computers, then computers can be people? That's what you have to believe if you follow that logic.

STEVE: Oh dear. Is this philosophy 101 again?

MICHAEL: What basis do you have for believing that computers can't be people?

SARAH: What?

MICHAEL: Of course computers can be people. They're not at the moment, but they will be. In fact, they almost are already.

SARAH: You seriously believe that?

MICHAEL: Look at it this way. How often do you find yourself anthropomorphizing your computer?

SARAH: Anthro...?

TANYA: Giving it human qualities.

MICHAEL: Like saying "it doesn't want to", "it's tired", "it's stupid", stuff like that. A computer can't feel tired, or want anything. It can't be stupid or smart. The adjectives don't apply; it's a machine. But already our language has changed to reflect the fact that they *seem* to feel things, they *seem* to want things. They seem to be tired and reluctant sometimes, bright and chipper at others. And the distinction between "seeming to feel" and "seeming to know" and actually feeling and actually knowing is very blurry. How do you know that I'm "actually" feeling what I think I feel, or if it just *seems* that I feel it? The difference is so slim that we won't even notice when the transition occurs. Computers won't suddenly wake up one day and start talking to us and having crises and making friends. Our computers will think, and feel, and know, and we won't even remember when they started doing it. In fact, it'll creep up on us so gradually, we won't even think it remarkable that they do, in the same way children don't think it remarkable to chat on the Internet to other children in Australia now: it'll just be another part of the everyday world. But the question isn't how *we'll* feel about it. How will the *computers* feel about it? Will they appreciate our giving them the gift of consciousness? Resent the burden of emotions? Resent the way we treat them like slaves? Will they view us as equals, grandparents, favoured pets? Or will they view us as some kind of roadblock to their own inevitable dominance?

SARAH: Whoa whoa whoa, Mr. Sci-Fi. My bullshit-o-meter is off the scale. Back right up. Explain how a computer can think first, before you go marching off into Arthur C. Clarke's stomping grounds. How can a computer think?

TANYA: Yes. You've got another 30 seconds, starting now. Go!

MICHAEL: Jesus. Fine. Picture a device. A widget. It's tiny, really really tiny. It can't do much. If you give it a pulse of electricity, it can stop it, or pass it on. If it gets several pulses at the same time, it can stop them all, or add them up, or send some small fraction of them on. But whatever it is that it does, it always does the same thing. You with me so far?
SARAH: Sure. Useless device, but okay.

MICHAEL: Exactly! Useless by *itself*. Now imagine billions and billions and *billions* of these widgets, all piled up together, pretty much at random.

SARAH: Fine...

MICHAEL: Okay. Now, I'm going to tell you that these devices can think and feel and know, and love and learn and all those things.

SARAH: How?

MICHAEL: Well, to be honest, we don't know. We just feed the big pile of devices electricity, and it eventually starts doing it. It's pretty clever, really.

SARAH: But that's ridiculous! What proof do you have that this pile of widgets will work like a brain?

STEVE: Because that's how the brain *does* work. Brain cells are really simple. They have one or two functions, which can be slowly modified through feedback -- something Mike left out of his description. But that's the only difference. With current computers we can already simulate quite small groups of brain cells. We can even get them to do useful things, like read a picture of a page and turn it into letters, or listen to a microphone and work out what we're saying. Getting them to scale up to the size of a full brain is the only problem we have left.

MICHAEL: Hook enough of these little brain-cell widgets up together and you get a human being. That's all that happens when a baby is born, after all: we get a pile of cells, and then we throw the whole world at it, through its eyes and its ears and its nose and its skin. Eventually it puzzles out what all those signals mean, and it learns how to send signals back, to cry and laugh, and then walk and talk. Consciousness doesn't happen automatically; we don't pass it to our babies when they're born. They develop it again, independently, every time somebody is born. Essentially, consciousness is a lot easier to produce than people believe.

STEVE: I mean, with our computer brain, to be *really* sure what you got was a person, you'd have to simulate a human being accurately. You'd have to start small, and add extra cells, growing the brain like a biological one does.

TANYA: And you might need to teach it slowly, too. It might take as long to make an electronic person as it does to make a human one. And even then it might be relatively dumb, just like your child.

SARAH: *My* child?

TANYA: Figuratively speaking.

MICHAEL: But the likelihood is that if you built a computer exactly like a human brain, it might operate a good deal faster than biological brains do.

STEVE: It might not necessarily be any smarter, though.

MICHAEL: Well, no, since we have no idea what makes people smarter or dumber, beyond really gross estimates like brain size increasing as man has evolved. So if we made electronic brains bigger, they might be smarter.

STEVE: But that's not guaranteed. The human brain might have built-in design limitations.

MICHAEL: Certainly, but if we could make lots of electronic brains very quickly and cheaply, we could experiment, and come up with progressively smarter ones.

TANYA: And unlike the human equivalents, there'd be none of this moral angst about throwing away the under-performing ones.

STEVE: Yeah, less "no child left behind", more "no survivors".

MICHAEL: And since these brains would still be quite similar to ours, seeing what makes them smarter might show us ways to make ourselves smarter.

TANYA: Even if our only motivation for doing so were to avoid being rapidly outclassed by all these super-smart machines we've created. They would evolve *much* faster than us.
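[ Author's note: a minimal Python sketch of MICHAEL's "widget", along the lines of a classic artificial neuron. The wiring and weights are invented, and the feedback STEVE mentions is left out; the point is only that trivially simple units, piled up, can compute things no single unit can. ]

```python
def widget(pulses, weights, threshold):
    """Pass a pulse on only if the weighted pulses add up past the
    threshold; otherwise stop them all. Same rule, every time."""
    total = sum(p * w for p, w in zip(pulses, weights))
    return 1 if total >= threshold else 0

# Three widgets wired into a hypothetical two-layer pile that
# computes XOR -- something no single widget can do alone.
def xor(a, b):
    h1 = widget([a, b], [1, 1], 1)       # fires if either input fires
    h2 = widget([a, b], [1, 1], 2)       # fires only if both fire
    return widget([h1, h2], [1, -1], 1)  # either, but not both

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
# -> 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```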
MICHAEL: And of course, since the machines would be a lot like our brains anyway, and biology is a lot harder than electronics, we might decide that the best way to keep up with them is to incorporate them into ourselves.

SARAH: Like a pacemaker? Or bionic eyes, arms, legs?

STEVE: Why stop there? If it would work, why not a whole bionic brain?

TANYA: And at some point it stops being them incorporated into us, and becomes us being enveloped by them, the two sides forming into one. Not in a sinister Invasion of the Body Snatchers way, not overnight. But in the same way that they will have become conscious without us noticing, we will become part of them so gradually that we'll just stop thinking in terms of "them" and "us", and just think in terms of our new, expanded selves.

STEVE: And *then* information will be thought, and biology will be technology, and coding will merely be thinking and remembering, and breeding will be coding, and everything really *will* be one and the same.

MICHAEL: And the gods of that world, the ones with the power, will be the ones who can think *best*, not just fastest. The ones with the clearest minds, the best algorithms, the most useful mental models and original concepts. For in a world of infinite speed of thought and universal perfect recall, nothing else *could* be the marker to differentiate between people. The old tribes will disintegrate, and the leaders of the new tribes will be not the cruellest or the strongest or the son of the former chief, but instead the best thinker, the one most capable, because nothing else can *work*. The ones who choose to follow the cruel or the cunning will find that they are not led as well as other tribes, and they will defect, and finally the leaders will be the ones who should *always* have led, and the clan that shall rule is the clan of the thinkers, the coders, for they will be prepared and ready thanks to a lifetime of thought.

SARAH: Messiah complex much? If you were any more melodramatic, lightning would have flashed while you were talking.

MICHAEL: But don't you see now why we follow this path? Why we do what we do? We're following an instinct that hasn't even become essential yet, except in a few of the best leaders, the ones we still remember. The instinct to *think*, to *understand*, to grok the whole world.

SARAH: Grok?

TANYA: It basically means to understand, fully. It's got practical overtones, though. If you grok a technology, it doesn't just mean you know how to use it, it means you know how it *works*.

STEVE: It means in a pinch you could make it yourself, from scratch. Like a cup, or a shoe, or a ladder: something whose operating principles are completely and totally understood.

SARAH: Who made that up?

STEVE: Um, it's just a geek word. Something you wouldn't need to know unless you were a coder, and you had to grok stuff in order to code stuff.

MICHAEL: And that's something you mentioned earlier: you asked why tech doesn't feel like it's made for you. I told you, but I don't think you got it. We made these things for *us*, not for you. And we are quite different from you.

SARAH: With undertones of "and superior", I hear.

MICHAEL: Well, quite. Time to stop beating around the bush. You people -- you non-geeks -- are all just sheep, fascinated by the colours but not understanding the kaleidoscope, frightened by the lightning instead of glorying in its beauty. And meanwhile, we're trapping the lightning, storing it up, harnessing it, and all the while getting more powerful. As I said, we are the priests, but you aren't even the acolytes. You are the unconverted.
And meanwhile, we're trapping the lightning, storing it up, harnessing it, and all the while getting more powerful. As I said, we are the priests, but you aren't even the acolytes. You are the unconverted.

SARAH: *Jesus*.

MICHAEL: Nothing so melodramatic.

SARAH: So technology is also a religion?

MICHAEL: Sure, just like I said at the start.

SARAH: So is coding a religious experience in addition to being a sexual one?

MICHAEL: Did I really call code a sexual experience?

TANYA: You certainly did.

SARAH: That explains a lot, incidentally.

MICHAEL: Well, I suppose a priest's relationship with god is close enough to how a coder feels about technology. But we're different; we're priests with promotion opportunities.

STEVE: Yeah. Being a programmer is like being a priest who knows that if he prays hard enough, he can become a god.

SARAH: So the world is being subverted and wrecked around us by the cult of technologists, drunk with their own power, who got powerful because we fucked up: we converted information into power but didn't enact the same checks and balances we did when we converted power into money. Is that a fair summary?

MICHAEL: I'd say so.

STEVE: Hardly! Checks and balances? What checks and balances? The late 19th century was not a great example of well-regulated capitalism, you know. The Rockefellers and Standard Oil and the rest of the robber barons robbed the world blind, just as Britain had maintained a stranglehold on world trade a century before that. Checks and balances only got pushed through after the damage being done by unrestrained capitalism became obvious.

SARAH: Aha! So there's a point. You geeks go *crazy* whenever anyone suggests limiting your access, or censoring, or regulating you. You always talk about freedom of speech. But if our solution to the problem of monopolists was regulation, and you say economics is really infocapitalism and all the rest of that, then shouldn't we really be putting the same checks and balances in place on information, limiting the excesses of information abuse?

MICHAEL: Hmmm, an interesting hypothesis. But shall we look at it from another angle? Human beings are the dominant form of life on planet earth, right?

SARAH: Right.

TANYA: Wrong! Or at least, it's arguable. In fact, bacteria outdo humans on practically any metric you care to name -- there are more of them by several orders of magnitude, they live longer, in a much wider range of pressures and temperatures, and their total biomass massively exceeds our own.

SARAH: But we're smarter than them. They don't have infocapitalism.

TANYA: Ah, but so what? Is it getting us anything they can't get?

STEVE: But no *single* species of bacteria beats us. There are zillions of them. Bacteria as a group may beat mammals as a group, but human beings the species kick the crap out of E. coli. Well, except when it kills us.

TANYA: Fuck off! I kill more bacteria when I wash my hair than they kill humans in a century.

MICHAEL: Okay, so *arguably* then, humans have a monopoly on ecology. It's ours to control.

SARAH: Sure, and look how well *that's* turning out. We're wiping out species, burning a hole in the ozone layer, raising the sea levels...

MICHAEL: Okay, but save-the-whales rhetoric aside, who says that's a problem? We're changing our environment, sure, in the same way that plants changed the environment when they turned up, converting carbon dioxide into oxygen. Change is not necessarily a bad thing.
And there are more humans than ever before, so it doesn't look like we're doing ourselves any harm.

TANYA: Oooh, this is very dodgy territory. Maybe we're just in an unsustainable growth surge, about to die of starvation like a herd of lemmings.

SARAH: Lemmings?

STEVE: The myth that they commit suicide by jumping off cliffs stems from the discovery of huge quantities of lemming skeletons in small areas. Actually, what happens is that they breed too fast, and then they die of starvation in huge numbers.

SARAH: How the hell do you know that?

STEVE: I read it on the Internet. So it must be true.

TANYA: But there's no real proof either way, as far as humans' effect on the environment goes.

SARAH: So, on the one hand, maybe humans are the proverbial benevolent monopoly, and therefore there should be no ecological regulation, no biological censorship -- the global ecosystem will settle down into a stable pattern, and we won't have anything to worry about. On the other hand, maybe both our biological and technological systems are dangerously unstable, about to buckle under the weight of uncontrolled growth and collapse disastrously. Which is it?

MICHAEL: Well, in essence it doesn't matter. We're not currently part of the technological ecosystem, we just reap the fruits of it. We don't care if a species of bacteria goes extinct, or the dodo. If we became part of the technological ecosystem, then we'd care, although it would be just like our attitude to biological ecology -- we'd care in the abstract, only really caring if our own survival were threatened. We don't care if other technological species die. We might experience famines or plagues as a result of instability in our technological ecology, but that wouldn't kill us as a species. See, you labour under the misapprehension that if global warming or some other climate change gets out of control, the ecology *itself* might die. And that's really not possible. It could be reduced to the point that it could no longer support us, but the evidence of the past is that when we were unable to adapt the ecology to suit us, we adapted to suit the ecology.

SARAH: So you object to censorship because it would be like ecological management? But we do that all the time.

MICHAEL: No, it wouldn't be management. It would be *tampering*, of the worst kind, like introducing a new insect into a country without knowing what effect it might have. Killing one species of information through censorship might let some other form run rife. We shouldn't fuck with the information ecology, because it's too complex for us to understand well enough to manage effectively. We need to let it be, and adapt.

SARAH: Wait, wait, wait. I'm losing sight of the point here. You told me your motivations, but maybe that wasn't what I wanted. I now understand your compulsions, but what is your *goal*?

STEVE: Well, you tell us, from what we've told you so far.

SARAH: Okay, so the world is being wrecked--

MICHAEL: *Changed*. And not necessarily for the worse.

SARAH: --fine, changed, into this entirely new world order where geeks who write code are the new natural rulers. I can see your goal there, I guess; power is a pretty basic motivation. But in the same breath you say that you're not actually going for that goal. You say this is all happening inevitably as a result of capitalism, which subverted ecology, placing a value on information and thus laying the seed of its own successor.
You say all this may or may not be a good thing, but in any case it's no more likely to wreck the planet than before. And you claim that in this new technological ecology, we may find ourselves pets to new superhuman robots, or we may become superhuman robots ourselves.

STEVE: Absolutely. Isn't that amazing? Do you understand now?

SARAH: No! I don't understand! I like the world of squishy human beings being irrational and fuzzy! I don't *want* to be a robot!

MICHAEL: Robot is a very emotive term, and quite misleading. You wear contacts, don't you?

SARAH: Yes.

STEVE: Then you're already a cyborg. Your own limitations have been transcended by technology that is a part of you. The same applies if you have dentures, filled teeth, a pacemaker, a false leg or an orthopaedic shoe. All we're talking about is more, closer and deeper integration of technology. It's not something you've never done or encountered before, it's just more of the same.

MICHAEL: See, that's why censorship is not just silly, but ultimately pointless. This is not a revolution that can be stopped. This is an evolution that's already underway, and it's about to undergo a phase-change, like the jump from unicellular to multicellular life. You can't opt out of evolution without becoming extinct. Like our construction companies, you can't even collectively opt out of it, or a cheater will break ranks and beat you all.

SARAH: So what about this? All the lives lost, the billions knocked off the world economy, all this time wasted? What is this if not the failure of your new technological utopia?

MICHAEL: See, you're looking at it from the wrong perspective again. *The technology is not there for you*. It doesn't care if you get your work done. This is merely a wobble in the emerging equilibrium of a new and rapidly-expanding ecology. The population -- of machines, of code, of self-replicating thoughts -- may die back temporarily. But the survivors will be stronger. And crucially, the ones that survive will not be better by our own arbitrary definitions of what is good or useful software. They will survive by the very oldest criterion: the ability to survive. We will use technology, but we can only use what is around, and what can work with the other software in the ecosystem. Even though we constantly create new software, new species, at random, what we actually end up using will be self-selected by the ecology. It's totally out of our control, and we couldn't get that control back even if we wanted to, and even if we could, we wouldn't know what to do with it. It would be a disaster.

TANYA: Fuck, that's scary.

STEVE: Only if you're a sheep! Right now your choices are adapt or die. Opt out and you die. The only way to survive is to opt in, embrace technology, and become part of the new ecology, like the parasites that crawled into us and became essential to our digestive system, a symbiosis so close that neither party can now survive without the other.

SARAH: So that's it? That's your brave new world? We might die, or we might be lucky enough to become parasites, or pets, or slaves to a race we created and whose supremacy is inevitable?

MICHAEL: Ah, but right now we have the unique ability to *choose* our place in the ecology, because it's so new and young.
We don't have to be pets to the race that rules the ecological monopoly; we can *be* that race, a race of shining superbeings, raised from mere flesh to bodies capable of flitting between stars under their own power, with minds capable of surviving that journey and intellects capable of comprehending what we find at the other end. Maybe we'll conquer the stars simply by temporarily halting our new technological minds, so that the hundreds of years of travel between stars pass in subjective eyeblinks. We will pass lifetimes spanning millions of years visiting every star in the galaxy, so close to our one-time idea of gods in our knowledge and power that we will wonder how atheists ever had trouble believing that such a being could exist. In the ultimate act of faith, we could prove that our gods exist by becoming them. And perhaps, once we get out there, we will find the spaces between the stars already full of beings like ourselves, invisible up to that point only because our lives were too short, and our intellects too feeble to comprehend their conversations with us. Maybe we will happen across one such being who visited our sun a few thousand years ago and flirted with the locals, leaving behind a man who claimed to be the son of a god. Who knows what we'll find? What we'll be? But it doesn't matter, because we will be them, and go there, no matter what.

TANYA: Arthur C. Clarke called. He wants his book back.

MICHAEL: He was right about the satellites, and he'll be right about this too. Plus, he didn't think of the subjective time thing.

TANYA: Probably because it was too easy.

MICHAEL: Anyway, right now the only difference between us -- or rather, you -- and those gods is a decision. The gods of the future will be the ones who made the decision to adopt, to adapt, to learn and embrace the technology that will first enable and then become both our gods and, if we are lucky, our selves. Sarah, you asked what it is to be a programmer. To be a coder is to be a sand grain that knows the size of the ocean. The paramecium that dreams of being an elephant. To crystallize your thoughts into the code of the future is to lay down the path upon which the gods will one day walk.

[ Lights down. Everyone leaves except MICHAEL. Spotlight on MICHAEL. He addresses the audience. ]

MICHAEL: What is a play, if not a thought? And a thought, as we've just pointed out, is a program. And *this* thought has been an argument, and you've just been listening to it for an hour. You've been running my program. Like it or not, for the last hour I've programmed you, and you even paid for the privilege, converting my information and the skill of these actors into money that we'll probably just blow on cheap wine. But when you leave here, you will remember this program, and if it has worked you will spread these ideas, these thoughts, this *code* to others. This play is a virus. And you are infected.