

Nine years of everything


› annalee@techsploitation.com

TECHSPLOITATION I’ve been writing this column for nine years. I was here with you through the dot-com boom and the crash. I made fun of the rise of Web 2.0 when that was called for, and screamed about digital surveillance under the USA-PATRIOT Act when that was required (actually, that’s still required). I’ve ranted about everything from obscenity law to genetic engineering, and I’ve managed to stretch this column’s techie mandate to include meditations on electronic music and sexology. Every week I gave you my latest brain dump, even when I was visiting family in Saskatchewan or taking a year off from regular journalism work to study at MIT.

But now it’s time for me to move on. This is my last Techsploitation column, and I’m not going to pretend it’s not a sad time for me. Writing this column was the first awesome job I got after fleeing a life of adjunct professor hell at UC Berkeley. I was still trying to figure out what I would do with my brain when Dan Pulcrano of the Silicon Valley Metro invited me out for really strong martinis at Blondie’s Bar in the Mission District and offered me a job writing about tech workers in Silicon Valley. My reaction? I wrote a column about geeks doing drugs and building insanely cool shit at Burning Man. I felt like the hipster survivalist festival was the only event that truly captured the madness of the dot-com culture I saw blooming and dying all around me. I can’t believe Dan kept me on, but he did.

Since then, my column also found a home in the Guardian and online at Alternet.org, two of the best leftist publications I’ve ever had the honor to work with. I’ve always believed the left needed a strong technical wing, and I’ve tried to use Techsploitation to articulate what exactly it would mean to be a political radical who also wants to play with tons of techie consumerist crap.

There are plenty of libertarians among techie geeks and science nerds, but it remains my steadfast belief that a rational, sustainable future society must include a strong collectivist vision. We should strive to use technologies to form communities, to make it easier for people to help the most helpless members of society. A pure free-market ideology only leads to a kind of oblivious cruelty when it comes to social welfare. I don’t believe in big government, but I do believe in good government. And I still look forward to the day when capitalism is crushed by a smarter, better system where everyone can be useful and nobody dies on the street of a disease that could have been prevented by a decent socialized health care system.

So I’m not leaving Techsploitation behind because I’ve faltered in my faith that one day my socialist robot children will form baking cooperatives off the shoulder of Saturn. I’m just moving on to other mind-ensnaring projects. Some of you may know that I’ve become the editor of io9.com, a blog devoted to science fiction, science, and futurism. For the past six months I’ve been working like a maniac on io9, and I’ve also hired a kickass team of writers to work with me. So if you want a little Techsploitation feeling, be sure to stop by io9.com. We’re there changing the future, saving the world, and hanging out in spaceships right now.

I also have another book project cooking in the back of my brain, so when I’m not blogging about robots and post-human futures, I’m also writing a book-length narrative about, um, robots and post-human futures. Also pirates.

The past nine years of Techsploitation would have been nothing without my readers, and I hope you can picture me with tears in my eyes when I write that. I’ve gotten so many cool e-mails from you guys over the years that they’ve filled my heart forever with glorious, precise rants about free software, digital liberties, sex toys, genetic engineering, copyright, capitalism, art, video games, science fiction, the environment, and the future — and why I’m completely, totally wrong about all of them. I love you dorks! Don’t ever stop ruthlessly criticizing everything that exists. It is the only way we’ll survive.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who is slowly working on fixing her broken WordPress install at www.techsploitation.com, so eventually you’ll be able to keep up with her there again.

The new privacy


› annalee@techsploitation.com

TECHSPLOITATION It’s shocking how quickly we’ve all gotten used to the idea that the government can and will listen in on everything we say on our telephones, as well as everything we do on the Internet. Case in point: the FISA Amendments Act passed in the House last week, and is predicted to pass the Senate this week. This is a bill that grants telecoms retroactive immunity for illegally giving the National Security Agency access to the phone calls and Internet activities of millions of US citizens. What this bill ultimately does, aside from not holding companies accountable to the Constitution, is open the door for future mass infractions.

We’re looking down a fiber-optic cable that leads to a future where US spies can snarf up everybody’s data without warrants, combing through it for potential suspects in an ongoing digital witch hunt for terrorists or other "bad guys." I’m not saying anything new here. This is just a quick recap of every progressive futurist’s nightmare: it’s an Orwellian world where nothing you do goes unseen.

My hope is that this absurd bill won’t pass the Senate. But if it does, at least we can hope it will be somehow held in check by other laws to come, and by constitutional challenges. But I still think it’s time that we kiss our old-fashioned notions of privacy goodbye.

And not because we will all reveal our secrets and therefore be equally naked, as "transparent society" shill David Brin has argued. We never will be equally naked. There will always be governments and wealthy entities that have the means to cover their tracks and hide their transgressions. I think we must shed the idea that somehow we can protect the rights of ordinary people by protecting what we in the United States once called privacy.

The notion that we should each be granted a special sphere where everything we do goes unseen, unremarked, and unrecorded is itself relatively new, something that could hardly have existed in a small-town society where everybody knew everybody else’s business. And it still hardly exists in many high-density countries like Japan and China, where privacy is not as prized as other rights are.

What we ask for when we ask for privacy in the United States is simply a space (physical or digital) to do legal things without fear of reprisals. Even when we had a more tightly wrapped notion of privacy, say, 50 years ago, it was hardly perfect. Secrets leaked; spies spied. But there were no 24-hour videocam logs and detailed records of your every correspondence available and searchable online. You could write love letters to your secret admirer, ask her to burn them, and be sure nobody would ever know about your forbidden love.

If those letters were intercepted in a small community, your infamy would live forever. Not so in the digital age, when there’s so much readily available infamy that nobody could be bothered to remember your indiscretions for more than a few seconds. What I’m trying to say is that we will never have the old privacy of the burned letter again.

Instead we will have the new privacy, where what we do can be seen by anyone, but will mostly be hidden by crowds. The problem is that we still lose the old privacy forever. My secret transgressions may be drowned out by multitudes, but anyone who is determined to spy on my most private life will probably be able to do so — without a warrant.

So what do we do? Develop new standards of propriety, becoming as formal and controlled behind closed doors as we are in public? I think that will have to happen in some cases. And in most cases, people will rely on crowds to hide them, hoping they never fall under sustained scrutiny. The more noise all of us make, the more we can help to hide the innocent. There will be a kind of privacy in the crowd.

But there will also be a private class of people who never have to rely on crowds. To return to my earlier point, I don’t buy for a minute the idea that at some point everyone — including the rich and politically connected — will be subjected to the same scrutiny as those people whose phone records were illegally handed over to the NSA by AT&T. The powerful will continue to have old-fashioned privacy, while the rest of us must get used to living without it.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who tried to hide behind a crowd once but they dispersed.

Three Internet myths that won’t die


› annalee@techsploitation.com

TECHSPLOITATION Since I started writing this column in 1999, I’ve seen a thousand Internet businesses rise and die. I’ve watched the Web go from a medium you access via dial-up to the medium you carry around with you on your mobile. Still, there are three myths about the Internet that refuse to kick the bucket. Let’s hope the micro-generation that comes after the Web 2.0 weenies finally puts these misleading ideas to rest.

Myth: The Internet is free.

This is my favorite Internet myth because it has literally never been true. In the very early days of the Net, the only people who went online were university students or military researchers — students got accounts via the price of tuition; military personnel got them as part of their jobs. Once the Internet was opened to the public, people could only access it by paying fees to their Internet service providers. And let’s not even get into the fact that you have to buy a computer or pay for time on one.

I think this myth got started because pundits wanted to compare the price of publishing or mailing something on the Internet to the price of doing so using paper or the United States Postal Service. Putting a Web site on the Net is "free" only if you pretend you don’t have to pay your ISP and a Web hosting service to do it. No doubt it is cheaper than printing and distributing a magazine to thousands of people, but it’s not free. Same goes for e-mail. Sure it’s "free" to send an e-mail, but you’re still paying your ISP for Internet access to send that letter.

The poisonous part of this myth is that it sets up the false idea that the Internet removes all barriers to free expression. The Internet removes some barriers, but it erects others. You can get a few free minutes online in your local public library, maybe, and set up a Web site using a free service (if the library’s filtering software allows that). But will you be able to catch anyone’s attention if you publish under those constraints?

Myth: The Internet knows no boundaries.

Despite the Great Firewall of China, an elaborate system of Internet filters that prevent Chinese citizens from accessing Web sites not approved by the government, many people still believe the Internet is a glorious international space that can bring the whole world together. When the government of a country like Pakistan can choose to block YouTube — which it has and does — it’s impossible to say the Internet has no boundaries.

The Internet does have boundaries, and they are often drawn along national lines. Of course, closed cultures are not the only source of these boundaries. Many people living in African and South American nations have little access to the Internet, mostly due to poverty. As long as we continue to behave as if the Internet is completely international, we forget that putting something online does not make it available to the whole world. And we also forget that communications technology alone cannot undo centuries of mistrust between various regions of the world.

Myth: The Internet is full of danger.

Perhaps because the previous two myths are so powerful, many people have come to believe that the Internet is a dangerous place — sort of like the "bad" part of a city, where you’re likely to get mugged or hassled late at night. The so-called dangers of the Internet were highlighted in two recent media frenzies: the MySpace child-predator bust, in which Wired reporter Kevin Poulsen discovered that a registered sex offender was trolling MySpace for kids and actively befriending them; and the harassment of Web pundit Kathy Sierra by a group of people who posted cruelly Photoshopped pictures of her, called for her death, and posted her home address.

Despite the genuine scariness represented by both these incidents, I would submit they are no scarier than what one could encounter offline in real life. In general, the Internet is a far safer place for kids and vulnerable people than almost anywhere else. As long as you don’t hand out your address to strangers, you’ve got a cushion of anonymity and protection online that you’ll never have in the real world. It’s no surprise that our myths of the Internet overestimate both its ability to bring the world together and to destroy us individually.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who is biased in favor of facts.

A space colony in Wisconsin


› annalee@techsploitation.com

TECHSPLOITATION Every year in late May, several thousand people descend on Madison, Wis., to create an alternate universe. Some want to build a galaxy-size civilization packed with humans and aliens who build massive halo worlds orbiting stars. Others are obsessed with what they’ll do when what remains of humanity must survive in the barren landscape left after Earth has been destroyed by nukes, pollution, epidemics, nanotech wipeouts, or some combination of all four. Still others live parts of their lives as if there were a special world for wizards hidden in the folds of our own reality.

They come to Madison for WisCon, a science-fiction convention unlike most I’ve ever attended. Sure, the participants are all interested in the same alien worlds as the thronging crowds that go to the popular Atlanta event Dragon*Con or the media circus known as Comic-Con. But they rarely carry light sabers or argue about continuity errors in Babylon 5. Instead, they carry armloads of books and want to talk politics.

WisCon is the United States’ only feminist sci-fi convention, but since it was founded more than two decades ago, the event has grown to be much more than that. Feminism is still a strong component of the con, and many panels are devoted to the work of women writers or issues like sexism in comic books. But the con is also devoted to progressive politics, antiracism, and the ways speculative literature can change the future. This year there was a terrific panel about the fake multiculturalism of Star Trek and Heroes, as well as a discussion about geopolitical themes in experimental writer Timmel Duchamp’s five-novel, near-future Marq’ssan series.

While most science fiction cons feature things like sneak-preview footage of the next special effects blockbuster or appearances by the cast of Joss "Buffy the Vampire Slayer" Whedon’s new series Dollhouse, WisCon’s highlights run toward the bookish. We all crammed inside one of the hotel meeting rooms to be part of a tea party thrown by the critically acclaimed indie SF Web zine Strange Horizons (strangehorizons.com), then later we listened to several lightning readings at a stately beer bash thrown by old-school SF book publisher Tor.

One of the highlights of the con was a chance to drink absinthe in a strangely windowless suite with the editors of alternative publisher Small Beer Press, whose authors include the award-winning Kelly Link and Carol Emshwiller. You could genuinely imagine yourself on a spaceship in that windowless room — or maybe in some subterranean demon realm — with everybody talking about alternate realities, AIs gone wild, and why Iron Maiden is the best band ever. (What? You don’t think there will be 1980s metal in the demon realm?)

Jim Munroe, Canadian master of DIY publishing and filmmaking, was at WisCon talking about literary zombies and ways that anarchists can learn to organize their time better, while guest of honor Maureen McHugh gave a speech about how interactive online storytelling represents the future of science fiction — and fiction in general. Science fiction erotica writer/publisher Cecilia Tan told everybody about her latest passion: writing Harry Potter fan fiction about the forbidden love between Draco and Snape. Many of today’s most popular writers, like bestseller Naomi Novik, got their start writing fan fiction. Some continue to do it under fake names because they just can’t give it up.

Perhaps the best part of WisCon is getting a chance to hang out with thousands of people who believe that writing and reading books can change the world for the better. Luckily, nobody there is humorless enough to forget that sometimes escapist fantasy is just an escape. WisCon attendees simply haven’t given up hope that tomorrow might be radically better than today. They are passionate about the idea that science fiction and fantasy are the imaginative wing of progressive politics. In Madison, among groups of dreamers, I was forcefully reminded that before we remake the world, we must first model it in our own minds.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who bought way too many books at WisCon and can’t wait to read them all.

Human-animal hybrid clones


› annalee@techsploitation.com

TECHSPLOITATION I just love saying that scientists are creating "human-animal hybrid clones" because that single phrase pulls together about 15 nightmares from science fiction and religion all at the same time. Although if you think about it, one fear really should cancel out the other one. I mean, if you’re worried about human cloning, then the fact that these are clones created by sticking human DNA inside cow eggs should be comforting. I mean, it’s not really a human anymore at that point, right?

But the real reason I’m gloating over this piece of completely ordinary biological weirdness is that last week the British Parliament began the process of legalizing human-animal hybrid embryo cloning. While not explicitly illegal in the United States, the process has been so criticized (including by former president Bill Clinton) that most researchers have stayed away from it. Now, however, this law could make it easy for Brits to advance their medicine far faster than people in the supposedly high-tech and super-advanced United States.

You see, these scary hybrids could become stem cell goldmines. One of the barriers to getting stem cells for research is that they only come from human embryos, and human embryos come from human women. Some of us may be cool with donating our eggs to science, but a lot of us aren’t — and that means scientists don’t have a lot of material to work with if they want to do stem cell research that could do things like reverse organ failure and cure Alzheimer’s.

And that’s where these human-animal hybrids come in. We can already take a cow egg, remove its nucleus, inject human DNA, and zap the egg with electricity, thus reprogramming it to be human. And we can even get that egg to start dividing as if it were an embryo, creating a bunch of human stem cells. Beyond that, we just aren’t sure. Will these embryos create viable stem cells to treat all those nasty human diseases? Or will they just be duds that act too much like cow cells to be usable by humans? If there’s even a small chance that the former will come to pass, it’s worth investigating — and if it pans out, we’ll have solved the human stem cell shortage problem.

That’s why scientists in the United Kingdom are doing it, and why their government is debating exactly how the process should be regulated. You wouldn’t necessarily know that from the way it’s been covered in the media, where even the normally staid International Herald Tribune began an article about the potential UK law with this sentence: "The British Parliament has voted to allow the creation of human-animal embryos, which some scientists say are vital to find cures for diseases but which critics argue pervert the course of nature." Nice move, throwing in the word "pervert" there.

When the media writes about how scientists might "pervert the course of nature," and the anti-science group Human Genetics Alert is bombarding me and pretty much every other science journalist on the planet with crazed, uninformed screeds about how this law will lead to "designer babies," you start to feel like a huge portion of the population doesn’t know the difference between science and science fiction. Indeed, one of the most anticipated sci-fi horror movies for next year is Splice, which is about a pair of rock star geneticists who create a human-animal hybrid. Of course the hybrid happens to be a deadly, exotic-looking woman with wings and a tail and a super-hot body. Early images released from the production show her naked, with her animal parts looking sexy and dangerous.

The completely impossible "designer baby" in Splice is what most people think will happen when scientists create human-animal hybrid clones. But creating something like the sexy Splice lady is not only beyond the reach of current science, it is also illegal under the proposed UK law. The hybrid clones will only be permitted to develop for about two weeks, which is the time required to create stem cells. After that, they must be destroyed. So the UK law actually makes the nightmare scenario impossible, not possible.

And that’s why I’m psyched about getting my human-animal hybrid clones.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who can’t wait to see the world populated with human-elephant-dolphin hybrids.



› annalee@techsploitation.com

TECHSPLOITATION Last week I wrote about the premise of Oxford professor Jonathan Zittrain’s new book, The Future of the Internet and How to Stop It (Yale University Press). He warns about a future of "tethered" technologies like the digital video recorder and smartphones that often are programmed remotely by the companies that make them rather than being programmed by users, as PCs are. As a partial solution, Zittrain offers up the idea of Wikipedia-style communities, where users create their own services without being "tethered" to a company that can change the rules any time.

Unfortunately, crowds of people running Web services or technologies online cannot save us from the problem of tethered technology. Indeed, Zittrain’s crowds might even unwittingly be tightening the stranglehold of tethering by lulling us into a false sense of freedom.

It’s actually in the best interest of companies like Apple, Comcast, or News Corp to encourage democratic, freewheeling enclaves like Wikipedia or MySpace, because those enclaves convince people that their whole lives aren’t defined by tethering. When you get sick of corporate-mandated content and software, you can visit Wikipedia or MySpace. If you want a DVR that can’t be reprogrammed by Comcast at any time, you can look up how to build your own software TV tuner on Wikipedia. See? You have freedom!

Unfortunately, your homemade DVR software doesn’t have the kind of easy-to-use features that make it viable for most consumers. At the same time, it does prove that tethered technologies aren’t your only option. Because there’s this little puddle of freedom in the desert of technology tethering, crowd-loving liberals are placated while the majority of consumers are tied down by corporate-controlled gadgets.

In this way, a democratic project like Wikipedia becomes a kind of theoretical freedom — similar to the way in which the US constitutional right to freedom of speech is theoretical for most people. Sure, you can write almost anything you want. But will you be able to publish it? Will you be able to get a high enough ranking on Google to be findable when people search your topic? Probably not. So your speech is free, but nobody can hear it. Yes, it is a real freedom. Yes, real people participate in it and provide a model to others. And sometimes it can make a huge difference. But most of the time, people whose free speech flies in the face of conventional wisdom or corporate plans don’t have much of an effect on mainstream society.

What I’m trying to say is that Wikipedia and "good crowds" can’t fight the forces of corporate tethering — just as one person’s self-published, free-speechy essay online can’t fix giant, complicated social problems. At best, such efforts can create lively subcultures where a few lucky or smart people will find that they have total control over their gadgets and can do really neat things with them. But if the denizens of that subculture want millions of people to do neat things too, they have to deal with Comcast. And Comcast will probably say, "Hell no, but we’re not taking away your freedom entirely because look, we have this special area for you and 20 other people to do complicated things with your DVRs." If you’re lucky, Comcast will rip off the subculture’s idea and turn it into a tethered application.

So what is the solution, if it isn’t nice crowds of people creating their own content and building their own tether-free DVRs? My honest answer is that we need organized crowds of people systematically and concertedly breaking the tethers on consumer technology. Yes, we need safe spaces like Wikipedia, but we also need to be affirmatively making things uncomfortable for the companies that keep us tethered. We need to build technologies that set Comcast DVRs free, that let people run any applications they want on iPhones, that fool ISPs into running peer-to-peer traffic. We need to hand out easy-to-use tools to everyone so crowds of consumers can control what happens to their technologies. In short, we need to disobey.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd whose best ideas have all been appropriated and copyrighted by corporations.

The Internet dystopia


› annalee@techsploitation.com

TECHSPLOITATION A couple of weeks ago I went to the annual Maker Faire in San Mateo, an event where people from all over the world gather for a giant DIY technology show-and-tell extravaganza. There are robots, kinetic sculptures, rockets, remote-controlled battleship contests, music-controlled light shows, home electronics kits, ill-advised science experiments (like the Mentos–Diet Coke explosions), and even a barn full of people who make their own clothing, pillows, bags, and more. Basically, it’s a weekend celebration of how human freedom combined with technology creates a pleasing but cacophonous symphony of coolness.

And yet the Maker Faire takes place against a backdrop of increasing constraints on our freedom to innovate with technology, as Oxford University researcher Jonathan Zittrain points out in his latest book, The Future of the Internet and How to Stop It (Yale University Press). After spending several years investigating the social and political rules that govern the Internet — and spearheading the Net censorship tracking project OpenNet Initiative — Zittrain looks back on the Net’s development and predicts a dystopian future. What’s chilling is that his dystopia is already coming to pass.

Zittrain traces the Net’s history through three phases. Initially it was composed of what he calls "sterile" technologies: vast mainframes owned by IBM, which companies could rent time on. What made those technologies sterile is that nobody could experiment with them (except IBM), and therefore innovation related to them stagnated.

That’s why the invention of the desktop PC and popularization of the Internet ushered in an era of unprecedented high-tech innovation. Zittrain calls these open-ended technologies "generative." Anybody can build other technologies that work with them. So, for example, people built Skype and the World Wide Web, both software technologies that sit on top of the basic network software infrastructure of the Internet. Similarly, anybody can build a program that runs on Windows.

But Zittrain thinks we’re seeing the end of the freewheeling Internet and PC era. He calls the technologies of today "tethered" technologies. Tethered technologies are items like iPhones or many brands of DVR — they’re sterile to their owners, who aren’t allowed to build software that runs on them. But they’re generative to the companies that make them, in the sense that Comcast can update your DVR remotely, or Apple can brick your iPhone remotely if you try to do something naughty to it (like run your own software program on it).

In some ways, tethered technologies are worse than plain old sterile technologies. They allow for abuses undreamed of in the IBM mainframe era. For example, iPhone tethering could lead to law enforcement going to Apple and saying, "Please activate the microphone on this iPhone that we know is being carried by a suspect." The device turns into an instant bug, without all the fuss of following the suspect around or installing surveillance crap in her apartment. This isn’t idle speculation, by the way. OnStar, which operates an in-car emergency communications system, was asked by law enforcement to activate the mics in certain cars using its service. It refused and went to court.

Zittrain’s solution to the tethering problem is to encourage the existence of communities like the ones who participate in Maker Faire or who edit Wikipedia. These are people who work together to create open, untethered technologies and information repositories. They are the force that pushes back against companies that want to sterilize the Internet and turn it back into something that spits information at you, television-style. I think this is a good start, but there are a lot of problems with depending on communities of DIY enthusiasts to fix a system created by corporate juggernauts. As I mentioned in my column ("User-Generated Censorship," 4/30/08), you can’t always depend on communities of users to do the right thing. In addition, companies can create an incredibly oppressive tethering regime while still allowing users to think they have control. Tune in next week, and I’ll tell you how Zittrain’s solution might lead to an even more dystopian future.

Annalee Newitz is a surly media nerd who thinks up dystopias in her spare time.

Obligatory video game outrage


› annalee@techsploitation.com

TECHSPLOITATION At this point, the outraged response to the latest installment in the Grand Theft Auto series of video games, GTA4, is pretty much obligatory. Mothers Against Drunk Driving is lobbying to get the video game rated "adults only" (effectively killing it in the US market, where major console manufacturers won’t support AO games) because there’s one scene in the game where you have the option to drive drunk. Apparently none of the good ladies of MADD have ever played GTA, since if they had they might have discovered that when you try to drive drunk, the video game informs you that you should take a cab. If you do drive, the cops immediately chase you down. Which is exactly the sort of move you’d expect from this sly, fun game, which hit stores last week.

GTA, made by edgy Rockstar Games, is basically a driving game franchise packed inside an intriguing, disturbing, elaborate urban world where you become a character whose life options are all connected to the ability to drive around in various cities. Usually you’re some kind of bad guy or shady character. Think of it as the video game equivalent of a TV show like The Wire or an urban gangster flick. What has made GTA so popular among gamers is the way it combines the fun of a driving game with the sprawling possibilities of gamer choice. And I think that’s what nongamers find so confusing — and therefore threatening — about it.

When you jump into a car in GTA, you aren’t rated on your driving skill. You don’t have to stay on a predetermined track. Sure, you have to complete a mission, but you can choose to just drive around insanely, exploring the big worlds of the GTA games, beating up cops and murdering people at random if you want. You can take drugs and get superspeedy or ram a truck into a building.

GTA4 is set inside an alternate version of New York City and takes the player even further into a world of narrative choices. You play a character named Niko, a Serbian war vet who comes to Liberty City to get revenge — or to make peace with his past. Along with several other characters, he’s just trying to get by in a huge city, but gets sucked into a world of crime and murder along the way. As you get deeper into the game, you realize that your interactions with characters are just as important as running your car missions. You can’t get anywhere without making friends, connections, and plunging deeper into Niko’s troubled past.

If GTA4 were a movie, it would have been directed by Martin Scorsese or David O. Russell, and we’d all be ooohing and aaahhing over its dark, ironic vision of immigrant life in a world at war with itself. But because GTA4 is a video game, where players are in the driver’s seat, so to speak, it freaks people out. Earlier installments of GTA inspired feminist and cultural-conservative outrage (you have the option to kill prostitutes!) and drew concern over moral turpitude from Hillary Clinton (you can beat cops to death! Or anybody!).

And yet there are other video games out there, like the family-friendly role-playing game The Sims, where players can torture people to death in ways far more disturbing than those in GTA. I was just talking to a friend who told me gleefully how he’d taken one of his Sims characters, stuck him in a VR headset, and walled him into a room that only contained an espresso machine. The character kept drinking coffee and playing the headset, pissing in the corners of the room and crying until he died. Other players have reported that you can stick a bunch of characters in the swimming pool, remove the ladder, and drown them. Then you can decorate your yard with their tombstones. That’s not the point of the game, but people can do it.

The reason these horrible things can happen in The Sims is exactly the same reason they happen in GTA: these are cutting-edge video games defined by player freedom rather than locking the player into a prescribed narrative loop where veering off the racetrack means "lose" rather than "find a new adventure." When you give players the option to explore their fantasies, you’re going to get some dark stuff. Yes, it’s disturbing. But it’s also the foundation of great art.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who has just started playing GTA4 but has already read all the spoilers for it.

User-generated censorship


› annalee@techsploitation.com

TECHSPLOITATION There’s a new kind of censorship online, and it’s coming from the grassroots. Thanks to new, collaborative, social media networks, it’s easier than ever for people to get together and destroy freedom of expression. They’re going DIY from the bottom up — instead of the way old-school censors used to do it, from the top down. Call it user-generated censorship.

Now that anyone with access to a computer and a network connection can post almost anything they want online for free, it’s also increasingly the case that anyone with computer access and a few friends can remove anything they want online. And they do it using the same software tools.

Here’s how it works: let’s say you’re a community activist who has some pretty vehement opinions about your city government. You go to Blogger.com, which is owned by Google, and create a free blog called Why the Municipal Government in Crappy City Sucks. Of course, a bunch of people in Crappy City disagree with you — and maybe even hate you personally. So instead of making mean comments on your blog, they decide to shut it down.

At the top of your Blogger blog, there is a little button that says "flag this blog." When somebody hits that button, it sends a message to Google that somebody thinks the content on your blog is "inappropriate" in some way. If you get enough flags, Google will shut down your blog. In theory, this button would only be used to flag illegal stuff or spam. But there’s nothing stopping your enemies in town from getting together an online posse to click the button a bunch of times. Eventually, your blog will be flagged enough times that Google will take action.
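The flag-and-threshold mechanism described here can be sketched in a few lines of code. This is a hypothetical illustration only: the `Blog` class, the threshold of 25 flags, and the delisting behavior are all invented for the example; Blogger's real moderation rules are not public.

```python
# Hypothetical sketch of a flag-threshold moderation system.
# The threshold value and the delisting behavior are assumptions
# made for illustration, not Blogger's actual (non-public) rules.

FLAG_THRESHOLD = 25  # invented: flags needed before action is taken

class Blog:
    def __init__(self, url):
        self.url = url
        self.flags = 0
        self.searchable = True   # appears in Blogger/Google search results
        self.online = True       # still reachable by direct URL

    def flag(self):
        """One user clicks 'flag this blog'."""
        self.flags += 1
        if self.flags >= FLAG_THRESHOLD:
            # The common outcome: the blog stays up but is dropped
            # from the search index, so only people who already
            # have the exact URL can find it.
            self.searchable = False

blog = Blog("http://whycrappycitysucks.example.com")
for _ in range(25):          # a coordinated posse of 25 clicks
    blog.flag()

assert blog.online           # not taken down...
assert not blog.searchable   # ...just made unfindable
```

The two final assertions capture the column's point: the content survives, but its discoverability does not.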

And this is where things get interesting. Google has the option of simply shutting down your access to the blog. It rarely does that, though, unless your blog is full of illegal content, like copyright-infringing videos. Generally, if you get a lot of flags, Google makes your blog unsearchable: it won’t turn up in a Blogger or Google search. The only people who will find it are people who already know about it and have the exact URL.

This is censorship, user-generated style. And it works because the only way to be seen in a giant network of user-generated content like Blogger (or MySpace, or Flickr, or any number of others) is to be searchable. If you want to get the word out about Crappy City online, you need for people searching Google for "Crappy City" to find your blog and learn about all the bad things going on there. What good is your free speech if nobody can find it?

Most sites that host user-generated content, like photo-sharing site Flickr and video-sharing site YouTube, use a system of flags similar to Blogger’s that allows users to censor each other. Sometimes you have to pick a reason why you are flagging content — YouTube offers you a drop-down menu with about 20 choices — and sometimes you just flag it as "unsafe" or "inappropriate." Generally, most sites respond to flagging the same way: they make the flagged stuff unsearchable and unfindable.

Censorship no longer works the old-fashioned way. Your videos and blogs aren’t being removed. They’re simply being hidden in the deluge of user-generated information. To be unsearchable on the Web is, in a very real sense, to be censored. But you’re not being censored by an authority from on high. You’re being censored by the mob.

That’s why I find myself rolling my eyes when I hear people getting excited about "the wisdom of crowds" and "crowdsourcing" and all that crap. Sure, crowds can be wise, and they can get a lot of work done. But they can also be destructive, cruel, and stupid. They can prevent work from being done as easily as they can get it done. And just as the Web is making it easier for crowds to collaborate, it is also making it simple for mobs to crush free expression.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd whose blogs cannot be censored by the mob, even though she’s well aware that there are mobs who would certainly like to do it.



› annalee@techsploitation.com

TECHSPLOITATION For weeks now, analysts and armchair financial nerds have been mulling over what it will mean if software megacorp Microsoft buys Web monkey farm Yahoo! Would Microsoft-Yahoo! (known forevermore as Microhoo!) challenge Google to some kind of Web domination duel and win? Probably not. As much as I would love to see Bill Gates, Sergey Brin, and Jerry Yang in some kind of unholy three-way Jell-O wrestling match, I know it will never come to pass.

Microhoo! won’t ever have what Google has right now. Sure, Microhoo! will have some solid assets: control of most PC desktops with the Windows OS, Microsoft Office crap, and the Internet Explorer browser. After chomping up Yahoo!, Microhoo! will have a second-rate search engine used by a forlorn 22 percent of Web searchers, followed by a very confused 10 percent who use Microsoft search — I bet you didn’t know Microsoft even had a search engine, did you? It would also have a giant mess of users on free Yahoo! mail, as well as Yahoo! instant messenger. Plus it would acquire a host of Yahoo! things you also didn’t know existed, like Yahoo! Buzz and Yahoo! Answers, along with about 8 percent of the Web advertising market.

What does Google have? Sure, it has a million things like Android and Orkut and Gmail and Reader and Blogger and Scoop and Zanyblob. But what it really has is Search. Fifty-nine percent of online searches go through Google servers. And if it can sell ads to 59 percent of the billions online? It owns the attention of the majority of the market. Google wins. That’s why the company isn’t worrying so much about Microhoo! and instead is doing things like investing in alternative energy research and letting its employees make psychotically long, company-wide e-mail arguments about whether it’s Earth-friendly to provide plastic bottles of water in the lunchrooms.

I shouldn’t be so glib. Google is making a halfhearted attempt to prevent Microhoo! from being born. The company offered Yahoo! an ad-sharing partnership where the two could pool their networks, put more ads in front of more eyes, and come out as an even more giant advertising machine. They’re doing a very limited test of the ad partnership over the next couple of weeks. Maybe we’ll see a Goohoo! after all.

I don’t think so. Most business pundits think the Goohoo! deal is just Yahoo!’s last-ditch effort to get a bigger offer from Microsoft. Apparently Yahoo! wants about $50 billion to become Microhoo!, and Microsoft is currently offering a little more than $40 billion. No matter what the price tag, my bet is that we’re going to see Microhoo! by this time next year. Microsoft is even contemplating a hostile takeover — that’s how serious the situation is.

So what does Microhoo! mean for us, the little guys, who just want a nice search engine that helps us find "hot XXX pussy" or "free MP3" on the Web? For one thing, it means we’ll have fewer options for searching, using Web mail, and just plain goofing around online. Microsoft actually considered bringing News Corp, owner of MySpace, in on the Microhoo! deal. That would mean MySpace, Hotmail, Yahoo! mail, and your PC software would all come from a single merged corporate entity.

Let’s say we did get a Micronewshoo! Its online offerings, combined, would be very much a version of Google’s online offerings: mail, social networking, search, Web fun. There would be no cool new thing, no sudden breakthrough application that would transform our relationship to the Web the way Search did. It would be more of the same stuff, but from fewer players — and therefore blander and bigger, like Hollywood blockbusters. New applications and content creators on the Web would be incredibly hard to find unless they had a deal with Microhoo! or Google.

Then in 20 years, a woman in a physics graduate program in China will come up with an idea for the next cool communications network. At last, we’ll say, we finally have a network free from advertising! A place where we can share information without Big Business intruding! Not like the Web, which is all corporate content and has no place for the little guy.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who thinks Google should start recycling dinosaur bones.

Pregnant men


› annalee@techsploitation.com

TECHSPLOITATION Thomas Beattie is actually not the first man to get pregnant. Almost a decade ago, a San Francisco transgendered man named Matt Rice got pregnant and had a cute son. Several years after that, I met another pregnant transman in San Francisco. He was telling his story, with his wife, at a feminist open mic. So why is Beattie getting all the credit, and why now?

Beattie is the first pregnant man most people will ever meet. He’s the guy in People magazine right now looking preggers and hunky, and the guy who was on The Oprah Winfrey Show last week. And it makes sense that he’s the first wonder of tranny obstetrics to hit the spotlight. He’s a nice, small-town Oregon boy, married for five years to a nice, small-town lady, and his full beard and muscles make it quite obvious that he’s a dude. In other words: he’s not a freak from a freaky city like San Francisco. He is, as they say in the mainstream media, relatable.

And he’s playing his poster boy role perfectly. On Oprah, you could tell he was a friendly, shy person (albeit with a black belt in karate). Visibly nervous, obviously proud as hell of his wife and soon-to-be-born daughter, he didn’t try to make a political statement or lecture anybody about gender binaries being stupid. He had a hard time explaining why he had become a man, too. Often when Oprah asked pointed questions he would shrug and say, "It’s hard to explain." Exactly like a dude to be sort of inarticulate about his own dudeness. So another part of his appeal to the mainstream media is that he fits gender stereotypes.

Plus, he’s the guy every woman wants to marry. Not only is he cute and happy to build things around the house, he’s willing to have your baby for you too. As Beattie’s wife said to Oprah with a grin, "What woman wouldn’t want her husband to get pregnant?"

So we know the answers to the "Why Beattie?" part. Every new minority needs a friendly, relatable poster child: lesbians have Ellen, and I suppose you could say mixed-race people have Barack Obama. The real question is: why now? Or even: can it happen now?

In some ways, those are the same questions people are asking about a possible Obama presidency. Can the majority of people in the United States accept a mixed-race guy in a role previously reserved for white dudes? To return to the issue of Beattie, can the majority accept a man taking on a role (pregnant dad) they’d never contemplated before, except when watching a bad Arnold Schwarzenegger sci-fi comedy called Junior?

I think they can, but not for the same reasons they might accept Obama. Beattie is not a political creation like Obama — he’s the creation of medical technology, pure and simple. Hormones and surgery made him male. Artificial insemination made him pregnant. There would have been no way to accept Beattie 10 years ago because he literally could not have existed. But contemporary medical technology has given us a chance.

Considering Beattie in that context — as the release version of a new kind of biotech-enabled man — makes it clear why this is happening now.

Of course, social changes have a lot to do with his emergence into the public spotlight. Gender roles are shifting, and it’s often hard to say what it means anymore to be a "real man" or a "real woman." The vast majority of people may have a common-sense definition of masculine and feminine, but even those definitions have changed a lot over the past 50 years.

So maybe medical technology is just now catching up with cultural shifts, or maybe cultural shifts are pushing us to use technologies we’ve had for a while in new gender-blurring ways. All I know is that biotechnology is making theories of gender fluidity concrete, making ideas into flesh. And we’re seeing a pattern that always emerges when we’re right on the edge of accepting a big social change. First, the ideas turn into something real that people can touch — or, in the case of Beattie, talk to. And then comes the next phase. Whatever that may be.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who has been a trannychaser since the second grade.

English is dead


› annalee@techsploitation.com

TECHSPLOITATION By the time English truly is a dominant language on the planet, it will no longer be English. Instead, say a group of linguists interviewed in a recent article by Michael Erard in New Scientist, the language will fragment into many mutually unintelligible dialects. Still, some underlying documents will supply the grammatical glue for these diverse Englishes, the way Koranic Arabic does for the world’s diverse Arabic spinoff tongues. English-speakers of the future will be united in their understanding of a standard English supplied by technical manuals and Internet media.

People like me, native English speakers, are heading to the ashcan of history. By 2010, estimates language researcher David Graddol, 2 billion people on the planet will be communicating in English — but only 350 million will be native speakers. By 2020, native speakers will have diminished to 300 million. My American English, which I grew up speaking in an accent that matched what I heard on National Public Radio and 60 Minutes, is already difficult for many English-speakers to understand.

Hence the rise of Internet English. This is the simple English of technical manuals and message boards — full of slang and technical terminology, but surprisingly free of strange idioms. It’s usually also free of the more cumbersome and weird aspects of English grammar.

For example, a future speaker of English would be unlikely to understand the peculiar way in which I express the past tense: "I walked to the store." Adding a couple of letters (–ed) to the end of a verb to say that I did something in the past? Weird. Hard to hear; hard to say. It’s much more comprehensible to say: "I walk to the store yesterday." And indeed, that’s how many non-native speakers already say it. It’s also the way many popular languages, such as the various dialects of Chinese, express tense. The whole practice of changing the meaning of a word by adding barely audible extra letters — well, that’s just not going to last.

When I read about the way English is changing and fragmenting, it has the opposite effect on me than what you might expect. Although I am the daughter and granddaughter of English teachers and spent many years in an English department earning a PhD, I relish the prospect of my language changing and becoming incomprehensible to me. Maybe that’s because I spent a year learning to read Old English, the dominant form of English spoken 1,000 years ago, and I realize how much my language has already changed.

But my glee in the destruction of my own spoken language isn’t entirely inspired by knowing language history. It’s because I want English to reflect the lives of the people who speak it. I want English to be a communications tool — like the Internet, a thing that isn’t an end in itself but a means to one. Once we all acknowledge that there are many correct Englishes, and not just the Queen’s English or Terry Gross’s English, things will be a lot better for everybody.

I’ll admit sometimes I feel a little sad when my pal from Japan doesn’t get my double entendres or idiomatic jokes. I like to play with language, and it’s hard to be quite so ludic when language is a tool and nothing more. But that loss of English play is more than made up for by the cross-cultural play that becomes possible in its stead, jokes about kaiju and non-native snipes at native customs. (My favorite: said Japanese pal is bemused by American Christianity, and one day exclaimed in frustration, "God, Godder, Goddest!")

For those of us who spend most of our days communicating via the Internet, using language as the top layer in a technological infrastructure that unites many cultures, the Englishes of the future are already here. In some ways they make a once-uniform language less intelligible. In other ways, they make us all more intelligible to one another.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd whose English is obsolete.

Hooker science


TECHSPLOITATION The outrage over former New York governor Eliot Spitzer hiring an A-list hooker makes me feel like throwing a gigantic, crippling pile of superheavy biology and economics books at everyone in the United States and possibly the world. Are we still so Victorian in our thinking that we think it’s bad for somebody to pay large amounts of money for a few hours of skin-time with a professional? Have we not learned enough at this point about psychology and neuroscience to understand that a roll in the sheets is just a fun, chemical fizz for our brains and that it means nothing about ethics and morality?

The sad fact is that we have learned all that stuff, and yet most people still believe paying money for sex is the equivalent of killing babies on the moral report card. And yet nobody bothers to ask why, or to investigate past the sensational headlines. As far as I’m concerned, the one unethical thing Spitzer did was to hire a sex worker after prosecuting several prostitution rings. That’s hypocritical of him, and undermines my faith in him as a politician.

But let’s say Spitzer hadn’t prosecuted so-called sex crimes before, and all he was doing was hiring a lady for some sex. Here is what I don’t get: why is this bad? On the scale of things politicians can do – from sending huge numbers of young people to be killed in other countries to cutting programs aimed at helping foster kids get lunch money – hiring a sex worker is peanuts. It’s a personal choice! It’s not like Spitzer was issuing a statewide policy of mandatory hookers for everybody.

What really boggles the mind is the way so-called liberal media like National Public Radio and the New York Times have been attacking Spitzer’s morals as much as the conservative Fox News types have. In some cases, they’ve attacked him more. The reasons given are always the same: sex work is abusive to women (male prostitutes don’t exist?), and being paid for sex is inherently degrading.

Let’s look inside one of those heavy economics books that I just beat you with and examine these assumptions for a minute, OK? Every possible kind of human act has been commodified and turned into a job under capitalism. That means people are legally paid to clean up one another’s poop, paid to wash one another’s naked bodies, paid to fry food all day, paid to work in toxic mines, paid to clean toilets, paid to wash and dress dead naked bodies, and paid to clean the brains off walls in crime scenes. My point is, you can earn money doing every possible degrading or disgusting thing on earth.

And yet, most people don’t think it’s immoral to wipe somebody else’s bum or to fry food all day, even though both jobs could truthfully be described as inherently degrading. They say, "Gee that’s a tough job." And then they pay the people who do those jobs minimum wage.

The sex worker Spitzer visited, on the other hand, was paid handsomely for her tough job. The New York Times, in its mission to invade this woman’s privacy (though in what one must suppose is a nonexploitative way), reported that she was a midrange worker at her agency who pulled in between $1,000 and $2,000 per job. She wasn’t working for minimum wage; she wasn’t forced to inhale toxic fumes that would destroy her chances of having a nonmutant baby. She was being paid a middle-class salary to have sex. Sure, it might be an icky job, in the same way cleaning up barf in a hospital can be icky. But was she being economically exploited? Probably a hell of a lot less than the hospital janitor mopping up vomit after you.

Sure, there are hookers who are exploited and who have miserable lives. There are people who are exploited and miserable in a lot of jobs. But the misery is circumstantial: not all hookers are exploited, just as not all hospital workers are exploited. It’s basic labor economics, people.

Audacia Ray, former sex worker and editor of the sex worker magazine $pread, has pointed out that the public doesn’t even seem to understand what exploitation really means. The woman who did sex work for Spitzer has had her picture and personal history splattered all over the media in an incredibly insulting way. Nobody seems to realize she’s being degraded far more now than she ever was when Spitzer was her client. And she’s not getting any retirement savings out of it, either.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who once hired a prostitute for a few hundred bucks and had a pretty good time.

The users are revolting


› annalee@techsploitation.com

TECHSPLOITATION One of the social traditions that’s carried over quite nicely from communities in the real world to communities online is revolution. You’ve got many kinds of revolt taking place online in places where people gather, from tiny forums devoted to sewing, to massive Web sites like Digg.com devoted to sharing news stories.

And while they may be virtual, the protests that break out in these digital communities have much in common with the ones that raise a ruckus in front of government buildings: they range from the deadly serious to the theatrically symbolic.

How can a bunch of people doing something on a Web site really be as disruptive or revolutionary as those carrying signs, yelling, and storming the gates of power in the real world? By way of an answer, let’s consider three kinds of social protest that have taken place in the vast Digg community.

According to Internet analysis firm ComScore, Digg has 6 million visitors per month who come to read news stories rounded up from all over the Web. About half of those visitors log in as users to vote on which stories are the most important: the ones with the most votes are deemed "popular," and make it to Digg’s front page to be seen by millions. A smaller number of people on Digg — about 10 percent — choose to become submitters of stories, searching the Web for interesting things and posting them to be voted on — in categories that range from politics to health. Digg’s developers use a secret-sauce algorithm to determine at what point a story has received enough votes to make it popular and worthy of front-page placement.
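Since Digg's promotion algorithm is secret, any code can only guess at it. The sketch below is purely hypothetical: the `promotion_score` function, the voter-diversity penalty, and the threshold are all invented here to illustrate the general idea of a vote-counting rule that tries to resist gaming.

```python
# Purely hypothetical sketch of a front-page promotion rule.
# Digg's real algorithm was never published; the scoring scheme,
# diversity penalty, and threshold below are all invented.

def promotion_score(votes):
    """votes: list of user ids that dugg a story."""
    total = len(votes)
    unique = len(set(votes))
    # Penalize stories whose votes come from a small clique:
    # 50 votes from 50 different users should outrank 50 votes
    # cast by the same handful of accounts.
    diversity = unique / total if total else 0.0
    return total * diversity

PROMOTE_THRESHOLD = 40  # invented cutoff

def is_popular(votes):
    return promotion_score(votes) >= PROMOTE_THRESHOLD

# A broad grassroots story clears the bar...
assert is_popular([f"user{i}" for i in range(50)])
# ...while a bloc of repeat voters does not.
assert not is_popular(["shill1", "shill2"] * 25)
```

Weighting by voter diversity is one plausible defense against exactly the vote-buying scheme described below, which is why a site would keep the real formula secret.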

You can imagine that a community like this one, devoted to the idea of democratically generated news and controlled by a secret algorithm, might be prone to controversy. And it is.

Two years ago, I was involved in what I would consider one type of user revolt on Digg. It was a prank that I pulled off with the help of an anonymous group called User/Submitter. The group’s goal was to reveal how easy Digg makes it for corrupt people to buy votes and get free publicity on Digg’s front page. My goal was to see if U/S really could get something on the front page by bribing Digg users with my cash. So I created a really dumb blog, paid a couple hundred dollars to U/S, and discovered that you could indeed buy your way to the front page. Think of it as an anarchist prank designed to show flaws in the so-called democracy of the system.

But there have also been massive grassroots protests on Digg, one of which I wrote about in a column more than a year ago. Thousands of Digg users posted a secret code, called the Advanced Access Content System key, that could be used as part of a scheme to unlock the encryption on high-definition DVDs. The goal was to protest the fact that HD DVDs could only be played in "authorized" players chosen by Hollywood studios, which forced people interested in HD to replace their DVD players with new devices. It was a consumer protest, essentially, and a very popular one. Hollywood companies sent Digg cease-and-desist letters requesting that the site take down the AACS key whenever it was posted, but too many users had posted it. There was no way to stop the grassroots protest. Digg’s founders gave up, told the community to post the AACS key to their hearts’ content, and swore they would fight the studios to the end if they got sued (no suit ever materialized).

Another kind of protest that’s occurred on Digg came just last month, and it was a small-scale rebellion among the people who submit stories and are therefore Digg’s de facto editors. After Digg developers changed the site’s algorithm so that it was harder to make stories popular, a group of Digg submitters sent a letter to Digg’s founders saying they would stop using the site if the algorithm wasn’t fixed. You could compare this protest to publishing an editorial in a newspaper — it reflected grassroots sentiment but was written by a small minority of high-profile individuals. Though the company didn’t change its algorithm, this protest did result in the creation of town hall meetings where users could ask questions of Digg developers and air their grievances.

Each of these kinds of protests has its correlates in the real world: the symbolic prank, the grassroots protest, and the angry editorial. So forgive me if I laugh at people who say the Internet doesn’t foster community. Not only is there a community there, but it’s full of revolutionaries who fight for freedom of expression.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who wants a revolution.

War on science


› annalee@techsploitation.com

TECHSPLOITATION Over the past eight years, the lives of millions of people in the United States and beyond have been endangered by the US government. No, I’m not talking about the war in Iraq. I’m talking about the quiet, systematic war the government has been waging against science.

You may have heard about gross examples of the government censoring scientific documents. For example, it was widely reported last year that a government regulatory group excised at least half the statements Centers for Disease Control director Julie Gerberding was set to make at a congressional hearing about how climate change will affect public health. You may also have heard about the scandal in 2004 when a whistleblower at the Environmental Protection Agency revealed that five of the seven members on a panel of "independent experts" stood to gain financially from shutting down a scientific investigation of a controversial mining technique called "hydraulic fracturing." The panel claimed that in its expert opinion, the technique didn’t require regulation, despite many scientists’ concerns that it might pollute groundwater.

But these are the stories that hit the headlines. There are hundreds more where they came from, and many of them are documented meticulously in a study released earlier this month by the Union of Concerned Scientists (UCS) called "Federal Science and the Public Good." (Download it for free online at http://www.ucsusa.org/scientific_integrity/restoring/federal-science.html.)

The UCS report documents, in chilling detail, how agencies have fired scientists who disagreed with government policies. For example, in 2003, experts in nuclear physics were dismissed from a panel within the National Nuclear Security Administration because some of them had published about how the George W. Bush administration’s beloved "bunker buster" weapons weren’t very effective. And scientists who spoke out against the administration’s stem cell policy were booted from the President’s Council on Bioethics.

Worse, the government has falsified scientific studies to bolster its policies and undergird its ideological positions. Perhaps the most egregious example of this was when the EPA lied outright to Americans that the air around ground zero directly after Sept. 11 was safe to breathe. In fact, according to the UCS report, the EPA made this statement without even testing the air. As a result, the authors of the report write, "thousands of rescue workers now plagued by crippling lung ailments continue to feel the impact of this public deception." There’s also an example of the Food and Drug Administration inventing a fake study to support its decision to approve the drug Ketek, among many other such cases.

Most intriguing, though, is the UCS report’s suggestion that many federal regulatory agencies may in fact be breaking the law by cutting real science out of government policy decisions. Both the Clean Air Act and the Endangered Species Act require the EPA and the US Fish and Wildlife Service to base their decisions on "the best scientific data available." And yet the UCS has documented countless examples of both agencies, as well as others, refusing to take into account the latest research on climate change, animal populations, and systems biology.

It would be intriguing to see a lawsuit based on the fact that these agencies aren’t using "the best scientific data available," but the UCS doesn’t suggest that as a remedy. Instead, the report concludes by looking to the future of federally funded science, suggesting ways the next presidential administration might remedy the failures of the last.

First on the agenda would be to bring a scientific adviser back into the cabinet. (Bush dismissed this adviser from the cabinet.) The UCS also suggests that the next president repeal Executive Order 13422, which gave an obscure regulatory body known as the Office of Management and Budget a lot of control over how regulatory agencies handle science. Currently the OMB has the power to revise the findings of scientists within those agencies, despite the fact that the OMB has little to no scientific expertise. And finally, the UCS asks that the government extend protections to whistleblowers within the government who come forward to report on the very kinds of abuses the UCS has reported (often with the help of whistleblowers who lost their jobs or worse).

Hopefully the next presidential administration will relegate this report to the status of historical document instead of a warning about our future. Science is crucial to the management of the nation, and without it we’re no better than a medieval kingdom.

Annalee Newitz (annalee@techsploitation.com) is a surly media nerd who is fifteen feet tall, and she has a federal agency science report that proves it.

You cannot afford Mars


› annalee@techsploitation.com

TECHSPLOITATION Mars used to teem with life, but now it’s a dead world. I’m not referring to actual Martian history, which we still know very little about. I’m talking about the way humans used to think of Mars and how they think about it now. As recently as the 1950s, Mars was packed with scary, incomprehensible creatures and hulking buildings set in a web of gushing canals. But now it’s a cold, dry land full of rocks that are fascinating mainly due to their extraterrestrial nature. We even have two robots who live on Mars, sending us back pictures of mile after mile of beautiful emptiness that looks like the Grand Canyon or some other national park whose ecosystem is so fragile that tourism has already half-destroyed it.

Mars has, in short, been demystified. It’s not an exotic source of threat or imagination; it’s a place to which President George W. Bush has vowed to send humans one day. And on Feb. 12 and 13, a conference convened at Stanford University to discuss the feasibility of a United States–led mission that would send humans to the Red Planet. The attendees, mostly scientists and public policy types, were all pragmatism.

Reuters reports that consensus at the conference was that the National Aeronautics and Space Administration would need an additional $3 billion per year to plan for a Mars mission that would leave in the 2030s. (NASA’s current budget is $17.3 billion per year.) So the question geeks like to ask one another — "What would you take with you to colonize another planet?" — now has a depressing and very non-science-fictional answer when it comes to Mars. It’s $75 billion, paid out over the next 25 years.

But just to put things in perspective, a congressional analysis done in 2006 pegged the cost of the US war in Iraq at $2 billion per week. Last year the total amount of money spent on the war surpassed $1.2 trillion.
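The back-of-the-envelope math behind those figures is easy to check: the $75 billion total is just the extra $3 billion per year paid out over 25 years, and at the cited 2006 rate of $2 billion per week, a single year of Iraq war spending would more than cover the entire Mars program. A quick sketch using only the numbers quoted above:

```python
# Back-of-the-envelope check of the figures cited in the column.
mars_extra_per_year = 3_000_000_000    # extra NASA funding needed per year
payout_years = 25                      # paid out over the next 25 years
mars_total = mars_extra_per_year * payout_years   # the $75 billion quoted

war_per_week = 2_000_000_000           # 2006 congressional estimate for Iraq
war_per_year = war_per_week * 52       # roughly $104 billion per year

# One year of war spending more than covers the whole 25-year Mars bill.
assert mars_total == 75_000_000_000
assert war_per_year > mars_total
```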

So it’s a hell of a lot cheaper to colonize Mars than it is to colonize our own planet. Still, it’s too expensive. US aerospace geeks are hoping that we can turn to Europe, Russia, and perhaps Asia to collaborate on a Mars mission because nobody expects that NASA will ever get even a sliver of the budget that the US war machine does.

There is a tidy way to wrap this up into a lesson about how we’re willing to spend more on destroying life as we know it than extending life to the stars. About how we’d rather burn cash on war than healthy exploration of other planets. But that’s not the whole story.

Let’s say the US government decides to leave Iraq alone and spends $2 billion per week on a mission to Mars instead. A mission that would culminate in a human colony. We could follow a plan somewhat like the one outlined in Kim Stanley Robinson’s book Red Mars (Bantam, 1993), in which we first send autonomous machines to create a base and begin some crude terraforming. And then we send a small group of colonists, to be followed by bigger and bigger waves of colonists, who eventually live in domes. And who wage wars and rape the Martian environment.

I think the problem with colonizing Mars is that it would look all too much like colonizing Earth. We might even be killing a fragile ecosystem that we’re not yet aware of. But most of us haven’t demystified Mars enough to realize that. Sure, we know it’s not packed with cool aliens, but we haven’t realized that hunkering down on another planet isn’t going to solve our basic problems as humans. On a planet, given the chance, we’ll exploit all natural resources, including one another.

It’s not that I’m against a mission to Mars. I just think getting the money for that mission is really the least of our problems. What I’m worried about is what humans tend to do with money when they aim it at something, whether that’s a nation, a people, or a planet. Maybe it’s better for Mars that we can’t afford to go there.

Annalee Newitz is a surly media nerd who would rather live on an artificial halo world than a colonized planet.

Information dystopia


TECHSPLOITATION I was raised on the idea that the information age would usher in a democratic, communication-based utopia, but recently I was offered at least two object lessons in why that particular dream is a lie.

First, a dead surveillance satellite, one roughly the size of a bus, fell out of orbit and into a collision course with Earth. It will likely do no damage, so don’t worry about being crushed to death by flying chunks of the National Security Agency budget. The important part is that nobody knew when the satellite died. Maybe a year ago? Maybe a few days? A rep from the National Security Council would only say, "Appropriate government agencies are monitoring the situation."

Is this our info utopia, wherein we literally lose track of bus-size shit flying through space over our heads? I mean, how many surveillance satellites do we have? It’s not like I love the techno-surveillance state, but it is a little shocking that the SIGINT nerds who run it are so out of touch that they can’t even keep track of their orbiting spy gear. Still, it’s hard to be too upset when Big Brother isn’t watching.

But that satellite could just as easily have been a forgotten communications satellite dive-bombing our atmosphere. And that would have sucked, especially since last week’s mega Internet outage across huge parts of Africa, the Middle East, and Asia didn’t bring down the global economy largely because people had satellite access to the Internet. This Internet outage, which took millions of people (and a few countries) off-line, happened when two 17,000-mile underwater fiber-optic cables running between Japan and Europe were accidentally cut. No one is quite sure how they were severed, but it was most likely due to human error — an anchor was probably dropped in the wrong place.

And so big chunks of Dubai went dark, as did many Southeast Asian countries. Businesses couldn’t operate; people couldn’t communicate. The people and businesses that were able to keep running were by and large the ones that didn’t depend on cheap Internet services that use only one or two cables to route their traffic. It’s cheaper to rent time on one cable, but if that cable is cut, you lose everything. Most customers don’t research how their Internet service providers route Internet traffic across the Asian continent — or across the Pacific Ocean — so they don’t realize their communications could be disrupted, possibly for weeks, if some drunken sailor drops anchor in the wrong spot.
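The single-cable risk described above is just basic redundancy math: if each cable independently has some small chance of being severed over a given period, routing over two independent cables roughly squares that failure probability. A minimal sketch, with made-up illustrative probabilities rather than real cable-failure statistics:

```python
def outage_probability(p_cut: float, independent_routes: int) -> float:
    """Chance that every route fails at once, assuming each cable is
    severed independently with probability p_cut over some period."""
    return p_cut ** independent_routes

# Illustrative numbers only: say any one cable has a 5% chance of being cut.
one_cable = outage_probability(0.05, 1)    # cheap provider, single cable
two_cables = outage_probability(0.05, 2)   # provider paying for a redundant route
assert two_cables < one_cable              # redundancy sharply cuts outage risk
```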

In fact, few of us anywhere in the world consider the fact that our info utopia is a fragile thing based on networks that are both material and vulnerable. We think of the Internet as a world of ideas, a place "out there," unburdened by physical constraints. Even if you wanted to research which physical cables your ISP uses to route your traffic, it would be very difficult to do without a strong technical background and the help of the North American Network Operators’ Group list, an e-mail list for high-level network administrators.

So why do a crashing spy satellite and a partly dark Internet mean we’ve entered the age of information dystopia? Quite simply, they are signs that our brave new infrastructure is failing around us even as we claim that it offers a shining path to the future. It’s as if the future is breaking down before we get a chance to realize its potential.

But the information age doesn’t have to end this way, in a world where can-and-string network jokes aren’t so funny anymore. There are a few simple things we could do. We could help consumers better understand what happens when they buy Internet access by showing them what routes their traffic might take and giving them realistic statistics about possible outages. People could then make better choices about what services to buy. And so could telcos and nations.

Why shouldn’t we have solid research on which ISPs are most likely to suffer the kind of network outages we just witnessed from the severing of those two cables? Consumer groups could undertake this research. Or, since developing nations suffer more, perhaps the United Nations might want to conduct the investigation as a matter of Internet governance. We know where car traffic and sea traffic go. Why don’t we know where Internet traffic goes?

Another thing we could do to stop the information dystopia is to cut down on spy satellites, but that, as they say, is another column.

Annalee Newitz is a surly media nerd who is investing in semaphore communication networks.

Polite message from the surveillance state


› annalee@techsploitation.com

TECHSPLOITATION Say what you want about Google being an evil corporate overlord that steals all of your private data, turns it into info-mulch, and then injects it into the technoslaves to keep them drugged and helpless. There are still some good things about the company. For example, Google’s IM program, Google Talk, sends you a warning message alerting you when the person on the other end of your chat is recording your chat session.

Just the other day I was chatting with somebody about something slightly personal and noticed that she’d suddenly turned on Record for our chat. I knew everything I was saying was being logged and filed in her Gmail. In this case I wasn’t too concerned. For one thing, I wasn’t saying anything I’d regret seeing in print. I’m used to the idea that anything I say on chat might be recorded and logged.

What was different about this experience was that Google warned me first — told me point-blank that I was basically under surveillance from the Google server, which would automatically log and save that conversation. I appreciated that. It meant I could opt out of the conversation and preserve my privacy. It also meant that other people using Gtalk, who might not have had the expectation that all of their chat sessions might be recorded, would be enlightened.

It also reminded me forcefully that Google is a far more polite and privacy-concerned evil overlord than the United States government.

Right now members of Congress are trying to pass a law that would grant immunity to large telcos like AT&T that have been spying on their customers’ private phone conversations and passing along what they’ve learned to the National Security Agency. The law, called the Protect America Act, would allow telephone and Internet providers to hand over all private data on their networks to the government — without notifying their customers and without any court supervision of what amounts to mass wiretapping.

Last year the Electronic Frontier Foundation sued AT&T for violating the Fourth Amendment when a whistle-blower at AT&T revealed that the company was handing over private information to the NSA without warrants. That case has been working its way through the courts, and making some headway; in fact, it was starting to look like AT&T would be forced to pay damages to its customers for violating their rights. But the Protect America Act would stop this court case in its tracks by granting retroactive immunity to AT&T and any other company that spied on people for the NSA without warrants.

The whole situation is insane. First, it’s outrageous that telcos would illegally hand over their private customer data to the government. And second, it’s even more outrageous that when its scheme was discovered, the government tried to pass a law making it retroactively legal for AT&T to have broken one of the most fundamental of our civil rights: protection of our private data from the government.

Imagine what would happen if the phone and Internet systems in our country had the same warnings on them that Gtalk has. Every time you picked up the phone to make a call or logged on to the Internet, you’d get a helpful little message: "Warning: the government is recording everything that you are saying and doing right now." Holy crap.

The good news is that it’s not too late. The Protect America Act must pass both houses of Congress to become law, so you can still alert your local congress critters in the House that you don’t want retroactive immunity for telcos that are logging your private conversations for the NSA. Find out more at stopthespying.org.

And remember, everything you say and do is being logged. This polite message has been brought to you by the surveillance state.

Annalee Newitz is a surly media nerd who yells "Fuck you!" into her phone as often as she can — you know, just to let the NSA know how she really feels.

Let’s eat clone


› annalee@techsploitation.com

TECHSPLOITATION I’m looking forward to eating my first clone hamburger. I mean, why not? I eat cloned plants all the time, and I admire cloned flowers. Clone meat seems like the next logical step. And yet I can’t tell you how many bizarre conversations I’ve had with people over the past few days about the apparently controversial move by the Food and Drug Administration this month to approve meat from cloned cows as a foodstuff.

People are really freaked out by eating the meat from a clone. They want it labeled so they can choose to buy "naturally reproduced" meat, by which I suppose they mean cows that are the result of forced breeding, that have been raised in stinky, crowded pens where they eat grain mixed with poop and bubblegum. I mean, I can understand not wanting to eat meat at all — that makes sense. Most farms abuse the hell out of their meat and poultry, and the situation is ugly enough to make you lose your appetite for steak forever.

But cloning? Not so much. It’s just a duplicate cow, people. Nobody has added anything weird to it, like snake genes that will make it spit acid. And if the cloned cow is treated well, allowed to roam free and eat decent food, I don’t see what the big deal is. Cloning has been used to reproduce tasty breeds of vegetables and fruit for centuries (using cuttings), and it’s not likely that animal cloning is going to be any more dangerous.

At least, it won’t be more dangerous for people eating the resulting meat. The clones may have crappy lives — in fact, they probably will, since clones tend to be unhealthier than nonclones anyway. And life in a factory farm isn’t exactly healthy either.

Meanwhile, as people chow down on clone steaks or steaks made from the offspring of clones (what do you call them? Paraclones? Miniclones?), a fertility researcher and a biotech company investor are busy cloning themselves. This month’s hottest clone news wasn’t anything to do with steak. It was the quiet announcement, in the journal Stem Cells, that a company called Stemagen had created viable human embryos from adult skin cells. One of the clones was of Samuel Wood, a guy who runs a fertility clinic next door to Stemagen. Another was of an anonymous investor in Stemagen.

Stemagen claims it won’t be turning these embryos into humans anytime soon, even though the clone embryos they wound up with were as viable as any embryo they might implant in a woman undergoing in vitro fertilization treatments. Of course, the company could just be covering its ass: human reproduction through cloning is illegal in the United States. Still, people desperate for children might be willing to try cloning at, say, a fertility clinic next door to a biotech company that does cloning. They would certainly keep their mouths shut about their illegal baby, at least if they wanted to keep it.

Just as I am perplexed by the uproar over eating the meat of animal clones, I’m perplexed by people’s discomfort about breeding human clones. Certainly there are ethical issues with creating a human being as part of an experimental procedure. But that doesn’t seem like the main objection people are raising. Mostly they’re saying that there’s something sacrilegious about clones, or something creepy about making babies that don’t require any sperm. (Stemagen’s method involves taking DNA from a skin cell and popping it into an egg to make an embryo — no men are required for this procedure.)

Clones are so scary that one of the best sci-fi comic book series of the past few years — Y: The Last Man (Vertigo), by Brian K. Vaughan — takes as its premise the idea that a woman cloning herself sets off a chain of events that kills every man on Earth.

I think the best way to end this hysteria is to start labeling everything that’s cloned, from the tomatoes you ate last week to the roses you bought your sweetie on Saturday. Once everyone realizes they have clones in their homes and bellies already, it might make them a lot less fearful when they finally meet a human clone. "Oh yes," they can say. "I’ve eaten something like that."

Annalee Newitz is a surly media nerd who would rather eat a cloned cow than a factory-farmed anything. Also, she isn’t interested in eating cloned human babies, no matter how cute they are.