Annalee Newitz

Kids safer online!

TECHSPLOITATION There’s a horrifying new menace to children that’s never existed before. Experts estimate that 75 to 90 percent of pornography winds up in the hands of children due to novel technologies and high-speed distribution networks. That means today’s youths are seeing more images of perversion than ever before in the history of the world.

What are the "new technologies" and "distribution networks" that display so much porno for up to 90 percent of kids? I’ll give you one guess. Nope, you’re wrong; it’s not the Internets. It’s print.

The year is 1964, and I’m getting my data from financier Charles Keating. He had just formed Citizens for Decent Literature, an antiporn group whose sole contribution to the world appears to have been an educational movie called Perversion for Profit. Narrated by TV anchor George Putnam, the flick is an exposé of the way "high-speed presses, rapid transit, and mass distribution" created a hitherto unknown situation in which kids could "accidentally" be exposed to porno at the local drugstore or bus station magazine rack. Among the dangers society had to confront as a result of this situation were "stimulated" youths running wild, thinking it was OK to rape women, and turning into homosexuals after just a few peeks at the goods in MANifique magazine.

A lot of the movie — which you can watch for yourself on YouTube — is devoted to exploring every form of depravity available in print at the time. We’re treated to images of lurid paperbacks, naughty magazines, and perverted pamphlets. At one point, Putnam even does a dramatic reading from one of the books to emphasize their violence. Then we get to see pictures of women in bondage from early BDSM zines.

But the basic point of this documentary isn’t to demonstrate that Keating and his buddies seem to have had an encyclopedic knowledge of smut. Nor is the point that smut has gotten worse. Putnam admits in the film that "there has always been perversion." Instead, the movie’s emphasis is on how new technologies enable the distribution of smut more widely, especially into the hands of children. In this way, Keating’s hysterical little film is nearly a perfect replica of the kinds of rhetoric we hear today about the dangers of the Web.

Consider, for example, a University of New Hampshire study that got a lot of play earlier this year by claiming that 42 percent of kids between the ages of 10 and 17 had been accidentally exposed to pornography on the Web during the previous year. The study also claimed that 4 percent of people in the same age group were asked to post erotic pictures of themselves online. News coverage of the study emphasized how these numbers were higher than before, and most implied that the Web itself was to blame.

But as Perversion for Profit attests, people have been freaking out about how new distribution networks bring pornography to children for nearly half a century. Today’s cyberteens aren’t the first to go hunting for naughty bits using the latest high-speed thingamajig either; back in the day, we had fast-printing presses instead of zoomy network connections.

It’s easy to forget history when you’re thinking about the brave new technologies of today. Yet if Keating’s statistics are to be believed, the number of children exposed to porn was far greater in 1964 than it is today. Perhaps the Web has actually made it harder for children to find pornography. After all, when their grandparents were growing up, anybody could just walk to the corner store and browse the paperbacks for smut. Now you have to know how to turn off Google’s safe search and probably steal your parents’ credit card to boot.

And yet Fox News is never going to run a story under the headline "Internet Means Kids See Less Pornography Than Ever Before." It may be the truth, but you can only sell ads if there’s more sex — not less. *

Annalee Newitz is a surly media nerd who learned about sex before she learned about the Internet.

Pain and fun

TECHSPLOITATION A couple of economic researchers have proven via scientific experimentation something that artists have known for millennia: people can feel pain and have fun at the same time. At last, we have a scientific theory that explains why the torture-tastic movie Saw is so popular. Not to mention the writings of Franz Kafka.

Eduardo Andrade and Joel Cohen, both business professors interested in consumer behavior, wanted to know why people are willing to plunk down money for what they called "negative feelings," the sensations of disgust and nastiness that arise during hideous but financially successful flicks like Hostel, the Jason and Freddy franchises, and The Silence of the Lambs. It’s a good question, especially if you’re one of those business types who want to peddle gore to the fake blood–loving masses. As a huge consumer of gore myself, I was immediately intrigued by the scholarly article Andrade and Cohen produced, which sums up four experiments they did with hapless undergraduates paid to watch bad horror movies and describe how this exercise made them feel. The researchers had two basic questions: Do audiences experience fear and pleasure at the same time while watching somebody get dismembered? If yes, how?

First, a word about the researchers’ methods. Let it be known that they did not display discerning taste in horror movies. As a connoisseur of the genre, I’d have made those students watch Hostel, with its shocking scenes of eyeball gouging. Or perhaps 28 Days Later, with its white-knuckle zombie chase scenes. But Andrade and Cohen picked the 1973 seen-it-so-many-times-it’s-no-longer-frightening flick The Exorcist and the craptastic, unscary 2004 version of ’Salem’s Lot. Hey guys, call me before you do the next round of experiments, OK?

Aesthetic choices aside, the results of these movie-watching experiments were intriguing. Students were shown "scary" clips from both films and asked to rate how they felt during and after watching. Previous scholars had suggested that people who enjoy horror movies have a reduced capacity to feel fear or have fun only when the yuck is over and they leave the theater. What Andrade and Cohen found, however, was that students who loved horror movies reported nearly the same levels of fear as students who avoided these movies. Plus the horror lovers reported having fun during the movies, not just afterward. So, as I said earlier, science uncovered what literary critics have known forever: ambivalent feelings are the shit.

Horror movies appeal because humans like feeling grossed out and entertained at the same time. There’s a payoff in coexperiencing two conflicting emotions.

But Andrade and Cohen are careful to explain that the fun of ambivalence doesn’t work for everyone and may not translate into real-world horrors. They suggest that people who enjoy the yuck-yay feeling of horror movies are masters at psychological framing and distancing. Horror viewers who have the most fun are also the ones who are most convinced that what they’re watching isn’t real. People who sympathize too much with tortured characters feel only horror. That also means horror fans who see real-life violence won’t get a kick out of it.

The researchers proved this point by showing people horror films alongside biographies of the actors playing the main characters, constantly reminding viewers that these were just movies and the "victims" were playing roles. Even viewers who normally avoid horror movies reported that they were a lot more comfortable and had some fun when they were reminded that the action was staged.

I would argue that Andrade and Cohen’s research into distancing is the key to understanding horror fans. Our pleasure in horror is not depraved — it is purely a function of our understanding that what we’re seeing isn’t real. This knowledge frees us to revel in the frisson of ambivalent feelings, which are the cornerstone of art both great and small. *

Annalee Newitz is a surly media nerd whose book about horror movies only involved experiments on herself.

Futures not taken

TECHSPLOITATION The future is a crowded graveyard, full of dead possibilities. Each headstone marks a timeline that never happened, and there’s something genuinely mournful about them. I get misty-eyed looking at century-old drawings of the zeppelin-crammed skyline over "tomorrow’s cities." It reminds me that the realities we think are just around the corner may die before they’re born.

A few weeks ago I was trolling YouTube and stumbled across a now-hilarious documentary from 1972, Future Shock, based on the 1970 futurist book of the same name by Alvin Toffler. The documentary focused on a few themes from the book and tarted them up by throwing in a lot of trippy effects and sticking in Orson Welles as a narrator.

As Welles intones ponderously about how fast the future is arriving, we learn that "someday soon" everybody will be linked via computers. Essentially, it was an extremely accurate prediction about Internet culture. Score one for old Toffler.

Things go tragically wrong when the documentary turns to biology. Very soon, Welles assures his audience, people will have complete control over the genome and drugs will cure everything from anxiety to aging. Through the wonders of pharmaceuticals, we’ll become a race of immortal super-humans. It sounds almost exactly like the kinds of crap that futurists say now, 37 years later. Singularity peddlers like futurist Ray Kurzweil and genomics robber baron Craig Venter are always crowing about how we’re just about to seize control over our genomes and live forever. So far we haven’t. But every generation dreams about it, hoping they’ll be the first humans to cheat death.

Some dreams of the future, however, shouldn’t outlast the generation that first conceived them. Suburbia is one of those dreams. In the fat post-war years of the 1940s and ’50s, it seemed like a great idea to build low-density housing to blanket the harsh desert landscapes of the Southwest. But now the green lawns of Southern California have become an environmental nightmare of water-sucking parasitism. Just think of the atrocious carbon footprint left behind when you lay pavement, wires, and pipes over a vast area so that nuclear families can each have huge yards and swimming pools instead of living intelligently in high-density green skyscrapers surrounded by organic farms.

Oh wait — I just gave away my own crazy futurist dreams, inspired by urban environmentalism. Today, many of us imagine that the future will be like the green city of Dongtan, an ecofriendly community being built outside Shanghai that will use recycled water, green building materials, and urban gardens, and will allow no cars within its limits. The hope is that Dongtan will have a teeny tiny carbon footprint and be a model of urban life for the future. Of course, that’s what suburbia was supposed to be too — a model of a good future life. No future is ever perfect.

Perhaps the saddest dead futures, though, are the ones whose end may mean the end of humanity. I suppose one could argue that the death of an environmentally conscious future is in that category. But what I’m talking about are past predictions that humans would colonize the moon and outer space. As the dream of a Mars colony withers and the idea of colonizing the moons of Saturn and Jupiter becomes more of a fantasy than ever before, I feel real despair.

Maybe my desperate hopes for space colonization are my version of Kurzweil’s prediction that one day we’ll take drugs that will make us immortal. Somehow, I think, if we had just diverted the global war machine into a space-colony machine sometime back in the 1930s, then everything would be all right. Today the planet wouldn’t be suffering from overpopulation, plague, and starvation. We’d all be spread out across the solar system, tending our terraforming machines and growing weird crops in the sands of Mars.

Of course, we might just be polluting every planet we touch and bringing our stupid dreams of conquering the genome to a bunch of poor nonhuman creatures with no defenses. But I still miss that future of outer-space colonies. I can’t help but think it would be better than the future we’ve got. *

Annalee Newitz is a surly media nerd whose Martian colony has a better space elevator than yours.

iPhone politics

TECHSPLOITATION The marketing maestros at Apple have turned the iPhone into the summer’s biggest consumer electronics blockbuster, and they didn’t even have to pay Michael Bay millions of bucks to write robot piss jokes to do it. Everybody’s talking about the damn things — of course the usual gizmo-obsessed pubs like Wired and PC Magazine are drooling all over it, but some unexpectedly political critics and fans have gotten into the mix.

The tech community made its annoyance at iPhone boosterism felt when hacker David Maynor announced that he’d found a bug in Safari (the iPhone’s Web browser) that would allow him to seize control of iPhones remotely. The Daily Show, which usually exhibits a modicum of geek savvy, blithely ignored tech criticisms and led off one episode last week with a breathy noncommentary on how the iPhone is the greatest thing ever. Then politicians started sounding off. Demos snarked at Republicans last week about the iPhone during a House subcommittee hearing on wireless innovation. Rep. Ed Markey (D-Mass.) told the committee that the iPhone was the "Hotel California" of mobiles because of an exclusive deal Apple cut with AT&T to provide network service for the multimedia devices. (Apparently Markey’s one big pop culture moment was to listen to the Eagles’ famous ’70s song about a hotel where "you can check out any time you like, but you can never leave.")

CNET commentator Declan McCullagh spoke the latent convictions of many libertarian nerds when he responded to Markey’s analogy: "Apple makes the iPhone. It has every right to sell it via only AT&T if it wishes…. More broadly, Apple has the right to [make] iPhones only available for purchase on the third Monday of the month in even-numbered zip codes if it chooses." Activist group Free Press responded to ideas like McCullagh’s by starting a "Free the iPhone" campaign (freetheiphone.org) designed to spur the Federal Communications Commission and Congress to consider passing regulations that would force vendors like Apple to make mobile phones interoperable with all phone network operators so that consumers could choose which carrier they want.

Meanwhile, digital freedom lovers have been up in arms over Apple’s many closed-door policies for the phone. Not only are the damn things locked into using AT&T as a carrier, but iPhones are also designed to prevent users from writing additional software for them. Nothing but Apple-approved software may run on the iPhone. That means people who want to play music on the iPhone will have the same problems they have with iTunes on the iPod — you can put as much music on the phone as you want, but you can’t transfer it to another device. Nor can you choose a secure browser over Safari, or an e-mail program of your choice. Even free-software activist Richard Stallman is pissed about the iPhone, and he’s a guy who rarely gives little toys from Apple a second thought.

So what’s the big deal? Why do people even want a $600 phone, and why has this luxury device for the pampered techie become such a hot political issue? I think the answer to the first question is easy: the iPhone is the first truly cool convergence phone that combines multimedia with multispectrum goodies like Bluetooth, wi-fi, and of course, a phone network. Who doesn’t wish to combine phones, iPods, and laptops into one nifty thing?

That’s where politics come in. In the United States we have a long history of government regulations on the phone network, as well as on what can plug into the phone network, so naturally the public wonders what the government is going to do with the iPhone. Especially when other components of the iPhone, such as its ability to play music, touch on another government-regulated area: copyright law. And then there’s another issue that few people have commented on, which is that Apple’s chosen carrier for the iPhone, AT&T, has a history of letting the government spy on its phone networks. So any way you slice it, the iPhone is subject to government.

The iPhone is political because it somehow manages to capture the essence of authoritarianism in its shiny little box. Totally locked down, it runs only preapproved software on a prechosen phone network that is subject to government surveillance. Long live the iPhone! Long live democracy! *

Annalee Newitz is a surly media nerd who thinks the iPhone’s telephone network makes surveillance as fun as iTunes made DRM.

“Transformers” without irony

TECHSPLOITATION There is absolutely nothing wrong with enjoying truck commercials. Enjoying truck commercials can even be a politically innocent act — it does not signify that you secretly lust after fossil fuels. Plus, there’s a payoff to admitting that such pleasures can be had guilt free: you can enjoy watching Michael Bay’s latest sci-fi actionfest, Transformers, on its own terms. If you’re one of the people who helped the flick earn more than $100 million during its opening week, you may not need my help. For those still fighting the urge to cheer for shiny trucks, I offer a few arguments to persuade you.

The first, most obvious case in favor of this movie is that it just looks neat. There are giant robots that turn into, among other things, SUVs, tanks, fighter planes, scorpion things, race cars, and yes, trucks with flames painted on the sides. It shouldn’t be too surprising that computer-generated imagery is the perfect tool for demonstrating how cars morph into robots. Haven’t you always wished that one day your boring old Prius would twist itself into a gigantic alien robot from the planet Cybertron?

Ah, Cybertron. This brings me to my next argument, which is that Transformers is one of those rare action movies about incredibly silly things that take those silly things dead seriously. No doubt you are as heartily sick of knee-jerk irony as the next chump who shelled out cash to see Ghost Rider (OK, so I liked Ghost Rider, but you know what I mean). There are no great actors in Transformers showing us how distanced they are from the trashy source material by "acting" à la Nicolas Cage. In Transformers, characters discussing the robots refer to them, with straight faces, as Optimus Prime and Megatron. The good guys are Autobots and the bad guys are Decepticons. They are all trying to find a giant, unexplained box called the All Spark. Nobody raises an eyebrow at the total goofiness of this scenario. The film’s straightforwardness is downright refreshing.

Like other kid-friendly action films, Transformers is low on bloodshed and high on "Wow, that’s cool!" Even the film’s worst bad guy, a government secret agent played with snarky relish by John Turturro, never kills anybody. Instead of murderous mayhem, the movie offers us rampaging teenage hormones, packing the dialogue with cute but groanworthy double entendres about asses and dicks and humping. Not since E.T. has a movie aimed at tweens been this honest about how kids really talk: there’s a lot of creative cursing, and main character Sam (Shia LaBeouf) spends the entire flick trying to snog his hot pal Mikaela (Megan Fox). Thank you, Michael Bay, for removing rampant death from the action-movie genre and replacing it with dumb masturbation jokes.

What truly surprised me about the movie was that Bay did a fairly good job updating the concept for the 2000s. The film’s plot hinges on something Sam is selling on eBay, and there are a few good jokes about how the Autobots learned English on the Web (surprisingly, this does not mean that they yell "LOL" or "OMG" all the time). I was deeply amused when the evil Decepticons hunt Sam down via his eBay listing, ambush him, then grab him in their giant metal fists so they can scream in his face, "Are you user LadiesMan217?"

Another way Bay updates the Transformers premise is by connecting the Decepticons with the Middle East. The film has this sort of murky, inexplicable opening sequence that takes place in what we’re told is "Qatar, Middle East," where good US soldiers encounter mean, scorpion-shaped Decepticons who smash the crap out of them. The Middle Eastern ’bots look bizarrely like improvised explosive devices come to life; made of scrap metal and old tires, they hide in the sand and strike at unwary troops who are trying to be nice to the native folks. This is possibly the only part of Transformers in which Bay attempts to grasp feebly at political relevance and make something other than a zoomy truck commercial. Of course, he fails miserably. If you want to enjoy this flick without guilt, you will have to ignore the whole Middle East issue. Of course, one could say the same thing about living in the United States. Maybe Bay has succeeded in pulling off some social commentary after all: welcome to the United States — ignore the Middle East stuff, but stay for the masturbation jokes and cool special effects. *

Annalee Newitz is a surly media nerd whose battle cry is "All hail Megatron!"

Never mind the steampunk

TECHSPLOITATION If someone were to hold a knife to your throat and ask what the aesthetic sensibilities of the computer age are, you’d probably babble something about the iPod and its curvy, candy-colored precursor, the iMac. You’d think of the typical PC laptop, dumb and square and black, and you’d wonder whether this question about aesthetics was actually a trick. Because there are no computer age aesthetics.

Of course I’m exaggerating. There are a million interesting designs for consumer electronics and computers, but most don’t call attention to themselves. Computer aesthetics say "I am functional" — even those of the iPod Shuffle, the colorful clip-on player that kids attach to the gold chains around their necks as techno-bling.

But your Gateway computer, with its stalwart rectangular tower, is not the last word in how technology can look. Think of the crazy dial phones from the 1920s, with their curlicues and shiny brass and polished wood handsets. Or recall early radios, with their curving wooden exteriors meant to look like fancy furniture. And if you really want to see some seriously decorated machines, just check out pictures of devices from the 19th century, when everything from radiators to dynamos was covered in filigree and iron flowers and stamped, embossed shiny crap. For the record, I fucking love embossed shiny crap.

I think the search for an over-the-top tech aesthetic is driving the current craze for steampunk, a design and fashion style that combines Victorian sensibilities with contemporary gizmos. The ideal steampunk device would probably be a coal-powered cyborg, such as the creatures found in the novels of British fantasist China Miéville. In the real world, one of the most popular steampunk tinkerers is Jake von Slatt, who recently rebuilt his desktop computer as a vision in brass, marble, and old typewriter parts. He even offers a step-by-step guide to making your own functioning steampunk computer on his Web site, the Steampunk Workshop. Whenever von Slatt produces a new creation — a telegraph sounder that taps out RSS feeds, for example — pictures of it are always wildly popular on social news site Digg and elsewhere on the Web. Geeks who might not know what the word aesthetic means are instinctively drawn to the way von Slatt has made artifice from functionality. I expect to see cheap, knockoff steampunk computers for sale any day now.

As steampunkish critic John Brownlee has pointed out in several articles on the topic, steampunk designers tend to reverse-engineer ordinary electronics — say, a computer keyboard — and enhance them with parts that look antique. The idea is not just to create machines whose beauty goes beyond functionality. It’s also, Brownlee contends, to recall an era when amateurs could contribute meaningfully to the development of science and technology. We live in a time when no single human being can fully comprehend the Windows operating system. No wonder we’re nostalgic for the days when beachcombers could be naturalists and tinkerers could invent the telephone.

I think the popularity of steampunk also expresses our collective yearning for an era when information technology was in its infancy and could have gone anywhere. In 1880 we hadn’t yet laid the cables for a telephone network, and computer programming was just an idea in Ada Lovelace’s head. Nineteenth-century technology was often operated by factory laborers, and it meant backbreaking work and the ruination of healthy bodies. Information technology, to the 19th-century mind, would be something that set us free from brutal assembly lines.

One hundred years later, I wish it were so. Information technology has its own brutal assembly lines, mind-numbing data work that cripples our fingers with repetitive strain injuries and mangles our backs with the hunched postures required to work at a computer all day long. Seen from this perspective, steampunk is an aesthetic that tells the truth about us. We are no better off than our Victorian ancestors, bumbling into the future with crude technologies whose implications we barely understand. But let’s make our devices pretty, at least. Let’s remember the days when the machines that now cage us promised liberation. *

Annalee Newitz is a surly media nerd whose flat is full of servers and anaglypta.

The future of paper

TECHSPLOITATION Twenty years from now, paper will no longer be a tool for mass communication. Instead it will be a substance akin to plastic, a mere fabricated building material with industrial and consumer applications. At least, those were the thoughts that ran through my mind when I received a strange news release last week from a Finnish company called VTT, which trumpeted a business model that included developing new products based on what it called "printing technology" and "paper products." VTT has developed a prototype for bioactive paper that responds to enzymes and biomolecules by changing color. One idea is to use it in food packaging or air filters to get an early warning about toxins.

Weird innovations are great, but the most interesting part of this news release was about markets: "The goal is … to create new business for the paper industry … to introduce new innovations and market initiatives between the traditional ICT [information communication technology] and paper industries by combining IT, electronics and printing technologies."

Let us parse the high-flown language of commerce. VTT is saying the paper industry needs new markets, and high-tech, bioactive paper will help create them. But why? Obviously, paper has its uses — there are newspapers, magazines, notepads, and books to be printed! Why worry about making the stuff bioactive when you can just sell it to Random House or Condé Nast? You already know the answer. Print communication is dying out, and with it goes the paper industry. Over the past few months, I’ve witnessed the two biggest daily papers in my area, the San Francisco Chronicle and the San Jose Mercury News, announce budget cuts that will slash their staffs by one-quarter. What does that mean for the paper industry? Fewer orders for newsprint.

When Karl Marx wrote that every great historical event occurs twice — "first time as tragedy, second time as farce" — I doubt he had print media in mind. And yet the upset of the paper industry feels to me like the joke that comes after the tragedy of print media’s fast decline. Don’t get me wrong: I’m not one of those people who think that barbarians are storming the gates because anyone can publish their ramblings on MySpace instead of having to get David Remnick’s permission to publish their ramblings in the New Yorker. Still, I cannot help but feel wrenchingly bad when I think about what it will be like in the Mercury newsroom after a quarter of the editorial staff has left the building.

I won’t miss the paper, but I will miss the journalists.

What’s tragic is that print journalism has not tried to diversify its market as methodically as the paper industry has. Right now, VTT is just one of many companies trying to figure out cool new ways to use paper. But who is trying to figure out cool new ways to employ smart, highly trained print journalists? Maybe Dan Gillmor and a few other people running small nonprofits. But mostly, print journalists are having to figure the future out on their own.

Some will do what I’ve done, gradually moving from print media to online. I’ve gone from a print zine to an online zine to a weekly newspaper to print magazines to running a blog. This column you’re reading is syndicated to both print newspapers and Web sites. Nobody gave me guidance. No slick marketing dude from Finland came in and said, "Hey, maybe you should diversify and start creating bioactive journalism." Instead, I fumbled along on my own, trying to find the most stable place where I could settle down and write for a living. Other journalists won’t be as lucky or as willing to change. They may stop writing; they may become shills for the companies they once investigated; they may feel bitter or liberated or panicked. None of them deserve it. Somebody should have helped them get ready for this transition five years ago.

I live in a world where corporations care more about the future of paper than the futures of people who have made their living turning paper into a massive network of vital, important communications. This is not how technological change should work. You cannot discard a person the way you discard a market niche. That’s because people revolt. Especially journalists. *

Annalee Newitz is a surly media nerd looking for a few good geek journalists to help her run a blog. Serious nerd experience needed. Inquire within!

True TorrentSpies

TECHSPLOITATION It’s no big surprise that entertainment megacorp Columbia is suing more file sharers. But there is something quite shocking about its latest infringement lawsuit against Web site TorrentSpy.com. With this lawsuit, Columbia is attempting to do nothing short of changing the way evidence is gathered via the legal discovery process. That means the entertainment industry has finally figured out a way to screw everybody in the United States — not just the geeks using peer-to-peer software.

Columbia is suing TorrentSpy for infringement because the site makes it easy for people to find information about where to download illegal copies of movies owned by Columbia. TorrentSpy doesn’t make the movies themselves available — it offers a search engine that locates files people can download via the file-sharing program BitTorrent. The suit says the guys who own the site are "inducing" others to infringe, as well as gaining secondary benefits from infringement because the site’s popularity and ad sales are boosted by pirates.

Here’s where things get hairy. During discovery, the period in a lawsuit in which both sides gather evidence, Columbia ordered TorrentSpy to hand over its user logs, electronic records of what people have done on the site. The problem is that TorrentSpy doesn’t keep user logs. So Columbia’s lawyers came up with a freaky, technically dubious argument. They claimed that TorrentSpy had technically been keeping logs anyway because user data passed through the Web site computer’s RAM — the part of the computer’s memory that never gets written to disk and saved. The mere fact that the data had flashed through the RAM was enough to make it discoverable, the lawyers claimed.

But all that stuff in RAM was gone. So how to get it back? Columbia’s lawyers told the judge that the owners of TorrentSpy could start keeping user logs during the discovery process and in essence re-create the missing logs. This was hugely controversial because discovery is only supposed to apply to already existing evidence. You can’t order witnesses or defendants to start gathering data today for you to subpoena in the future. But the judge, Jacqueline Chooljian, went for Columbia’s argument about the RAM: if the data had been in RAM for even a nanosecond, it existed in the past and was therefore subject to discovery.

The ramifications of this decision are far-reaching indeed. If the California ruling holds — it’s in the appeals stage right now — Columbia may have created a legal loophole that allows lawyers to order people to generate new evidence during discovery. Electronic Frontier Foundation attorney Fred von Lohmann, who has been following the case, told me via e-mail, "Because the ruling is based on the notion that ephemeral RAM copies are ‘records’ subject to preservation and production in litigation, it reaches deep into many businesses. For example, if you have a VOIP-based phone system (where conversations appear momentarily in RAM in your data center), are you responsible for recording every phone call for potential disclosure in litigation? What about IM conversations? Does everything created by a computer become a ‘producible’ record, just because it’s digital and therefore must rely on RAM?"

While the case is on appeal, TorrentSpy won’t have to start tracking its users. But if the appeal fails, TorrentSpy will have to spy on its customers to produce evidence. There is one hopeful sign: the judge has requested that TorrentSpy not hand over the unique IP addresses of its customers in logs, so the evidence can’t be used to go after individuals. However, the precedent of asking companies to create logs as evidence may remain in place.

Does this mean that the discovery process could become a way to wiretap parties to a lawsuit? After all, as von Lohmann points out, VoIP companies preserve phone conversations in RAM for a few brief seconds. One could easily imagine a plaintiff arguing that a VoIP company should start keeping audio files of all the phone calls between two parties to a case, since those audio files should have existed before. As a result, the plaintiff will have access to everything those parties say to each other after the lawsuit has been brought. Unfair? You bet. Legal? According to Judge Chooljian, yes.

If you’re worried about government-issued wiretap orders, maybe it’s time to start worrying about Hollywood-issued ones too. *

Annalee Newitz is a surly media nerd who has a hell of a lot of information about you stored in her short-term memory.

Google in my bedroom

TECHSPLOITATION A couple of weeks ago Google announced its latest map widget with much fanfare. Called Street View, it’s an option on Google Maps that gives you (literally) a view from street level of the address you’re searching for. When you go to Google Maps, click "Street View" in the upper right corner (not all cities have it — try San Francisco or New York), and you’ll get a little icon shaped like a human that you can move around the city grid. Move the human into place, click it, and suddenly you find yourself looking at a picture of the houses on the street. You can navigate down the block with arrows, even turning your point of view left or right to get a full 360-degree view of the spot.

All the images on Street View were taken over the past few months by a camera mounted on a roving van. Later Google used special software to "knit" the discrete pictures together, creating the illusion that you’re seeing seamless images of streets. If this sounds futuristic to you, it’s not — a couple of years ago, Amazon made a similar service available via its search tool A9. But after Google hired Udi Manber, who ran A9 for Amazon, the service went downhill, and it’s now no longer available. Instead we have Google’s Street View.

When you first use Street View, it feels like Google has turned the real world into a video game. I recently took a "walk" all around a San Francisco neighborhood where I might like to live. By clicking the arrow, I moved down Guerrero Street, "looking" to my right and left at the houses and local businesses to figure out how many blocks my potential residence would be from crucial things like cafés and a grocery store. I felt like I was in the virtual world Second Life, except that I couldn’t fly and most of the people on the street weren’t giant centaurs with wings and magic powers.

Still, it was hard to take my eyes off the people on the street. Captured on film without their knowledge or permission, they’ll be online for all to see for at least a couple of years — possibly more. Some naughty bloggers over at Wired.com have already asked people to submit the best "street sightings" they’ve found on Street View. Several pictures of seminaked people sunbathing or undressing near open windows turned up right away, as did pictures of people pissing against buildings. Searchers also found a picture of somebody being arrested (Google took that one down), as well as a snapshot of two women on San Francisco’s Hyde Street who appear to be exchanging money for drugs. And there are thousands more like these.

What are the ethics involved here? Is this an invasion of people’s privacy? All the photographs were taken in public places, and therefore nobody in them has any reasonable expectation of privacy under the law. But then again, privacy laws weren’t written with Street View in mind. It’s lawful to eavesdrop on people on the street because they’re in public. But is it lawful to publish online in perpetuity a picture of someone that captures him or her making out with somebody at a bus stop? Soon, lawsuits may seek to answer that very question.

In the meantime, Google is hoping you won’t ask because you’re so impressed with the prettiness and usability of its shiny new thing. As I mentioned before, I’ve already found the service helpful in my search for a new place to live. It might also be good for figuring out the best places to park near your destination, or whether a hotel is as nice and well located as it claims to be. Mostly, though, I don’t know why anyone would consider Street View to be more of a useful tool than a slightly creepy toy. I suppose it could be a great way for stalkers and thieves to find houses that are isolated, shielded from the street by greenery, or accessible by bottom-floor windows without bars. One day, even burglars might find their targets by Googling.

For now, however, Google Street View only covers a few cities, and the interface is a little slow. But the van is still out there, taking pictures automatically, posting everything it sees online. And the interface will improve. Is the dubious convenience of this tool worth the privacy trade-off? Do you really want to walk down the street never knowing whether your furtive nose-picking or secret meeting with a colleague has been captured and broadcast to the Google-using public? *

Annalee Newitz is a surly media nerd who will not stop picking her nose furtively in public and reserves the right to be pissed if you publish a picture of her doing it.

Wikipedia activism

TECHSPLOITATION When I edit Wikipedia, I am fighting for the future. There are certain things and people whose memories I want preserved for generations to come so that curious searchers a century from now will know the full story. Via Wikipedia, they will get more than stories of great politicians and giant corporations from glossy histories. I want this user-edited, online encyclopedia to tell tales of the brave and the marginal as well as the notorious and the powerful. That’s why I’ve become a Wikipedia activist.

For years I was a passive reader of Wikipedia, particularly entries on obscure technology and pop culture. I think of Wikipedia as the first place to go when I’m researching something off the beaten track, like early episodes of Doctor Who or technical specs for the outputs on DVR players.

Last week, however, I finally shed my Wikipedia passivity and started editing entries myself. I hit a personal tipping point.

I was writing a profile about a novelist for an online magazine and discovered that this author’s Wikipedia biography page had been summarily deleted the week before on the grounds that it wasn’t notable enough. I had previously visited his entry early in my research because it contained a fairly complete list of everything he’d written. To make matters worse, when I read the history of the deletion, it turned out to have been done by a guy who knew absolutely nothing about this novelist’s areas of expertise. The deleter was a big contributor to Wikipedia, it’s true — but only on the topic of religion, particularly Lutheranism. How could that background possibly grant him the authority to determine whether a postmodern novelist and video game designer was notable or not?

So I signed up for a Wikipedia account and re-created this novelist’s entry from the Google cache and sources I’d gathered while writing the profile. I also wrote an explanation to the deleter, requesting that he not do it again.

And then, while I was at it, I re-created another entry recently deleted for not being notable enough — that of Sonia Greene, a pulp fiction writer and publisher of the 1920s who was briefly married to H.P. Lovecraft. Of all the insulting things to have happen, her entry had been erased, and people searching for her were redirected to an entry on Lovecraft. How’s that for you, future scholars? Looking for information about a minor pulp fiction writer? Too bad she’s not notable — but we can redirect you to an entry on a guy she was married to for two years. (A guy, I might add, who pissed her off so much that she burned all his letters when they divorced.) Yuck.

My experiences have made me strongly question the idea of "notableness" on Wikipedia. I am genuinely offended by the notion that obscure authors, technologies, ideas, and events should be deleted from what’s supposed to be a vast compendium of knowledge. It’s not as if Wikipedia is running out of disk space and needs to delete stuff to keep going. And it’s not as if an entry on an obscure writer will somehow undermine somebody’s ability to search for less obscure ones.

Besides, who is to say what is notable or not? Lutheran ministers? Bisexual Marxists? Hopefully both. For me, the utopianism of Wikipedia comes from its status as a truly democratic people’s encyclopedia — nothing is too minor to be in it. Everything should be noteworthy, as long as it is true and primary sources are listed. If we take this position, we avoid the mistakes of 19th-century chroniclers, who kept little information about women and people of color in archives because of course those groups were hardly notable. Yet now historians and curious people bang their heads against walls because so much history was lost to those deletions.

If the goal is to preserve knowledge, we shouldn’t be wasting our time determining what’s notable enough to stay in Wikipedia. Instead, we should be preserving in a searchable form everything we can that’s truthful, so the culture and history of the minor and the obscure can be remembered just as easily as those of the famous and the mighty. *

Annalee Newitz is a surly media nerd who is going to re-create Danah Boyd’s entry if you delete it, you bastards.

Green libertarians

TECHSPLOITATION It sounds crazy, but it just might work: green libertarianism could become the new reformist movement in politics and cultural life.

In the 1980s, suggesting that green culture could be combined with libertarianism would have been worse than foolish. Those were the days when libertarians protested having to get their cars smog-checked because it represented government control of their personal property. But now that even staunch Republicans like Governor Arnold Schwarzenegger are promoting ecofriendly policies and business leaders like Silicon Valley venture capitalist Vinod Khosla are hanging out at the Sierra Club, it seems that the times, they are a-changin’.

Over the past decade, experts have slowly and quietly been publishing studies on how to bring green sensibilities into line with the free-market agenda of libertarians. Natural Capitalism, published in 2000, was one of the first books to advance this idea. Last year two Yale environmental researchers, Daniel Esty and Andrew Winston, published Green to Gold, which explores ways that companies like Wal-Mart are attempting to bring sustainability into their business models. Though Esty and Winston conclude that there are no companies currently doing enough to be truly green, they acknowledge that some are on the right track.

They also explain quite succinctly why free-market leaders have joined what they call the Green Wave. No, it’s not out of the goodness of their hearts. "Behind the Green Wave are two interlocking sources of pressure," they write. "First the limits of the natural world could constrain business operations, realign markets, and perhaps even threaten the planet’s well-being. Second, companies face a growing spectrum of stakeholders who are concerned about the environment."

A lot of Green Wave entrepreneurs are probably disingenuous. One imagines they’re like the antihero of underrated movie I Heart Huckabees, a slimy corporate type who feigns interest in green development to sucker a community into signing over its land to condo and mall developers. But I believe some real-life Green Wavers are genuinely fascinated by strange new ideas that could encourage economic growth and sustainable development. These are people who are talking about carbon credits, emissions trading, and various financial incentives for entrepreneurs who limit their environmental impact, recycle, use alternative energy sources, or encourage their employees to carpool.

The question is, why would anybody want to marry green and libertarian values? It sounds like a way of letting business do an end run around international bodies and governments, groups that have traditionally set limits on industry. There’s no doubt that states should have a role in setting policies for local corporations, but those corporations need rewards for their good behavior too. That’s where capitalism comes in. Combining libertarianism with green values might be a pragmatic way to convince some of the worst polluters to cut back by essentially bribing them with cash. The state can step in to punish bad actors who refuse to try for the carrot.

On a less cynical note, one might say that libertarians and greens go together because both are focused on maintaining economic development in the long term. They aren’t looking at next quarter: they’re looking at next century. A green libertarian has realized that the freedom of future markets depends on maintaining a healthy environment.

If green libertarianism prevails, I’m guessing the future will look nothing like ecotopia and nothing like capitalist utopia either. Business will behave more like government, limiting its growth for the sake of sustainability. And ecology as we know it will probably be a lot more engineered and synthetic than ever before because communities will carefully plan their ecosystems to remain healthy and whole alongside cities and corporations. We will reach a stage in our technological development when we have to manage our natural environments as well as our economic ones. Perhaps one day the capitalism that results from green libertarianism will know itself to be only one piece of a healthy social ecology. That’s the kind of capitalism that even a grumpy old Marxist like myself can get behind. *

Annalee Newitz is a surly media nerd who thinks the next best thing to smashing capitalism is changing it entirely.

09 F9

TECHSPLOITATION I have a number, and therefore I am a free person. That’s the message more than a million protesters across the Internet have been broadcasting throughout the month of May as they publish the 128-bit number familiarly known as 09 F9. Why would so many people create MySpace accounts using this number, devote a Wikipedia entry to it, post it thousands of times on news-finding site Digg, share pictures of it on photo site Flickr, and emblazon it on T-shirts?

They’re doing it to protest kids being threatened with jail by entertainment companies. They’re doing it to protest bad art, bad business, and bad uses of good technology. They’re doing it because they want to watch Spider-Man 3 on their Linux machines.

In case you don’t know, 09 F9 is part of a key that unlocks the encryption codes on HD DVD and Blu-ray discs. Only a handful of DVD players are authorized to play these discs, and if you don’t own one of them, you can’t watch Spidey in high definition — even if you purchase the disc lawfully and aren’t doing any copying. For many in the tech community, this encryption scheme, known as the Advanced Access Content System (AACS), felt like a final slap in the face from an entertainment industry whose recording branch sues kids for downloading music and whose movie branch makes crappy sequels that you can’t even watch on your good Linux computer (you guessed it — not authorized).

When a person going by the screen name arnezami managed to uncover and publish the AACS key in February, other people immediately began reposting it. They did it because they’re media consumers angry about the AACS and they wanted Hollywood and the world to know that they don’t need no stinkin’ authorized players. That’s when the Motion Picture Association of America and the AACS Licensing Administrator (AACS LA) started sending out the cease and desist letters. Lawyers for the AACS LA argued that the number could be used to circumvent copy protection measures on DVDs and posting it was therefore a violation of the anticircumvention clauses in the Digital Millennium Copyright Act. They targeted blogs and social networks with cease and desists, even sending notice to Google that the search engine should stop returning results for people searching for the AACS key (as of this writing, Google returns nearly 1.5 million pages containing it).

While some individuals complied with the AACS LA, in many cases community sentiment was so overwhelming that it was impossible to quell the tide of hexadecimal madness. Popular news site Digg tried to take down articles containing the number, and for a while it appeased the AACS LA. But Digg is a social network whose content is determined by millions of people, and as soon as Digg staffers took down one number, it would pop up in hundreds of other places. At last Digg’s founder, Kevin Rose, gave up and told the community that if Digg got sued, it’d go down fighting. Many other sites, such as Wikipedia and Wired.com, deliberately published the number in articles, daring the AACS LA to sue them. Sites like MySpace and LiveJournal are also rife with the number — like Digg, these sites are made up entirely of user content, and it would be practically impossible for administrators to scrub the number out.

The AACS key protests have become so popular because they reach far beyond the usual debates over copyright infringement. This isn’t about my right to copy movies — it’s about my right to play movies on whatever machine I want to. The AACS scheme is the perfect planned obsolescence generator. It will absolutely force people to upgrade their existing DVD players because soon they won’t be authorized to play new DVDs. Even worse, the AACS scheme allows movie companies to revoke authorized status for players. Already, the AACS LA has revoked the authorized status of the WinDVD media player, so anybody who invested in WinDVD will have to reinvest in a new player — at least, until that player’s authorized status is revoked too.

The AACS, more than any other digital rights management scheme, has revealed that the Hollywood studios have formed a cartel with electronics manufacturers who will do anything to suck more money out of the public. If you want to watch lawfully purchased movies, the only sane thing to do is post the number. Stand up and be counted. *

Annalee Newitz is a surly media nerd who can’t help but notice that you’re reading this column on a nonauthorized device.

We can be heroes

TECHSPLOITATION Imagine a world where your genome isn’t just the result of long-term natural selection and random mutation. Instead, its composition and expression actually mean something — not just about you, but also about the fate of the world.

No, I’m not talking about a genetic engineer’s utopia with humans made by design. I’m talking about the driving fantasy behind hit TV show Heroes, now heading into the homestretch of its first season on NBC. I was a doubter when I first started watching this X-Men homage, which is full of ordinary people who suddenly start manifesting mutant powers (flying, telekinesis, superhearing, time travel) due to some genetic whatsit. Created by Tim Kring, best known for the medical melodrama Crossing Jordan, the show was uneven and slow for the first handful of episodes. We got the boring origin story of each hero and learned that they all have a genetic destiny via an irritating voice-over from the nonsuperpowered (so far) Dr. Suresh, who studies these "special" people to find out what makes them tick.

But then things got interesting. Unlike the mutants of X-Men, none of the special people in Heroes has a visible mutation that makes him or her look strange — there are no giant blue cat professors or women made of pure diamond. Instead, there are, among others, a flying politician, a superhealing cheerleader, a time-traveling Japanese comic book otaku, a comic book artist who can paint the future, a psychic police officer, and a villain who absorbs mutant powers by extracting and possibly eating the brains of heroes. The plot is typical comic book fare: our future-painting artist has predicted that New York will be blown up by one of the heroes, eventually resulting in the election of the corrupt flying politician as president. Somehow, these events will destroy the world. The time-traveling otaku’s future self warns his past self that the fate of the cheerleader is bound up with all this by using the show’s cult tagline, "Save the cheerleader, save the world."

I’ve gone from being a skeptical watcher to a rabid fan of this show for two reasons: one, the hero team that forms around the wacky time travel plot manages to capture what’s so seductive about comic books generally; and two, I think the TV show is an interesting fantasy about terrorism.

So: the seductions of the comic book. One of the benefits of comic books over, say, movies is that they last for decades and thus have plenty of time to evolve complicated relationships between characters whose powers are foils for their personal vulnerabilities. A superhero team is like a cast of characters in a speculative soap opera — they have bang-pow adventures, but the best writers and artists in the medium force them to grapple with the human cost of being a hero. The Hulk is a good example: over the years Bruce Banner and his green alter ego have fought, gone to therapy to reconcile their warring impulses, joined and then been expelled from superhero teams that couldn’t trust the Hulk, and generally played out the drama of what it means to be a high-functioning manic-depressive.

Heroes offers us the bizarro soap opera pleasures of comic books and at the same time sets up the collective power of the heroes as a foil for the problems of the world. There are no terrorists in Heroes — only heroes whose powers go wrong and destroy New York in the process. In other words, the only menace to the United States is its own citizens. In the show’s fantasy reenactment of 9/11, the al-Qaeda bombers are recast as misunderstood heroes who are hunted by shady pseudogovernment agencies and go mad, or as power-hungry politicians who see destruction as the best route to power. I’m intrigued by the implication, in this season’s plot arc, that the destruction of New York is a deliberate effort to ruin the world on the part of US politicians and businessmen. There’s a strong dose of social criticism in that simple idea. Our heroes aren’t trying to stop terrorists from outside the country — they’re trying to stop forces working on the inside.

Sure, you can watch Heroes just for the bang-pow, and I definitely recommend it for that. At its best the show is action packed and edge-of-your-seat thrilling. But it’s also, like great comic books, about the real world. Best of all, it’s about fixing the real world and making it safe for geeks, cheerleaders, and regular people. *

Annalee Newitz is a surly media nerd who thinks the Planet Hulk story line should be the basis for the next Hulk movie.

Myth of the universal library


› annalee@techsploitation.com

TECHSPLOITATION A lot of Web geeks believe that one day everything ever created by humans will be available online. Call it the myth of the universal library. Here’s how the myth goes: because there is unlimited real estate in cyberspace and because media can be digitized, we can fill cyberspace with all human knowledge and give everyone access to it. Without further ado, I present to you three arguments for the elimination of the myth of the universal library.

1. Cyberspace does not exist. The term cyberspace was invented in the early 1980s by a science fiction writer named William Gibson, who used it to describe a "consensual hallucination" experienced by people who were neurologically linked to computer networks. Even within Gibson’s novels, the author is careful to explain that the illuminated buildings, glowing roads, and avatars that his heroes meet in cyberspace are simply convenient representations of abstract data structures.

My point is that computer networks are not space and they are not real estate. They are data storage and manipulation devices connected together by wires and radio waves. They cost money and require massive amounts of power. They take up real-world space. And they break. In other words: no computer network is infinite. Storing all of human knowledge on a computer network would be expensive and intensely difficult to maintain. There is no infinite cyberspace — only finite computer networks subject to wear and tear.

2. Your human knowledge sucks. I was recently in a very interesting conversation with several smart librarians, all of whom are keen to use computers for preserving and disseminating information. Somebody pointed out that a good example of publicly accessible universal knowledge is the French Gaumont Pathé Archives, which makes hundreds of hours of searchable historic newsreel footage available online for free. The problem, as film archivist Rick Prelinger pointed out, is that the Gaumont Pathé project, like many of its kind, has had to pick and choose which films it can afford to archive. So the group focused heavily on politics and left the fashion and pop culture reels undigitized and therefore less accessible. The guy who’d brought up the archive thought this was just fine.

"No, it’s not," Prelinger replied. "If you want to know what everyday people cared about historically, fashion is going to tell you a lot more than newsreels about famous politicians."

The point is, people don’t agree on what "all of human knowledge" means. Is it great art and political history? Or is it xeroxed zines and fashion history? Who decides what gets digitized and what gets tossed in the ashtray of the unsearchable, the unnetworked? Do commercials go into our mythical universal library? What about hate speech and instruction manuals for hair dryers? Are those documents not also part of human knowledge? We will never reach an agreement on what all of human knowledge really is, and therefore we will never be able to preserve all of it.

3. Digitizing everything is impossible. Consumers can buy terabyte-size disk storage. The glorious Internet Archive buys petabyte storage devices by the bushel. You can fit your entire music collection in your pocket, and your book collection too. But even if we agreed on what all of human knowledge really is — which we never will — you couldn’t digitize all of it. Turning books into e-books takes time, as does turning film and television into digital video files. And what about rare scrolls, artworks, and machines? How do you put them online? Some medieval manuscripts and textiles are so delicate they can’t be exposed to light. Making something digital isn’t like waving a wand over it — poof, you’re digital! No matter how hard we work and no matter how much money we throw at this problem, there is simply no way to turn all physical media into digital formats.

The myth of the universal library is not only widespread, it’s also dangerous. Believing in the myth makes us forget that we need to be working hard right this second to preserve information in multiple formats and to make it available to the public any way that we can. *

Annalee Newitz is a surly media nerd who has a very large collection of nondigital books.

Stop getting things done


› annalee@techsploitation.com

TECHSPLOITATION Among business-oriented tech nerds, there is an acronym with a cult following: GTD. It stands for "getting things done," and it comes from the title of a popular time-management book by productivity coach David Allen. Not only has Allen turned GTD into a multimillion-dollar consulting and advice business, but he’s also infected the hearts and minds of an entire generation trying to work as fast as the processors in their computers do. At its heart, the GTD philosophy is simple: list your tasks ahead of time, and complete them as systematically as possible. In the end, you’ll work more quickly, zooming through your life the way you zoom through your e-mail in-box.

But for those of us who confront bulging e-mail boxes and multiple, multistage projects every morning, GTD can become a freaky addiction. We’re never fast enough. That’s why some GTD solutions go beyond the friendly kind you’ll see on productivity blogs such as Lifehacker and 43 Folders, which are devoted to finding ingenious, technical solutions to get around work-blocking procrastination.

Possibly the weirdest example of extreme GTD can be found in a recent book, The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich, by a guy named Tim Ferriss. The book combines two biz-geek obsessions, saving time and getting rich, which is probably why his Web site lists endorsements from tons of people, including "Lazer Tag consultant" Stephen Key and Firefox cofounder Blake Ross.

I met Ferriss, an affable if slightly overenthusiastic fellow, at the South by Southwest Interactive conference. His book hadn’t come out yet, but he was already trying to convert the masses to his "lifestyle design" solution. Unlike a typical GTD plan, his book is also about glamor: he preaches the art of taking "mini-retirements," trips to different countries where you can have fun while still working occasionally (this is after you’ve somehow convinced your bosses to let you work remotely).

At various points while reading Ferriss’s book I was reminded of Steve Martin’s old routine "How to Make a Million Dollars and Not Pay Taxes." His solution? First make a million dollars. And then when the tax people come around, just tell them you forgot to pay. It sounds good, but the problem is implementation. In a chapter called "Outsourcing Your Life," Ferriss tips you off to his best time-saving solution: hire cheap labor in the developing world to save yourself time and money. In fact, this is eerily like all of his solutions, such as living in Thailand while working for a US company to give yourself a mini-retirement and grow richer.

Ferriss’s GTD plan is so extreme that it winds up revealing the dark side of productivity mania. Many of his time-saving techniques depend on making other people work more. For example, Ferriss interviews a guy for his book who saves time by hiring staffers at a company in Bangalore who do all his research for him, answer his e-mails, and even send his wife an apology when the two of them have a fight. Suddenly, this guy has a lot more time and feels more productive. I’m not sure that when GTD guru Allen writes about delegating tasks, he means hiring strangers overseas to do your entire job for you. Ferriss’s GTD fiends may be getting four-hour workweeks, but it’s only because three women in Bangalore are working 70 hours a week.

My fantasy, on considering the extreme end of GTD culture, is that more and more people will begin following Ferriss’s advice. Get things done by outsourcing all your work to the developing world, so that soon women in Bangalore and China have access to all your personal correspondence, financial data, and work-related activities. This could possibly create the conditions for the first-ever bloodless but violent revolution. One day, people in the United States and Europe will discover that all their data is in the hands of angry workers who want to do the GTD thing their own way. They want their own four-hour workweeks, and they’re going to use all your data to get them.

It would be the perfect demise for a data-obsessed, time-obsessed culture. Deprived of our data, we’ll have all the time in the world. But of course, if we want to live, we’ll have to start working again. And this time we’ll have to work the old-fashioned way: by doing it ourselves. *

Annalee Newitz is a surly media nerd who saves time by talking and sticking her feet in her mouth at the same time.

How to control my body


› annalee@techsploitation.com

TECHSPLOITATION The biological functioning of my body is all over the news right now. Lawmakers and federal regulatory agencies are asking themselves whether I should be allowed to have abortions, and whether I should be allowed to take a drug that prevents me from menstruating. You probably know about the brouhaha over abortion, spurred by the recent Supreme Court decision, but you may not have realized that decision came just as the Food and Drug Administration was deciding the fate of Lybrel, a birth control pill that could liberate millions of women from paying Tampax for "wings" every month. But these two issues are not unrelated. They are both symptoms of how much the government loves to regulate the basic functioning of my body. Still, there are some key differences.

Most arguments over abortion boil down to whether you think a woman’s right to control her future is more or less important than the much-debated rights of a potential human. Because the legal status of a fetus has become part of the abortion debate, it’s hard to cast abortion purely as a female reproductive rights issue (as much as I’d like to do that). These days the abortion debate is also about how we define human life and whether a fetus constitutes a being that deserves legal protection.

However, the issue of controlling menstrual cycles is unequivocally about the female reproductive cycle, untainted by questions of embryo civil rights. Why should there be any controversy over pharmaceutical company Wyeth marketing Lybrel, which is exactly like a standard birth control pill minus the seven-day placebo stretch that creates a fake period? (In case you aren’t a Pill geek: the bleeding women experience while taking contraceptive pills is just withdrawal bleeding triggered by the hormone drop during the placebo week, not a biological necessity; the Pill works by preventing ovulation in the first place. So it’s a purely cosmetic menstrual cycle.)

There are good reasons to test Lybrel, since nobody is completely sure what might happen in the long term to women who stop menstruating. But now that Wyeth has demonstrated the safety of this pill, what’s the big deal? The New York Times recently published a much-discussed article about negative reactions to Lybrel and other drugs like it. Canadian psychologist Christine Hitchcock told the paper she didn’t like "the idea that you can turn your body on and off like a tap." Giovanna Chesler, who just made a documentary about "the end of menstruation," objects to the idea that taking a daily pill makes women appear defective. "Women are not sick," she said. "They don’t need to control their periods for 30 or 40 years."

It’s interesting that Chesler uses the word "control" in her comment. Why are women eager to relinquish control over their periods, arguably one of the most annoying parts of being a biological female? After all, we take calcium pills to control bone density; we take showers to control odor; and we take ibuprofen to control pain. None of these things are necessary. We don’t do them because we are sick, and not doing them won’t kill us. So why shouldn’t we take control of our bodies and stop having periods if we want to? There are no fetuses being harmed here. Why should we reject Lybrel, if not for the dogma that it’s unnatural for women to control their reproductive functions?

Yes, Wyeth stands to make money on Lybrel, and I’m no fan of pharmaceutical companies, but women already pay to deal with their periods. We pump billions of dollars into feminine hygiene products so Kotex can sell us more wings and soft applicators and superabsorbent crap. I say if we can take pills that free us from having to deal with the monthly goo and bother, then let’s do it. Nobody is saying periods are sick or wrong here. It’s just that they’re annoying and uncomfortable – and if women don’t want to deal with them, they shouldn’t have to.

The social rejection of drugs such as Lybrel – which the FDA has already turned down for approval once – is based on the idea that there is something about women’s bodies that women themselves should not be allowed to control. Even in the absence of the fetus debates, we’re still seeing women who are afraid to control their reproductive systems. As long as we are in thrall to this fear, we will never triumph in the struggle for abortion rights and effective birth control. *

Annalee Newitz is a surly media nerd who gets horrible migraines from birth control pills, so she (alas) will remain trapped in a prehistoric female body.

Another digital divide


› annalee@techsploitation.com

TECHSPLOITATION A couple weeks ago I moderated a panel discussion about free wireless Internet access in San Francisco. The audience and panelists included people who work on tech projects for the city, activists from impoverished neighborhoods, and civil liberties wonks. We were there to talk about what to do now that EarthLink has submitted a contract to San Francisco, offering to blanket the region with free wi-fi under certain conditions.

One of those conditions is that anyone who wants high-speed access will have to pay roughly $25 per month for it. So the only free wi-fi will be slow and spotty. Another condition is that Google will provide the software side of this free wi-fi network, potentially serving up location-based ads and keeping track of where people are when they log on to the network.

A few minutes after panelists started discussing the EarthLink deal, a debate emerged over whether San Francisco should accept the contract with EarthLink as is or try to change some of the terms. Nicole Ozer from the American Civil Liberties Union was lobbying for more privacy-friendly provisions such as the ones EarthLink included in its contract with Portland; technical experts Tim Pozar and Bruce Wolfe wanted terms that promised better technical infrastructure. While their requests seemed reasonable to the geeks in the room, local teacher George Lee and African American community activist Reverend Arnold Townsend disagreed.

"What you don’t seem to understand," Lee said, "is that there are people in this city right now who don’t have any access to computers at all. They don’t know how to use Google or where to buy a USB drive. They can’t do their homework or apply for jobs because they don’t have Internet access. These people don’t care about being ‘pure.’ They just need to get online." Townsend echoed Lee’s sentiments, arguing that changing EarthLink’s contract would only delay much-needed high-tech resources for people in low-income areas in San Francisco — areas that are also heavily populated by blacks and other people of color.

Townsend said the concerns of civil liberties activists sounded to him like ideological quibbling. He added that Pozar’s and Wolfe’s suggestions for different technological approaches would just take longer and keep members of his community offline. Addressing the techies on the panel, Lee’s former student Chris Green said, "It’s like somebody is bleeding to death, but instead of giving him a tourniquet you’re saying that you’ll drive him to the hospital where you have really great facilities."

Ozer and others pointed out that asking EarthLink for better contractual terms isn’t likely to slow the wi-fi rollout in the city. The Board of Supervisors still needs to deliberate on the contract, and it could be more than a year before the supervisors accept the contract even if they don’t ask for changes. Plus, EarthLink’s technology may not serve the low-income communities. Wi-fi signals have a hard time traveling through walls and may not reach above the second floor on most buildings. It’s possible that EarthLink is courting low-income groups with promises of free wi-fi that the company can’t actually deliver.

Just for the sake of argument, however, let’s assume that EarthLink does manage to deliver wi-fi to low-income communities and that members of those communities can afford to get wi-fi-ready computers. Given that there are so few privacy protections in the EarthLink contract, I worry that we may close one digital divide only to open another.

Already, it’s easy for a company like Google to track what users do online and sell that information to the highest bidder. What happens when companies link that capability with the ability to know where users are physically when they log onto the wi-fi network? We might see a new era in racial profiling, where Google or companies like it sell information to police about what people in black neighborhoods are searching for online. If anybody does a suspicious search for "drugs" or "the Nation of Islam," that person could easily become the object of a fishing expedition by police.

There are many software tools that people use to protect their privacy online, but will impoverished people on the free wi-fi network know about them or be able to use them over slow connections? The new digital divide won’t be between people who can get online and those who can’t; instead, it will be between people who can afford to create privacy for themselves on the Web and those who don’t have the resources to do it. *

Annalee Newitz is a surly media nerd who wants everybody to have equal access to both the Internet and digital privacy.

Smoking Yahoo!’s pipes


› annalee@techsploitation.com

TECHSPLOITATION I’ve been playing around with Yahoo!’s latest technological experiment on the Web. It’s called Pipes, and it’s a system designed to help Web-savvy people write simple programs without ever having to read a book about Java. If you visit pipes.yahoo.com, you can take a peek. Visitors to the site are presented with a sheet of virtual graph paper and a list of modules that you can drag onto the paper and connect with pipes. In this early stage, the modules mostly allow users to build a really customized news feed or online research tool.

You can tell a source module to pull information from, say, a Google search for "Windows Vista" or the RSS feed of your favorite newspaper. Then you pipe that information to an operator module, which allows you to filter it, list it by date, translate it into another language, and more. Other modules let you do more complicated things, such as annotating each piece of data with geographical information or merging the RSS feeds from several sites so that you get one big daily news feed instead of 20 from various progressive blogs. Just think: you could mix the latest wankery from porno news site Fleshbot with the latest wonkery from Talking Points Memo! That’s the beauty of a customized news feed.
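
To make the plumbing concrete, here’s a minimal sketch of what a "merge, filter, sort" pipe does, written in ordinary Python rather than in Pipes’ drag-and-drop boxes. This is emphatically not Yahoo!’s code; it leans on the third-party feedparser library, and the feed URLs and keyword are invented placeholders.

    # A minimal sketch (not Yahoo!'s actual code) of a "merge, filter, sort"
    # pipe, using the third-party feedparser library. The feed URLs and the
    # keyword below are invented placeholders.
    import time
    import feedparser

    FEEDS = [
        "https://example.com/politics.rss",     # hypothetical source feeds
        "https://example.com/technology.rss",
    ]
    KEYWORD = "privacy"

    # Source modules: fetch each feed and pool the entries.
    entries = []
    for url in FEEDS:
        entries.extend(feedparser.parse(url).entries)

    # Operator module: keep only items whose title mentions the keyword.
    matches = [e for e in entries if KEYWORD.lower() in e.get("title", "").lower()]

    # Operator module: sort the merged results by publication date, newest first.
    def published(entry):
        parsed = entry.get("published_parsed")
        return time.mktime(parsed) if parsed else 0.0

    matches.sort(key=published, reverse=True)

    for e in matches:
        print(e.get("title"), "-", e.get("link"))

The whole trick of Pipes is that you wire up the equivalent of those few steps by dragging boxes around instead of typing anything.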

Pipes isn’t for everyone — it’s too complicated for casual Web surfers, who may not be familiar with the inner workings of RSS feeds and search queries. But a quick Google search reveals some excellent tutorials that will aid even the most RSS-clueless person in creating a pipe. Plus, you can clone other people’s pipes — so if you want a customized news feed, you can just use one that already exists, fill in your own news sources of choice, and save it to your own account. There are hundreds of cool pipes available on the site, and they’re all cloneable.

Now I sound like a cheerleader for Pipes, which I’m not. In fact, I recently spent an evening making fun of Pipes with one of the creators of the RSS standard (no, it wasn’t Dave Winer). Our mockery was inspired by two things: one, Pipes could be an overhyped proof of concept that nobody will ever use; and two, it could actually limit people’s control over data.

How could a tool designed to help you manipulate all kinds of information actually limit your control? To answer this, we need to delve briefly into the origin of the pipes idea. The name comes from the pipe, a powerful feature of UNIX, one of the oldest operating systems still in everyday use: it feeds the output of one program directly into the input of another. It’s hard to convey how utterly awesome and time-saving this mechanism was when it was invented. It meant that data could be crunched, sorted, alphabetized, merged, and recombined more easily than ever before.
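
For readers who never sat through a UNIX course, here’s roughly what that looks like in practice: a sketch, not anything of Yahoo!’s, that chains three standard command-line tools together from Python, with a hypothetical log file standing in for the data.

    # Roughly the Python equivalent of the classic shell pipeline
    #   grep error app.log | sort | uniq -c
    # "app.log" is a hypothetical file; the point is that each program's
    # output flows straight into the next program's input.
    import subprocess

    grep = subprocess.Popen(["grep", "error", "app.log"], stdout=subprocess.PIPE)
    sort = subprocess.Popen(["sort"], stdin=grep.stdout, stdout=subprocess.PIPE)
    uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout, stdout=subprocess.PIPE)

    # Close our copies of the intermediate pipes, then read the final output.
    grep.stdout.close()
    sort.stdout.close()
    output, _ = uniq.communicate()
    print(output.decode())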

Yahoo! Pipes aims to do the same thing, only the data you use is what’s publicly available on the Web. So if you want to use Pipes to organize or sort your personal data, you’ll have to publish it online. This is obviously quite different from the UNIX pipe, which is so powerful in part because you can use it on private stuff such as passwords and financial documents. Yahoo! Pipes treats the Web as if it were the hard drive of your UNIX box — you can pipe data from Google into a sorting program or pipe the New York Times RSS feed into a filter that will remove all stories that refer to Yahoo! Pipes. It’s marvelously cool, but I worry that it will inspire people to put sensitive data online just because it’s more convenient to crunch via Pipes.

At this point, my fears are probably unjustified. Pipes is in beta, and it may not catch on with the general public. More likely, a user-friendly version of Pipes will come along and get widely adopted in a couple years. It will become just one more way we’re being seduced into dumping all our personal stuff online. I like the idea of turning all the data on the Web into my raw material, to do with what I please. That’s the beautiful part of Pipes. Still, the more data we deposit in the hive-mind of the Web, the less power we have over it. *

Annalee Newitz is a surly media nerd who still hears the voice of her UNIX teacher in her head saying, "Now pipe it to MORE."

Vote Mac


› annalee@techsploitation.com

TECHSPLOITATION A Barack Obama fan, supposedly operating on his own time and not as part of the campaign, recently released a rather clumsy attack ad smearing Hillary Clinton on YouTube. No, it’s not particularly amazing that spin-doc wannabes are splattering DIY attack ads on video-sharing networks. What’s surprising is the content of this particular ad, which rips off an old Macintosh commercial from the 1980s. The message? Vote Obama because he’s just like an Apple computer.

The ad is a mashup of Apple’s infamous Big Brother commercial that aired just once, during the 1984 Super Bowl. Directed by Ridley Scott (Blade Runner), it depicts a black-and-white world of industrial hell where only Macs can save us from fascism. Slack-jawed office slaves file into an auditorium where Big Brother delivers a garbled speech from an immense television screen. Just when the grimness gets overwhelming, a woman appears in bright red shorts and a Macintosh T-shirt. She runs through the auditorium in slow motion, wielding a sledgehammer, fleeing police. As Big Brother’s speech reaches a crescendo, she hurls her hammer into the screen and shatters it. A few words scroll into view over the storm of glass dust: "On January 24th, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984.’ "

The only difference between the Obama ad and the old Mac commercial is that Hillary Clinton has been pasted into Big Brother’s place on screen. She’s droning out some speech about everybody working together, and the final words on screen read, "On January 14, the Democratic Primary will begin. And you’ll see why 2008 won’t be like ‘1984.’ "

I’m weirded out by the idea that it’s meaningful to compare the Democratic primary to the release of a new technological gizmo. Are we really supposed to feel stirred by the notion that our political leaders are computers designed by marketers? Or that the only symbol the grassroots politicos can come up with to represent their candidate of choice is a computer that’s been obsolete for 20 years? How, exactly, did we wind up with such impoverished political imaginations?

The fact is we didn’t. Macintoshes are just the latest pop culture symbol that politicians have seized on to fake their connection to everyday American life. Hell, even Ben Franklin pulled the old pop culture trick when he plopped a coonskin hat on his head so that he’d look folksy when he arrived in France to round up some cash to fund the Revolutionary War. Two centuries later, Bill Clinton used the Fleetwood Mac song "Don’t Stop," the one that tells you to keep thinking about tomorrow, to symbolize his hipness when he was inaugurated. Obama’s supporters are using hippie computers instead of hippie rock to make the same point. Think about it: Apple computers of the ’80s represent a hopefulness about the power of technology to bring us together that the country has all but forgotten. Sort of the way we forgot about prog rock.

But do Apple computers represent what they used to back in the day? Not if you are keeping up with the times. Over the past few months, in fact, Apple has launched its own series of attack ads on the Windows PC. You know the ads I mean — the ones where the Macintosh is personified as a snotty, black-clad hipster type who goes around feeling sorry for the PC, a bumbling, nerdy guy in a suit who can never quite get his peripherals to work.

Unfortunately for Apple, the attack ads have backfired. The PC character is played by John Hodgman, a popular satirist who appears regularly on The Daily Show and This American Life. His PC comes across as a populist everyman being unfairly taunted by a younger, cuter model with lots of nice hair but no brains. Everybody wants Hodgman in the living room, even if he crashes occasionally. He’s us. He’s America. The only person who wants that annoying Mac guy around is, well, the sort of person who thinks it’s brilliant to change one tiny aspect of an old TV commercial and rebroadcast it online as if it’s the new citizen media taking on the political system. If Obama is the Mac, then I’m voting PC. No, wait, I’m buying a PC! Oh crap — am I at the store or in a voting booth? It’s so hard to tell the difference. *

Annalee Newitz is a surly media nerd who figures all the voting machines are rigged to vote for the Zune anyway.

Exploitation


› annalee@techsploitation.com

TECHSPLOITATION Among hackers, exploitation is a social good. Exploiting a piece of software means discovering a little chink in its armor, a vulnerability that could allow a crook to slip through and do unwanted things to innocent people’s computers. Researchers write an exploit — a little program that takes advantage of the vulnerability — and then show it to everybody involved so that the vulnerability can be patched up.

But things are not always so tidy, and a case in point is an exploit recently released by a researcher named HD Moore. He publicized a vulnerability in a system called Tor, which facilitates anonymous Web surfing and online publishing. Used by political dissidents, journalists, and people who just want additional privacy, Tor routes Internet traffic through a special network of protected servers run by thousands of volunteers.

To run his exploit, dubbed Torment, Moore set up a series of fake Tor nodes that did the opposite of what a real Tor node would do: they looked at every bit of traffic passing through and did some tricks to tag that traffic and follow it back to its source so that the people using Tor could be identified. Like many exploits, Torment only works on people who have misconfigured Tor. So anyone who has faithfully followed the instructions on how to use Tor is still safe — but of course, even the most anal-retentive of us make mistakes sometimes when installing and configuring software.

Moore has said that he decided to launch this attack on Tor because he suspects that child pornographers are using the anonymous network to hand out kiddie porn. But it’s also more than that. Via e-mail, he told me, "If anything, I want my demonstration site to serve as a warning for anyone who believes their Web traffic is actually anonymous."

There are two problems here. First, there’s a technical problem. Moore’s exploit isn’t new research that will help improve Tor’s security — it’s simply a rehash of exploits that work on anyone who has misconfigured their browser software. As Tor developer Nick Mathewson pointed out in an online chat with me, "I don’t think that polishing exploit code for existing attacks that depend on users being improperly configured really helps the research field much. When you’re demonstrating new attacks, that looks like research to me."

Contrast Moore’s work with that of UK researcher Steven Murdoch, who last year published an unusual new exploit that could reveal the identities of Tor users who have all the proper configurations. In other words, Murdoch found a vulnerability in Tor; Moore found a vulnerability in software users — they misconfigure stuff — that would apply no matter what program they used.

And this leads to the second problem that Moore’s exploit raises. Given that he found a general problem that goes far beyond Tor, why call it a vulnerability in Tor? It would almost be more accurate to say he’s noticed that it’s hard to surf the Internet anonymously while using a browser because most browsers hand out your IP address to anyone who asks for it. Although I can’t speculate about Moore’s motivations, his disclosure winds up coming across as a potshot at the Tor community. The way Torment works only shores up this interpretation. He’s asked people who use Torment to watch the traffic going through their fake Tor nodes. He wants them to read and track people’s private data — not only in violation of those people’s wishes, but also potentially in violation of the law.

It would be easy to claim that Moore’s motivation is political in nature. He says he built Torment to help law enforcement. Perhaps he believes only criminals want anonymity and innocent people shouldn’t be worried about publishing articles that can be traced back to their computers’ IP addresses. Those of us who want to protect the identities of dissident journalists, privacy lovers, queer activists, and human rights workers in Central America obviously feel otherwise.

Of course, this debate highlights the problem with releasing exploits in general. When hackers find vulnerabilities in Windows, they’re accused of wanting to destroy Microsoft rather than make the world a safer place. Same goes for hackers who exploit government computer networks. But unlike real-world exploitation, nearly all computer exploitation can be turned to good in the end. Even Torment has had good side-effects. "We’re working on clarifying the instructions for configuring Firefox and Tor," Mathewson said. "Moore has helped us to realize we should do that." *

Annalee Newitz is a surly media nerd who isn’t anonymous but is glad that she could be.

Men are not men


› annalee@techsploitation.com

TECHSPLOITATION A couple weeks ago I gave a presentation at the annual meeting of the American Association for the Advancement of Science about how journalists often misreport the results of gender research because they have a lot of preconceived notions about men and women. Most of these notions come from popular culture, and since journalists are in the pop culture biz, none of this should be a big surprise.

Still, sometimes a story is so egregiously reported — and based on such flimsy research — that it takes my breath away. Such was the case with a recent Associated Press story about how a Stanford graduate student had proven that men in online virtual worlds behave just like men in real life. The story focused on a study by Nick Yee, who entered the virtual world Second Life (SL) to examine the behavior of avatars, or online representations of people. SL is an experimental place where many avatars don’t have a gender at all; they are animals or fairies or geometric objects.

Nevertheless, Yee wanted to prove that men in SL act the same way psychologists say they do in real life. A few studies have shown that two men talking, on average, stand farther away from each other than women do. Yee postulated that you would see similar behaviors among male avatars in SL. By recording the interactions between several male and female avatars in various combinations, he and his research crew determined that male avatars do indeed tend to stand farther away from other male avatars than female avatars do. Thus, the AP headline crowing "Virtual Men Also Keep Distance." Ah yes, everybody loves it when science confirms their stereotypes. Even the New York Times jumped on the bandwagon, covering the study uncritically, as if it made perfect sense that men would always be men, even in a virtual space.
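
For what it’s worth, the arithmetic behind a claim like that is simple. Here’s a toy sketch, using entirely invented numbers rather than Yee’s data, of how you’d compare average standing distances across gender pairings.

    # Toy illustration (invented numbers, not Yee's data) of comparing
    # average avatar-to-avatar distance by gender pairing.
    from collections import defaultdict
    from statistics import mean

    # (gender label of avatar A, gender label of avatar B, distance in virtual meters)
    observations = [
        ("male", "male", 2.1),
        ("male", "male", 1.8),
        ("female", "female", 1.2),
        ("female", "female", 1.4),
        ("male", "female", 1.6),
    ]

    groups = defaultdict(list)
    for a, b, dist in observations:
        pair = tuple(sorted((a, b)))   # treat (male, female) and (female, male) alike
        groups[pair].append(dist)

    for pair, dists in sorted(groups.items()):
        print(pair, round(mean(dists), 2))

The shaky part isn’t the averaging; it’s everything upstream of it, starting with deciding which label to slap on each avatar.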

I was, however, extremely skeptical. First of all, as I mentioned earlier, SL is already unlike the real world in that people can pick their gender (or lack thereof). My avatar in SL is a Hapa boy with blue hair. In real life, I am a white girl with brown hair. If I were truly reflecting my alleged real-life behavior, my avatar would act like a woman, since I am a woman in real life. I wrote to Yee and asked what he thought. He replied, "We are suggesting that male avatars, regardless of whether they are being controlled by male or female users, follow the social norms of men. This point isn’t elaborated in the paper because we didn’t have the right kind of data to prove this one way or another." Too bad that the AP thought he did have the data to prove that and reported it as such.

What Yee really discovered is that avatars don’t reflect their users’ real-life social norms at all: women are acting male and vice versa. This, I can tell you from experience, would not be viewed as the social norm in real life. Moreover, Yee admits in his scientific paper that he and his researchers basically had to guess at the genders of the avatars they met, since it’s hard to tell with many avatars. Are you getting the picture here? It’s a classic example of researchers imposing their preconceptions onto a culture that doesn’t conform to their norms. Upon encountering a society of many genders, where nongendering is part of the norm, Yee and his crew still attempted to figure out a way to find real-world gendered behavior. It’s like Margaret Mead’s work, only worse because we should know better.

Basically all Yee did was go into a virtual world whose gender norms were hard to understand and try to find ways that it reflected gender norms he did understand. He imposed his own notion of male and female on avatars who are often neither. And he topped it off by trying to map real-life body language onto the clunky movements of digital representations. Are you surprised that Yee found exactly what he wanted to prove? No, I’m not either. *

Annalee Newitz is a surly media nerd who doesn’t appreciate anthropologists coming around and trying to make her world just like theirs. Yee’s study is at www.nickyee.com; the AP story can be found at www.msnbc.msn.com/id/17279588.