The following op-ed by Harvard Law School Professor Jonathan Zittrain appeared in the Nov. 30 edition of the Technology Review.
In addition to his HLS professorship, Zittrain is faculty co-director of the Berkman Center for Internet and Society at Harvard University. He is also a professor of law at the Harvard Kennedy School, and professor of computer science at the Harvard School of Engineering and Applied Sciences.
Zittrain is the author of the 2008 book “The Future of the Internet—And How to Stop It.”
The personal computer is dead
by Jonathan Zittrain
The PC is dead. Rising numbers of mobile, lightweight, cloud-centric devices don’t merely represent a change in form factor. Rather, we’re seeing an unprecedented shift of power from end users and software developers on the one hand, to operating system vendors on the other—and even those who keep their PCs are being swept along. This is a little for the better, and much for the worse.
The transformation is one from product to service. The platforms we used to purchase every few years—like operating systems—have become ongoing relationships with vendors, both for end users and software developers. I wrote about this impending shift, driven by a desire for better security and more convenience, in my 2008 book The Future of the Internet—and How to Stop It.
For decades we’ve enjoyed a simple way for people to create software and share or sell it to others. People bought general-purpose computers—PCs, including those that say Mac. Those computers came with operating systems that took care of the basics. Anyone could write and run software for an operating system, and up popped an endless assortment of spreadsheets, word processors, instant messengers, Web browsers, e-mail, and games. That software ranged from the sublime to the ridiculous to the dangerous—and there was no referee except the user’s good taste and sense, with a little help from nearby nerds or antivirus software. (This worked so long as the antivirus software was not itself malware, a phenomenon that turned out to be distressingly common.)
Choosing an OS used to mean taking a bit of a plunge: since software was anchored to it, a choice of, say, Windows over Mac meant a long-term choice between different available software collections. Even if a software developer offered versions of its wares for each OS, switching from one OS to another typically meant having to buy that software all over again.
That was one reason we ended up with a single dominant OS for over two decades. People had Windows, which made software developers want to write for Windows, which made more people want to buy Windows, which made it even more appealing to software developers, and so on. In the 1990s, both the U.S. and European governments went after Microsoft in a legendary and yet, today, easily forgettable antitrust battle. Their main complaint? That Microsoft had put a thumb on the scale in competition between its own Internet Explorer browser and its primary competitor, Netscape Navigator. Microsoft did this by telling PC makers that they had to ensure that Internet Explorer was ready and waiting on the user’s Windows desktop when the user unpacked the computer and set it up, whether the PC makers wanted to or not. Netscape could still be prebundled with Windows, as far as Microsoft was concerned. Years of litigation and oceans of legal documents can thus be boiled down into an essential original sin: an OS maker had unduly favored its own applications.
When the iPhone came out in 2007, its design was far more restrictive. No outside code at all was allowed on the phone; all the software on it was Apple’s. What made this unremarkable—and unobjectionable—was that it was a phone, not a computer, and most competing phones were equally locked down. We counted on computers to be open platforms—hard to think of them any other way—and understood phones as appliances, more akin to radios, TVs, and coffee machines.
Then, in 2008, Apple announced a software development kit for the iPhone. Third-party developers would be welcome to write software for the phone, in just the way they’d done for years with Windows and Mac OS. With one epic exception: users could install software on a phone only if it was offered through Apple’s iPhone App Store. Developers were to be accredited by Apple, and then each individual app was to be vetted, at first under standards that could be inferred only through what made it through and what didn’t. For example, apps that emulated or even improved on Apple’s own apps weren’t allowed.
The original sin behind the Microsoft case was made much worse. The issue wasn’t whether it would be possible to buy an iPhone without Apple’s Safari browser. It was that no other browser would be permitted—or, if permitted, it would be only through Apple’s ongoing sufferance. And every app sold for the iPhone would have 30 percent of its price (and later, that of its “in-app purchases”) go to Apple. Famously proprietary Microsoft never dared to extract a tax on every piece of software written by others for Windows—perhaps because, in the absence of consistent Internet access in the 1990s through which to manage purchases and licenses, there’d be no realistic way to make it happen.
Fast forward 15 years, and that’s just what Apple did with its iOS App Store.
In 2008, there were reasons to think that this situation wasn’t as worrisome as Microsoft’s behavior in the browser wars. First, Apple’s market share for mobile phones was nowhere near Microsoft’s dominance in PC operating systems. Second, if the completely locked-down iPhone of 2007 (and its many counterparts) was okay, how could it be wrong to have one that was partially open to outside developers? Third, while Apple rejected plenty of apps for any reason—some developers were fearful enough of the ax that they confessed to being afraid to speak ill of Apple on the record—in practice, there were tons of apps let through; hundreds of thousands, in fact. Finally, Apple’s restrictiveness had at least some good reason behind it independent of Apple’s desire for control: rising amounts of malware meant that the PC landscape was shifting from anarchy to chaos. The wrong keystroke or mouse click on a PC could compromise all its contents to a faraway virus writer. Apple was determined not to have that happen with the iPhone.
By late 2008, there was even more reason to relax: the ribbon was cut on Google’s Android Marketplace, creating competition for the iPhone with a model of third-party app development that was a little less paranoid. Developers still registered in order to offer software through the Marketplace, but once they registered, they could put software up immediately, without review by Google. There was still a 30 percent tax on sales, and line-crossing apps could be retroactively pulled from the Marketplace. But there was and is a big safety valve: developers can simply give or sell their wares directly to Android handset owners without using the Marketplace at all. If they didn’t like the Marketplace’s policies, it didn’t mean they had to forgo ever reaching Android users. Today, Android’s market share is substantially higher than the iPhone’s. (To be sure, that market share is inverted in the tablet space; currently 97 percent of tablet Web traffic is accounted for by iPads. But as new tablets are introduced all the time—the flavor of the month just switched to Kindle Fire, an Android-based device—one might look at the space and see what antitrust experts call a “contestable” market, which is the kind you want to have if you’re going to suffer market dominance by one product in the first place. The king can be pushed down the hill.)
With all of these beneficial developments and responses between 2007 and 2011, then, why should we be worried at all?
The most important reasons have to do with the snowballing replicability of the iPhone framework. The App Store model has boomeranged back to the PC. There’s now an App Store for the Mac to match that of the iPhone and iPad, and it carries the same battery of restrictions. Restrictions accepted as normal in the context of a mobile phone seem far stranger in the PC landscape.
For example, software for the Mac App Store is not permitted to make the Mac environment look different than it does out of the box. (Ironic for a company with a former motto importuning people to think different.) Developers can’t add an icon for their app to the desktop or the dock without user permission, an amazing echo of what landed Microsoft in such hot water. (Though with Microsoft, the problem was prohibiting the removal of the IE icon—Microsoft didn’t try to prevent the addition of other software icons, whether installed by the PC maker or the user.) Developers can’t duplicate functionality already on offer in the Store. They can’t license their work as Free Software, because those license terms conflict with Apple’s.
The content restrictions are unexplored territory. At the height of Windows’s market dominance, Microsoft had no role in determining what software would and wouldn’t run on its machines, much less whether the content inside that software was to be allowed to see the light of screen. Pulitzer Prize-winning editorial cartoonist Mark Fiore found his iPhone app rejected because it contained “content that ridicules public figures.” Fiore was well-known enough that the rejection raised eyebrows, and Apple later reversed its decision. But the fact that apps must routinely face approval masks how extraordinary the situation is: tech companies are in the business of approving, one by one, the text, images, and sounds that we are permitted to find and experience on our most common portals to the networked world. Why would we possibly want this to be how the world of ideas works, and why would we think that merely having competing tech companies—each of which is empowered to censor—solves the problem?
This is especially troubling as governments have come to realize that this framework makes their own censorship vastly easier: what used to be a Sisyphean struggle to stanch the distribution of books, tracts, and then websites is becoming a few takedown notices to a handful of digital gatekeepers. Suddenly, objectionable content can be made to disappear by pressuring a technology company in the middle. When Exodus International—”[m]obilizing the body of Christ to minister grace and truth to a world impacted by homosexuality”—released an app that, among other things, inveighed against homosexuality, opponents not only rated it poorly (one-star reviews were running two-to-one against five-star reviews) but also petitioned Apple to remove the app. Apple did.
To be sure, the Mac App Store, unlike its iPhone and iPad counterpart, is not the only way to get software (and content) onto a Mac. You can, for now, still install software on a Mac without using the App Store. And even on the more locked-down iPhone and iPad, there’s always the browser: Apple may monitor apps’ content—and therefore be seen as taking responsibility for it—but no one seems to think that Apple should be in the business of restricting what websites Safari users can visit. Question to those who stand behind the anti-Exodus petition: would you also favor a petition demanding that Apple prevent iPhone and iPad users from getting to Exodus’s website on Safari? If not, what’s different, since Apple could trivially program Safari to implement such restrictions? Does it make sense that South Park episodes are downloadable through iTunes, but the South Park app containing the same content was banned from the App Store?
Given that outside apps can still run on a Mac and on Android, it’s worth asking what makes the Stores and Marketplaces so dominant—compelling enough that developers are willing to run the gauntlet of approval and take a 30 percent hit on revenue instead of simply selling their apps directly. The iPhone restricts outside code, but developers could still, in many cases, manage to offer functionality through a website accessible through the Safari browser. Few developers do, and there’s work to be done to ferret out what separates the rule from the exception. The Financial Times is one content provider that pulled its app from the [iOS] App Store to avoid sharing customer data and profits with Apple, but it doesn’t have much company.
The answer may lie in seemingly trivial places. Even one or two extra clicks can dissuade a user from consummating what he or she meant to do—a lesson emphasized in the Microsoft case, where the ready availability of IE on the desktop was seen as a signal advantage over users’ having to download and install Netscape. The default is all-powerful, a notion confirmed by the value of deals to designate what search engine a browser will use when first installed. Such deals provided 97 percent of Firefox-maker Mozilla’s revenue in 2010—$121 million. The safety valve of “off-road” apps seems less helpful when people are steered so effortlessly to Stores and Marketplaces for their apps.
Security is also a factor: consumers are willing to cede control over their code to OS vendors when they see so much malware out in the wild. There are a variety of approaches to the security problem, one of which is sandboxing—running software in a protected environment that limits what it can see and do on the machine. Sandboxing is soon to be required of Mac App Store apps.
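To make the idea concrete, here is a minimal conceptual sketch in Python of the kind of constraint sandboxing imposes: an untrusted program is run in a separate process with hard caps on the CPU time and memory it may consume. This is only an illustration of the principle, not Apple’s mechanism—real OS sandboxes also mediate file, network, and hardware access—and the helper name and the specific limits here are hypothetical choices for the example.

```python
import resource
import subprocess

def run_with_limits(cmd, cpu_seconds=2, memory_bytes=256 * 1024 * 1024):
    """Run a command in a child process with CPU and memory caps.

    A rough, POSIX-only stand-in for the isolation an OS-level
    sandbox provides; it rations resources but does not restrict
    file or network access the way a real sandbox would.
    """
    def limit_resources():
        # Applied in the child process just before it executes the command.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))

    return subprocess.run(
        cmd,
        preexec_fn=limit_resources,
        capture_output=True,
        timeout=cpu_seconds + 5,  # backstop in case the child hangs
    )

if __name__ == "__main__":
    result = run_with_limits(["python3", "-c", "print('hello from the sandbox')"])
    print(result.stdout.decode())
```

A real sandbox goes much further, screening every system call rather than merely rationing resources—which is precisely what makes it attractive to OS vendors and confining for developers who want full access to the machine.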
The fact is that today’s developers are writing code with an eye not just to consumer acceptance, but also to vendor acceptance. If a coder has something cool to show off, she’ll want it in the Android Marketplace and the iOS App Store; neither is a substitute for the other. Both put the coder into a long-term relationship with the OS vendor. The user gets put in the same situation: if I switch from iPhone to Android, I can’t take my apps with me, and vice versa. And as content gets funneled through apps, it may mean I can’t take my content, either—or, if I can, it’s only because there’s yet another gatekeeper like Amazon running an app on more than one platform, aggregating content. The potentially suffocating relationship with Apple or Google or Microsoft is loosened only by a new suitor like Amazon, which is structurally positioned to do the same thing.
A flowering of innovation and communication was ignited by the rise of the PC and the Web and their generative characteristics. Software was installed one machine at a time, a relationship among myriad software makers and users. Sites could appear anywhere on the Web, a relationship among myriad webmasters and surfers. Now activity is clumping around a handful of portals: two or three OS makers that are in a position to manage all apps (and content within them) in an ongoing way, and a diminishing set of cloud hosting providers like Amazon that can provide the denial-of-service-resistant places to put up a website or blog.
Both software developers and users should demand more. Developers should look for ways to reach their users unimpeded, through still-open platforms, or through pressure on the terms imposed by the closed ones. And users should be ready to try “off-roading” with the platforms that still allow it—hewing to the original spirit of the PC, perhaps amplified by systems that let apps have a trial run on a device without being given the keys to the kingdom. If we allow ourselves to be lulled into satisfaction with walled gardens, we’ll miss out on innovations to which the gardeners object, and we’ll set ourselves up for censorship of code and content that was previously impossible. We need some angry nerds.