The following op-ed, Catalysts for corporate responsibility in cyberspace, co-written by Harvard Law School Clinical Professor John Palfrey ’01 and Visiting Professor Jonathan Zittrain ’95, was published in Cnet News on August 14, 2007.

You’ve just decided that your technology company should expand into the white-hot Chinese market and then turn to Vietnam, Thailand and Singapore. Each of these new markets promises enormous growth.

You’ve got the usual things to worry about. You need to set up subsidiaries or joint ventures, hire local management, figure out how to scale your infrastructure, think through tax implications, and so forth.

There’s also a more unusual challenge that you may not be able to ignore. If you are in the business of Internet or telecommunications services, you are going to be asked to censor the information you provide to users. And you’ll probably be asked to turn over information about your users to the local police, rarely with anything approaching what we’d call “due process” in the United States.

These are typical requirements when operating almost anywhere–even liberal democracies identify information to be removed, such as that which infringes copyright or meets some test of obscenity. They require help identifying users at times, and some impose blanket data-retention requirements for these purposes.

But in more authoritarian places like China the practices have extra bite. The information the government seeks to censor can relate to civic dialogue and freedom. The people they seek to identify might be political dissidents or religious practitioners. Often, the requirements to redact or block will be stated or implied only generally without specific requests for individual cases. That means that your company will have to be prepared to operate in something of a gray zone, trying to divine what the regulators have in mind–and act to censor without explicit orders to do so.

Over the past five years, there has been a steady rise in Internet filtering and surveillance practices. In 2002, only a small handful of states censored the Internet; by 2007 the number had grown to more than two dozen, as our research through the OpenNet Initiative (a consortium of four university teams: the University of Cambridge, Harvard Law School’s Berkman Center for Internet & Society, Oxford Internet Institute, and the Citizen Lab of the University of Toronto) has made plain. These states rely on private enterprise for their control, and some of America’s most prominent Internet companies have found trouble trying to follow local law in parts of the Middle East, the former Soviet Union, and East Asia–where censorship and surveillance on the Web are most extensive–against a backdrop of international criticism.

Yahoo has been excoriated–and subjected to a human rights lawsuit–for turning over to Chinese authorities information about a journalist, which allegedly led to his arrest and imprisonment. The problem? The jailing was for no crime that a court in Yahoo’s home jurisdiction of California could recognize. Human rights activists won’t let the world forget Yahoo’s role.

Cisco Systems has been attacked for selling the routers and switches that make censorship and surveillance possible. So, too, has Microsoft, for offering a blog service that generates an error rejecting “profanity” when a user includes the word “democracy” in the title of a blog. Google has come under fire for offering a search product in China that omits certain search results compared to what you’d find if you searched from the United States or Western Europe.

Side-by-side comparisons of a Google image search for “Tiananmen Square” conducted in the United States and in China show the stark results of censorship. Anyone who can see both sets of images, the latter lacking any shots of a person staring down a tank in 1989, is forced to consider what it would be like to live under an authoritarian regime.

Nearly every corporation whose business involves information technologies faces two opposing pressures. While liberal democracies have so far remained remarkably hands-off as the Internet has matured, the desire of more closed regimes to tap the Internet’s economic potential while retaining control of the information space pressures firms to limit the freedoms they can offer many users, under rules that can be contradictory from one jurisdiction to another. And as firms accede, a second pressure arises from the perceived betrayal of the values of the company’s owners, customers or watchdogs.

What’s a corporation to do? The thorny ethical problem arises when the corporation is asked to do something squarely at odds with the law, norms or ethics of the corporation’s home state. Should a search engine agree to censor its search results as a condition of doing business in a new place? Should an e-mail service provider turn over the name of one of its subscribers to the government of a foreign state without knowing what the person is said to have done wrong? Should a blog service provider code its application so as to disallow someone from typing a banned term into a subject line?

Reasonable people disagree as to the best means of resolving these emerging ethical concerns. One might contend that there is no ethical problem here–or, at least, that the ethical problem is nothing new. If an Internet censorship and surveillance regime is entirely legitimate from the perspective of international law and norms, the argument goes, then a private party required to participate in that regime has a fairly easy choice.

If a firm disagrees with what it is being asked to do, then it should simply exercise its business judgment and refuse to compete in those markets. Alternatively, a company could decide to refuse to comply with the demands that it believes put the firm in a position in which its values are compromised–and then accept the consequences, including possibly being forced to leave the market.

But this is not an ideal state of affairs. It means that ethically sensitive companies may squander the chance to engage with these countries and may yield development opportunities to more mercenary firms. Those firms may prove more willing to carry out repressive mandates. Moreover, leaving the market to firms with a tin ear for ethical implications will ratchet up calls for new legal barriers to international commerce.

In the United States, Rep. Christopher Smith (R-N.J.) has proposed the Global Online Freedom Act, which would sharply restrict the business that technology firms could conduct overseas–so much so that opening your new business line in China would probably be a nonstarter. Such legislation could help advance the cause of human rights, and it would have the benefit of applying equally to all companies under its jurisdiction, but it ought to be seen as a second resort. The threat of legislation may be more effective in improving behavior than actually passing the law.

The most efficient and thorough way to address this conundrum is for corporations themselves to take the lead. Technology companies, acting as an industry, are best placed to work together to address some of the ethical issues by adopting a code of conduct to govern their activities under authoritarian regimes. This approach could, at a minimum, clarify to citizens within those regimes what they need to know about what companies will and will not do in response to demands from the state.

In the simplest form, individual firms could each come up with their own principles, much like a privacy policy on today’s Internet. Microsoft set forth a partial version of such a policy in a 2005 speech by General Counsel Brad Smith, in which he pledged that the company would follow a “broad policy framework” for responding to restrictions on the posting of blog content. While setting in place these public commitments, Microsoft’s executives have continued to exercise leadership in the industry in the effort to come up with a common set of principles.

Google has refused to bring certain services into places like China, so as to avoid having to turn over personal information. Yahoo’s chief executive, Jerry Yang, made a bold statement denouncing online censorship and surveillance at the company’s most recent shareholder meeting, no doubt provoking the ire of the Chinese authorities–while not satisfying the demands of human rights critics. At the same time, Yahoo has established a senior, cross-functional team of Yahoo executives worldwide to coordinate its efforts to address privacy and freedom-of-expression issues moving forward.

While each of these individual corporate acts is laudable, this kind of firm-by-firm model suffers from the variation among approaches bound to ensue and the lost opportunity of these firms learning from one another as they tackle these hard problems. Users would be forced to sort through legalese, much as privacy policies and terms of use force the curious to do on today’s Internet, and to compare policies of the relevant firms–a task few people are prepared to invest the time to undertake, and which would disadvantage those who cannot easily parse fine print. And by not standing together, the firms would have only as much leverage as each firm has to begin with–nowhere near the impact they could have together.

The more promising route is for one or more groups of industry members to come up with a common, voluntary code of conduct to guide the activities of individual firms in regimes that carry out online censorship and surveillance. Such a process has begun. Google, Microsoft, Vodafone, Yahoo and TeliaSonera are actively working together on a code. This process includes nongovernment organizations (NGOs)–including Business for Social Responsibility and the Center for Democracy and Technology, which chair the group–and academics, including teams from the Berkman Center for Internet & Society at Harvard Law School, the Oxford Internet Institute, and the University of St. Gallen in Switzerland.

Regulators with relevant expertise and authority have also weighed in on the process, as have investors and leading human rights groups. Just as noteworthy are those who are not yet involved in this process, especially those firms that sell relevant services and products directly to governments, such as Cisco, WebSense and Secure Computing.

The code that this group develops will most likely set out broad, common principles. These principles ought to contain enough detail to inform users about what to expect and to hold the firms to a meaningful standard, but without being so prescriptive as to make the code impossible to implement from firm to firm and from state to state–especially in a fast-changing technological environment. This ever-changing context means that the code must continually evolve, taking on new challenges to speech and privacy, and ensuring that companies’ responses are both dynamic and treated as internally driven organizational priorities. The code should also provide a road map for when a firm might refuse to engage in regimes that put it in a position where it cannot comply with both the code and local laws.

Developing (meaningful) codes of conduct

If the industry itself does not succeed through such an approach, the likelihood increases that an outside group will come up with a set of principles that gains traction. This approach might place more pressure on companies to act. The Paris-based Reporters Sans Frontieres has drafted such a set of principles, as has a group of academics based at the University of California-Berkeley’s School of Law, Boalt Hall, while also participating in the firm-centered process. An outsider’s code might be something to which firms could be encouraged to subscribe, on the model of the Sullivan Principles in apartheid-era South Africa, with a governing institution to support the principles and the companies that subscribe to them.

The development of a code of conduct itself solves only a small part of the problem; it is in the successful application of the code that a long-term solution lies. In the context of other instances of corporate codes of ethics implicating human rights, such as the sweatshops issue, those involved say that getting to the code was the easy part.

A critical part of such a voluntary process to establish a code, regardless of its substantive terms and who drafted it, is to develop an institution charged with monitoring (and ideally supporting through best practices) adherence to the code and pointing out shortcomings. One might imagine an institution–perhaps not a new institution, but a pre-existing entity charged with this duty–that might include among its participants representatives of NGOs or other stakeholders without a direct financial stake in the outcome of the proceedings.

The best way to make this approach sustainable would be for the industry consensus to be given the status of law over time. This process would help to address three of the primary shortcomings of the industry self-regulation model. First, self-regulation can amount to the fox guarding the chicken coop. Second, self-regulation permits some actors to opt out of the system and to gain an unfair competitive advantage as a result. Last, the self-regulatory system could collapse or be amended, for the worse, at any time–and may or may not persist in an optimal form, even if such an optimal form could be reached initially.

An industry-led approach would also bring with it the benefit of improved clarity for end users. If the code is well-drafted and well-implemented, users of Internet-based services would know what to expect in terms of what their service provider would do when faced with a censorship or surveillance demand.

The benefit of such an approach could well extend further. By working together on a common code, and harnessing the support of their home states and the NGO community, investors, academics and others, the affected industry might well be able to present a united front that would enable individual firms to resist excessive state demands without having to leave the market as a result of noncompliance.

Industry need not–and ought not–go it alone.