At first, the quirky photo that flooded the internet in March seemed entirely authentic. The viral online image depicted Pope Francis, looking like he had just stepped off a Parisian runway, strutting down the street clad in an uber-trendy, oversized white puffer coat.

The internet went crazy. But when viewers took a closer look, they realized the picture of the pontiff was actually fake, created from the prompts of a Chicago resident using the artificial intelligence image generator Midjourney. It was just another example of the kind of increasingly sophisticated work AI is capable of turning out.

Such believable machine-generated output is raising a host of legal and ethical questions around authorship, fair use, copyright, and more. Who can claim the right to an image or written work or piece of music developed with AI? Should the artists, whose works are part of the massive data sets computers rely on to generate their results, be credited and compensated? Who should be held accountable for misinformation and disinformation? And should the law be updated to reflect the rapidly changing AI landscape?

“I do think we are at a moment of many questions, and some of them feel pretty profound and existential, especially when it comes to how we think about creativity in an age where artificial intelligence is going to take center stage across a number of fields,” says John Palfrey ’01, visiting professor at Harvard Law School, and president of the John D. and Catherine T. MacArthur Foundation.

Palfrey and other Harvard Law experts know those questions have complex answers that will take time to sort out. They also know the AI clock is ticking.

Many seemed caught off guard by the efficiency of programs such as ChatGPT, OpenAI’s language processing tool, released to the public for testing last fall. The chatbot, trained on massive troves of text, generates new content based on a user’s prompts. Primitive versions of such AI technology have been around since the 1960s, but in recent decades advances in machine-learning algorithms, better access to big data, and enormous investments in computing power have made it lightning fast and eerily effective. Users were shocked last year when ChatGPT instantly produced everything from a plausible Shakespearean sonnet to a passable high school history essay. (Other AI tools use similar technology to produce images, audio, and video.)

While embracing technological change is part of the human experience, when the pace of that change seems to ramp up exponentially, the rules and regulations meant to keep that technology in check can fall further and further behind.

Some experts worry that in light of developments in AI, copyright and intellectual property law need an overhaul. Rebecca Tushnet, Frank Stanton Professor of the First Amendment at Harvard Law School, isn’t so sure. Tushnet thinks current copyright law is clear when it comes to those employing the technology for creative use. 

The U.S. Copyright Office has long granted copyright only to works created with the significant involvement of a human hand or “author,” a policy that the courts have routinely reaffirmed, says Tushnet. “The U.S. courts are generally in agreement that you need a human being sufficiently in the loop to have an author. And a lot of AI-generated works are not that.”

Artist Jason Allen, whose Midjourney-assisted work “Théâtre D’opéra Spatial” took a top prize at last year’s Colorado State Fair digital art competition, might disagree. Allen is appealing the office’s rejection of his request for copyright and has said he is prepared to take his case all the way to the Supreme Court. He argues his work uses AI as a legitimate tool of artistic expression. The copyright office said it declined his request because his piece lacked “human authorship.”

But both Tushnet and the copyright office acknowledge there are times when work created with the help of AI does merit protected status. In March the office released its latest guidance around such works, noting, for example, that if a human selects or arranges AI-generated material in a sufficiently creative way such that, according to U.S. law, “the resulting work as a whole constitutes an original work of authorship,” copyright is merited. The office also acknowledged the nature of the changing AI landscape, writing that it will “publish a notice of inquiry later this year seeking public input on additional legal and policy topics, including how the law should apply to the use of copyrighted works in AI training and the resulting treatment of outputs.”

Louis Tompros ’03, a lecturer on law at Harvard Law School and a partner at the law firm WilmerHale, recently told Harvard Law Today that copyright statutes have always been interpreted to mean that “only humans can be authors for purposes of the constitutional and statutory copyright grant.” But he warned that interpretation, as it applies to AI, “hasn’t yet been tested fully in the courts, and it will be.”

On the other side of the coin — the question of compensating artists whose works are part of AI’s massive training database, or artists who think their work has been copied unfairly by technology — Tushnet uses history as a guide. In cases where “the output is not substantially similar to the input,” she sees no real difference between “using something as training data,” as in the case of AI, and “using something to practice on,” as artists have done for centuries. She notes human artists have historically learned their craft by copying directly or making their own versions of other people’s works and that such “developmental use is really inherent to generating new works when a human is involved.”

“My preferred approach would be if the output is not substantially similar,” says Tushnet, “then there’s nothing that’s being done that you have a legal right to prevent.”

A writer’s perspective

Science fiction author Ken Liu ’04 has long been fascinated with machine-augmented creativity. While studying English and computer science at Harvard College in the late ’90s, he built a basic AI model that crafted poetry in the style of Edna St. Vincent Millay, and he contemplated a senior thesis based on the poetics of computer-generated literature. After graduation, Liu became a software engineer, attended Harvard Law School, and worked as a corporate lawyer and high-tech litigation consultant before he turned to writing full time. He’s well positioned to consider AI’s long-range creative and legal implications.

Liu is the author of four novels and two collections of short stories, and his interest in AI has only intensified through the years — some of his fiction even features AI, including his popular short story “50 Things Every AI Working with Humans Should Know.” Earlier this year, he took part in Google’s Wordcraft Writers Workshop, experimenting with the company’s AI writing tool and offering feedback. (Liu tried to get the program to help him generate robot dialogue, with limited success.) He is currently advising the Authors Guild and the Copyright Alliance as they draft suggestions for the U.S. Copyright Office’s guidelines involving AI-assisted work.

And he’s not worried he’ll be out of a job anytime soon. For Liu, today’s AI technology can’t generate prose to challenge that of the writers he admires because its prime directive is to be understood, not to be interesting. Great writers, says Liu, invent their own language. ChatGPT, on the other hand, “is the great average of all the linguistic output out there, and its default mode is to speak very confidently in cliches about stuff it knows nothing about.”

That doesn’t mean it’s never helpful to his work. Liu sometimes engages with the chatbot if he’s looking for a little inspiration, in the same way another writer might pick up a book of poetry to feed their imagination. “It’s a great way to generate the kind of things that might spark you,” says Liu. He calls some of the best sparks hallucinogenic gems, like the time he was writing a story about a robot taxi narrating its own state of mind as it picked up fares, and ChatGPT came up with the scenario of one passenger sitting on another’s lap, “out of nowhere.” When that happens, Liu “leans into the crazy” and just lets his mind roam. 

And he’s not giving up hope on true computer sentience. Liu thinks one day soon a computer may indeed be able to interact in a conscious way with the world and then tell him about it in its own words, instead of simply regurgitating someone else’s. “I would totally read stories written by that AI,” he says.

He might actually collaborate with it, too. “At that point, the AI has a separate sentience, so I would certainly agree that it should have its own copyright,” he says, “and we should be co-authors if we are writing together.”

But he rejects the analogy that when a user prompts today’s AI models to generate a creative work, it’s similar to one artist hiring another.

The analogy he prefers instead is one in which AI is the camera, and the artist the photographer. “If the artist has done the work of creative arrangement, direction, editing, prompting, tweaking knobs and dials, etc., such that the artist is the mastermind of the final work in the same way that a photographer is the mastermind of a photograph,” he says, “then of course we should see no problem in giving the artist copyright over the AI-generated work just as we grant the photographer copyright over the camera-produced photograph.”

A question of ethics and regulation

For Palfrey, as for many others, much will depend on how the AI of the future evolves. It may well, he says, “press on the boundaries of the existing law.”

One part of existing law that could need retooling is the definition of a derivative work. Current copyright regulation states that “to be copyrightable, a derivative work must incorporate some or all of a preexisting ‘work’ and add new original copyrightable authorship to that work.” Examples of derivative works include a translation of a novel written in English into another language, a film based on a novel, or a drawing based on a photograph. “But there’s still a lot of interpretation around the edges the courts have to do,” says Palfrey. And with advances in AI, he sees those interpretations only getting “more complicated.”

But other questions emerge when someone uses AI to deceive, especially when it involves offensive deepfake videos or dangerous disinformation.

To limit bad actors using AI, some think a reevaluation of Section 230, part of the 1996 Communications Decency Act that prevents companies from being sued based on the content their users create, could help. That may “come up sooner rather than later,” says Palfrey, “in part because Section 230 is already under such scrutiny.” In February, the Supreme Court heard oral arguments in its first Section 230 case, Gonzalez v. Google, which challenged the federal law. Justices seemed to signal they were hesitant to dismantle the legal shield, and in their May decision they sent the case back to the lower courts without ruling on Section 230.

The more robust regulatory approaches to AI taken by other countries, such as Australia and New Zealand, and by international entities such as the European Union may provide models for the United States. “Regulate, we must,” Palfrey says, “because these technologies cannot go unchecked for another quarter century.”

One of Harvard’s newest scholars has experience in that domain. In April Jacinda Ardern was chosen for fellowships at the Kennedy School’s Center for Public Leadership and at the Berkman Klein Center for Internet & Society beginning this fall. Ardern, the former prime minister of New Zealand, took on online extremism in the wake of the 2019 attacks by a white supremacist gunman who killed 51 people at two mosques in the city of Christchurch. In addition to studying “ways to improve content standards and platform accountability for extremist content online,” Ardern will also “examine artificial intelligence governance and algorithmic harms,” according to the official statement announcing her appointment.

More change ahead: buckle up

Needless to say, it’s a busy and engaging time for Harvard Law’s Jonathan Zittrain ’95, co-founder of the Berkman Klein Center, whose career is focused squarely on emerging technology. But he tempers his enthusiasm with the knowledge that material generated by AI is often unreliable, and he is quick to encourage users to take a cautious approach.

“We should expect and demand that these models aren’t offered as substitutes for search engines or more curated ‘knowledge panels,’” argues Zittrain, George Bemis Professor of International Law at HLS and professor of computer science at the university. “Today’s large language models are innately optimized for B.S. — that is, sounding right over being right — and there have been only fitful technical steps in how best to address that. And there just hasn’t been time for the public to build up fitting skepticism of results when they are presented in the form of encyclopedic content.”

Then there are bad actors, intent on using AI for disinformation and worse. But how best to move forward with AI regulation is still a hot topic of debate, and a source of worry, for Zittrain and countless others.

In March more than 1,000 tech experts concerned about AI’s potential to do harm to society signed an open letter calling for a temporary halt to future AI development. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated.

Just a little more than a month later, a computer scientist known as the “Godfather of AI,” Geoffrey Hinton, rocked the tech world when he quit his job at Google, citing his fears about the dangers of AI technology. “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told The New York Times.

In a recent talk to law school alumni, Zittrain pointed out that the seeming ability of GPT-4 (the latest upgrade to ChatGPT) to engage in logical reasoning or cognition when correctly answering a brain-teaser prompt is more than a little unsettling. “It doesn’t mean that there’s magic involved. … But nobody truly grasps why the model is as good as it is, given how it’s been built,” said Zittrain, which makes it hard to know just how much better later generations of the technology will become.

If you ask ChatGPT how AI should be best regulated, it offers up a detailed, 10-point reply involving safety, transparency, reliability, monitoring, adaptation, and much, much more. 

Much like the nuanced, multilayered answer from ChatGPT, many experts admit that the regulation of artificial intelligence will involve a range of factors and players, and will largely depend on how the technology develops.

Zittrain, author of “The Future of the Internet — And How to Stop It,” has pondered the question of cyberspace regulation for years. He admits a certain amount of technological flexibility and even abstention has had value, and notes how often innovation has been driven by the internet’s “anything not prohibited is permitted” framework. Zittrain says he and his law school colleagues Professors Yochai Benkler ’94, Lawrence Lessig, and Terry Fisher ’82 have generally favored that expansive approach over the years when it helps foster “artistic experimentation by individuals without having to contend with corporate copyright and trademark claims.”

But he is quick to add that AI tools such as ChatGPT are also speeding into uncharted territory, and that “there is simply no easy existing practice or social contract to draw upon for what these large language models are doing, and how any boundaries on development and use should be crafted and by whom.”

When it comes to the risks of AI and bad actors, Zittrain thinks tort law might offer up a useful concept or two. “It could be more or less everyone’s job in the ‘supply chain,’ the way that in the last century’s torts revolution, everyone from component makers to manufacturers to retailers can be liable for defective products. We just need to figure out what ‘defective’ really means here.”

But time is of the essence with a technology some think could be ubiquitously embedded — Zittrain likens it to the rapid, unmonitored, and ultimately regrettable installation of asbestos-containing products throughout buildings — in a decade or even less. “Tort law took about 30 years to review questions of who should be responsible when mass-produced products go awry,” says Zittrain. “It doesn’t seem like we have that kind of time here.”