When lawmakers wrote the Clean Water Act in 1972, pollution was essentially defined as coming out of a pipe and into a waterway. Nearly 40 years later, that’s still the rule—while fertilizer-laden runoff and other diffuse sources of toxins are dealt with far more loosely.
If that kind of seemingly obvious change hasn’t been made to U.S. laws, what hope is there for catching up to safety questions about emerging technologies, from nanotechnology to geoengineering?
Experts call this the “pacing problem.” A group of them discussed the ethical, legal and policy implications of the issue at the recent Third Annual Meeting of the Society for the Study of Nanoscience and Emerging Technologies, held in Tempe, Ariz. The roundtable session was intended to be a chance to brainstorm about how to manage the intersection of technology, law and ethics.
Gary Marchant, a professor at Arizona State University’s Sandra Day O’Connor College of Law and executive director of the school’s Center for Law, Science and Innovation, referenced the way the U.S. Environmental Protection Agency is hamstrung under the outdated Clean Water Act. The slow pace of change is out of sync with the breathtaking pace of innovation, he said, leaving many wondering whether “hard” regulations are simply out of the question.
“Regulators are telling us they can’t deal with this problem,” Marchant said. “What can we do?”
Answering that question requires understanding a broad swath of what’s shaping our everyday lives, from chemicals to the ever-smaller circuitry that makes our smartphones work. Nanotechnology, for example, leverages the often amazing properties of super-small engineered particles (a nanometer is a billionth of a meter). These tiny materials can make airplane wings stronger, through the introduction of nearly weightless carbon nanotubes, or make exterior paint self-cleaning with nano-titanium dioxide.
There is broad agreement that these substances show great promise in a wide variety of areas, from cancer treatments to electronics. Shrinking these materials, however, sometimes changes the way they interact with the world around them, raising serious questions about their impact on health and the environment.
Scientists are racing to understand and quantify any danger, while regulators around the world struggle to protect people, animals and the environment without going overboard. As a result, there are few nano-specific laws on the books.
The dynamic raises a number of questions: Who decides what’s right? Are international standards worth pursuing, or a waste of time? Can a regulatory system that’s already straining to keep pace be reformed?
“It’s not looking good, at least at the moment, that there will be international agreement,” said Joseph Herkert, a professor of ethics and technology at ASU.
He questioned whether old ethical models are sufficient to deal with the challenges of emerging technologies, or whether new thinking is needed. Herkert told a couple of stories about early innovators in the artificial intelligence field, noting that they were blasé about some of the questions raised by their work. That should be a lesson to today’s experts, he said.
“I’m not sure we want to leave our ethical work to the people so invested in the technology,” he said.
Time isn’t on the side of policymakers, Marchant said. Again, he offered an example: The drawn-out battle between the U.S. government and Microsoft over whether the bundling of Internet Explorer with the Windows operating system amounted to an abuse of the company’s market power. It began in the mid-1990s, and by the time the case was finally settled, in 2001, Microsoft had released several new versions of Windows. While the case is considered a landmark, Marchant said it shows how difficult it is for the federal government to keep up.
“It’s going to be years before there’s any significant regulation,” Marchant said of emerging technologies. “When we have technology moving faster than the oversight, one thing we need to do is slow down the technology … that’s probably not going to happen.”
Jennifer Kuzma, a professor at the University of Minnesota’s Humphrey School of Public Affairs, questioned whether the disconnect is a serious issue. She wondered aloud about the difference between “micro” problems, like pacing and health and safety, and “macro” problems that really force us to question our values.
“Is this what we want our society to look like?” she asked. “Do we want to let those technologies go?”
Trust is a serious problem, for the public, industry and the government, she said.
“If the current mismatch between the pace of innovation and deployment is a problem, who is controlling the pace of both of those?” Kuzma asked.
Among the options, she said, are letting innovation run its course and living with the consequences. The polar opposite is slowing the pace long enough to stop and consider whether the consequences are worth living with. That doesn’t have to be the much maligned (at least in this country) “precautionary principle,” which promotes action to protect people and the environment even when the risks are uncertain. Instead, she said, it could be framed as “deliberation.”
“I am often struck by how easily we all dismiss that,” she said.
A third option, Kuzma added, is changing the speed of the system. Given today’s political environment—and its toxic entanglement with the regulatory system, here and abroad—that’s often seen as unrealistic.
David Gartner, another ASU law professor, called the debate a “false choice” between regulation that’s too rigid and a total free-for-all. He raised the idea of what’s called “principles-based regulation,” which involves settling on a set of goals, then letting more tailored guidelines take shape on a case-by-case basis. It’s been tried, with decidedly limited success, in Britain’s financial markets and elsewhere.
While acknowledging that the idea has some problems, Gartner said, “I think the principles approach holds a lot of promise.”