Don’t Trust the Government with AI: A Case Against State Regulation

I am taking a graduate policy seminar on Artificial Intelligence. Today’s topic is “AI – Ethical Problems and Legal Solutions.” My amazing professor, Dr. James Rogers, has invited the Palantir whistleblower, Mr. Juan Sebastian Pinto, who is speaking to us about the evils of AI. “AI is bad for the environment; look at how much water the data centers use,” he says. “They surveil citizens of the United States and Palestinians in Gaza. ICE is using AI to track immigrants.” He draws a deep breath and finally asks, “Who does AI benefit?”

When no one responds, he adds, “The private companies! They steal our data. They only care about profits. That is why we need more AI regulation.”

Now, I am not writing this to disagree with Mr. Pinto about whether AI should be regulated. Nor do I deny that AI can cause harm. Certainly, every technology in our society has the potential for abuse; AI is not unique in that respect. Rather, the most important question for me is: WHO should regulate AI?

Ask the person next to you whether they believe that AI is sufficiently regulated, and chances are they will say no. We conducted this experiment in my class at Cornell, and most students said the government was not doing enough to regulate AI. Far too many people instinctively assume it is the government’s responsibility to dictate what AI should and should not do. They want the state to set the guidelines and frameworks through which citizens interact with AI. I disagree.

Governments (alone) should not be left to set the standards for AI. I have two reasons for this: (1) government regulation is inefficient, and (2) government regulation disempowers the market and, by extension, undermines consumer agency.

Regarding the first point, inefficiency: government regulations often create bureaucracies that impose high compliance costs, stifle innovation, and invite regulatory capture by private interests, which often leads to monopolistic conditions (incidentally, a number of these issues are among Nigeria’s major problems). Regulations increase prices for consumers, reduce options by creating high barriers to entry, and favor large companies that can afford to comply with the many requirements, or that can pay to have those requirements ignored.

Still, inefficiency is not even the primary reason I object to AI regulation. My disapproval stems largely from the fact that government regulation fails to take consumer agency seriously. To illustrate this, let us look at Palantir, the data analytics company.

Since its founding in 2003, Palantir has been engaged in a public legitimization campaign. The company has been accused of being an instrumental arm of America’s data surveillance program. Because of these accusations, public sentiment toward the company has soured, leading to sharp criticism. In response, Palantir has released numerous articles, even beginning a series called “Palantir Explained.” Palantir is trying to legitimize itself to the American public. Why do you think that is? Why is Palantir so concerned about public perception that it is refuting articles rather than simply suing for libel?

Or we can look at Anthropic, the AI company behind the large language model Claude. Claude is governed by a constitution created internally by Anthropic; there was no government intervention. Currently, there is tension between Anthropic and the United States government (the Pentagon) regarding Claude’s potential use in autonomous weapons and mass surveillance. Apparently, Anthropic is resisting certain uses of Claude that conflict with its privately developed constitutional principles.

For those who instinctively advocate sweeping government control of AI, this situation may seem inconceivable: a private company pushing back against the state. Yet it demonstrates something important. Companies respond to incentives, and those incentives often come from consumers, not legislators.

Consumer theory tells us that rational consumers make decisions based on marginal costs and benefits. For example, an environmentally conscious consumer does not consider only the price tag (the direct cost) of a product. They may also weigh the negative utility, or personal guilt, associated with polluting the environment. Companies observe these decisions and change their priorities based on consumer behaviour.

When regulation is left to the government, power shifts toward the state. In polarized societies, regulatory priorities can fluctuate with political tides. What is restricted under one administration may be encouraged under another (think capturing a sitting president and toppling an authoritarian regime). Government incentives reflect political support, not necessarily long-term consumer values.

Yes, there must be standards for AI. But, in my opinion, the government should act as an organizer: one that brings together companies with different interests, politicians representing citizens, non-governmental organizations, and other stakeholders to negotiate workable standards and guidelines.
