Blumenthal, Hawley AI framework could damage AI industry, violate First Amendment 

As the nation struggles to keep up with the rapid evolution of artificial intelligence (AI), it's important to find the right balance between preventing AI risks and fostering innovation. The bipartisan AI framework from Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.), despite its laudable intentions, such as protecting children and promoting transparency, threatens to stifle AI innovation by regulating AI development instead of just its use. It may even violate our First Amendment rights.

Regulating the development of AI is not a good idea. AI development is protected speech and must be subject to the same restrictions on prior restraint as other forms of speech. As a means of expression, writing software in code is no different from writing narrative prose. AI-powered systems that convert plain text into software have been developed, underscoring the similarity between the two forms. Regulating AI development would thus mean regulating the language used to describe software. Regulators would also find it difficult to distinguish AI from other types of software.

Beyond First Amendment concerns, this type of regulation is bad for America. We want engineers and scientists in the lab innovating, not filling in form after form for agency approval. AI can help paralyzed people, stop crime, detect cancer, reduce veteran suicides, detect and prevent diseases, protect firefighters and soldiers in the field, assist the elderly, combat sextortion, prevent heart disease and cyberbullying, protect critical infrastructure, and improve road and construction site security. The potential benefits of AI extend far beyond these. The costs and burdens of regulation will slow innovation, and they can have a devastating effect on AI startups that lack the resources, knowledge, and time to navigate complex regulatory frameworks.
Congress would be foolish to stifle AI development with regulations at a time when federal organizations like DARPA are dedicating resources to encouraging it.

Moreover, AI regulation would not only harm America's AI sector but also accomplish nothing beneficial. Any AI that could have a negative effect will simply be developed elsewhere. In the worst case, the United States will lack the protections it would otherwise have had if innovation were encouraged. At best, we may fall behind other nations, lose our competitive advantage, and fail to understand AI technologies developed in other countries. Nor do we need a new AI regulator. Beyond the cost to taxpayers and the duplication of effort and confusion it would create, AI should not be treated differently than a human (or another piece of software) making the same decisions.

Protected class discrimination should be proscribed. Period. There is no need to treat an AI that discriminates differently from a human or a non-AI automated software process. The same argument applies in many other areas. Consumers should be protected from unfair trade practices whether or not AI is involved. Drugs and medical devices must be safe whether they are designed or implemented using AI or not. Luckily, we already have agencies that can do these things, such as the Equal Employment Opportunity Commission, the Federal Trade Commission, and the Food and Drug Administration. It makes no sense to have one regulator that is an expert in AI and another, covering the same area, that is not; the result could be a confusing mess of potentially conflicting regulations.

And how will we handle the next technology that affects these areas? Create yet another agency? Instead of creating a new agency, Congress should allocate funding to each relevant agency to bring together experts from government, the private sector, and academia to identify any laws that need updating to account for AI. Congress should also provide funds to each agency to encourage AI development within its own area. These funds could support companies developing AI technologies for the public benefit and help America maintain its leadership in AI. Finally, Congress should invest heavily in the American educational system to prepare people to use and develop AI systems. In the long run, this will be a better use of resources than creating a new federal bureaucracy.

Jeremy is the director of North Dakota State University's Cybersecurity Institute. He is also a senior faculty fellow at the Challey Institute and an associate professor in the NDSU Department of Computer Science. These opinions are solely his.