
Artificial Intelligence: Managing Risk through Regulation

November 22, 2023


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This Statement on AI Risk, signed by some of the most notable figures in the industry, including OpenAI CEO Sam Altman, Microsoft co-founder Bill Gates, and US Congressman Ted Lieu, highlights the need for a greater understanding of the advanced risks of AI.

With the proliferation of artificial intelligence and its rapid advancement, new risks are being uncovered that pose significant threats to national security. The National Security Agency (NSA) and several other federal agencies are making a concerted, joint effort to combat these risks through regulation. From deepfake technology to ChatGPT, the endless opportunities that AI provides come with new threats, and mitigating that risk will require robust regulation.

Disinformation

What the Statement on AI Risk so succinctly conveys is that leaders in tech with an advanced understanding of AI have cause to believe it may result in more harm than good. Placing the risk on the same level as nuclear war sends a strong message to world leaders: act now, before it is too late. Artificial intelligence has gifted us with broader, deeper access to information, but cursed us with the ability to fabricate information as well. Generative AI is one of the leading sources of disinformation and propaganda. The Freedom House report paints a bleak picture of how governments in as many as 16 countries have used AI to generate media in an effort to skew public opinion. One example is Venezuela, where government-controlled media outlets used Synthesia, a company that creates deepfakes, to spread pro-government messaging: AI-generated videos depicted news “anchors” named Noah and Daren commenting positively on the Venezuelan economy and citing a booming travel and tourism industry.

Disinformation in the era of social media gives rise to viral propaganda, which is particularly concerning given that close to 86% of Americans admit to not fact-checking news sources on social media. It’s the perfect storm.

In the past, fake news was easily discernible from credible news sources, but the sophistication of generative AI allows text and images to mimic the look and tone of genuine news. And while governments are currently using this technology to (wrongfully) boost public perception at a national level, the rational next question in risk mitigation is: how long before this technology is weaponized to sow discord and incite violence on an international scale?

The NSA’s report highlights the threat of deepfake videos that could be used against the state and, given the rapid development of this technology, the ease with which this could become a reality. This concern is shared by the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA). Experts suggest investment in deepfake detection technology, but concede that even as detection mechanisms are deployed, new iterations of the technology will be created to circumvent them. Alongside detection technology, the clarion call for strong regulatory frameworks aims to stem the advancement and manipulation of this technology for nefarious purposes.

The call for regulation comes from within the industry

In an unprecedented move, Sam Altman, OpenAI’s CEO, approached Congress with a bid for the government to regulate AI. It is certainly unheard of for the CEO of a private company to seek oversight and regulation of his own industry, and more unusual still to do so boldly before the highest office. The proverbial call is truly coming from inside the house.

Altman and many others in the industry agree that the unchecked development of AI would be catastrophic for humanity at large. He was joined by Gary Marcus, professor of psychology and neural science at New York University, and IBM’s Chief Privacy and Trust Officer, Christina Montgomery. All three strongly advocated for AI governance at the federal level.

Addressing the Senate Judiciary Committee, the trio offered a regulatory framework in the form of a three-pronged approach. First, Altman cited the need for oversight when working with advanced AI models. He suggested a licensing model administered by a federal agency, with safety guidelines as a yardstick for compliance.

Second, Altman advocated for the development of safety standards for high-capability models. He noted the need for models to pass functionality tests to ensure that artificial intelligence produces accurate information, not biased or harmful content.

Finally, he introduced the idea of independent auditors, free of affiliation with both government and AI creators. Auditors would ensure AI creators operate within the parameters of the legislation.

Addressing concerns for national security

The National Security Agency has plans to address the risk advanced AI poses to national security. Its new AI Security Center will oversee the development of artificial intelligence within the scope of U.S. national security systems.

Using a multidisciplinary approach, the AI Security Center will collaborate with AI labs, academia, legislators, the Department of Defense, and select foreign partners. According to the Department of Defense, the center will be a hub for experts to develop risk-evaluation frameworks, methodologies, and best practices.

The NSA acknowledges that the United States remains at the forefront of AI development, so much of its risk mitigation will amount to refereeing on home turf, but the agency remains vigilant against threats from beyond its borders as well. As a central body coordinating and consolidating the efforts of different areas of AI development, the AI Security Center will be instrumental in shaping regulatory guidelines for the industry.

Conclusion

With captains of industry and the creators of AI themselves calling for regulation, governments around the world have a duty to humanity to ensure this technology never realizes the threat it poses to our world. Industry experts agree that regulatory frameworks will be the difference between artificial intelligence becoming our next revolution, optimizing and improving life, and becoming our greatest risk.

National security is a significant area of concern, where AI has the potential to play a crucial role in spreading disinformation and propaganda, and in exposing government officials (and their families) to the risk of personal humiliation, bribery, and extortion. Deepfake technology is of particular concern in this regard, and strong detection technology alongside legislation is called for.

Overall, AI regulation will require the collaboration of experts in various fields to prevent the technology from developing unchecked, without smothering creative advancement or surrendering the United States’ enviable position as a global leader in the field. AI regulation is not only the work of developers and Congress, but also of academia, international relations, and civil society as a whole.