Rand Paul on Government Censorship and Use of AI


In 2021 the Pentagon, through the Defense Advanced Research Projects Agency (DARPA), asked for proposals for real-time, comprehensive tools that establish ground truth for how countries are conducting domestic information control.

DARPA’s goal in developing AI technology for measuring the information control environment was to help the US government better understand how digitally authoritarian regimes repress their populations at scale over the internet via censorship, blocking, or throttling.

Of course, the solicitation made it clear that the Pentagon did not want the proposals to look at the activities of the US government. The Pentagon and the US government as a whole enjoy professing moral superiority over authoritarian governments when it comes to upholding basic democratic values, even as officials erode them at home.

Indeed, American politicians have no qualms about criticizing foreign governments like Russia and China for their suppression of civil liberties and efforts to eliminate dissent. Yet, there seems to be a complete unwillingness to have an honest conversation about the disturbingly similar actions our own government is actively engaged in and financing.

For decades, the Pentagon and other federal agencies have been quietly partnering with private organizations to develop powerful surveillance and intervention tools designed to monitor and influence narratives on social media.

For example, a 2021 Pentagon program called Civil Sanctuary sought to develop AI tools to scale the moderation capabilities of social media platforms and create what it described as a “more stable information environment.” In other words, the goal of this Pentagon program was to exponentially multiply the government’s ability to coordinate censorship of online speech.

The Pentagon has invested millions of taxpayer dollars to develop these tools, not only for use by social media companies but also by the intelligence community and law enforcement.

Meanwhile, the Department of Commerce (DOC) is awarding million-dollar grants for cognitive research into how the U.S. government can foster public trust in AI.

So, while the federal government is using taxpayer dollars to develop AI to surveil and monitor Americans’ online speech, it is also spending money to figure out how to get you to trust the AI.

Over the last year, starting with the Twitter Files, journalists have exposed the deep coordination between the federal government and social media platforms regarding content moderation decisions and policing the speech of Americans.

As Michael Shellenberger, a respected journalist who exposed the Department of Homeland Security’s collusion with social media companies to deplatform individuals questioning COVID-19 mandates, rightly pointed out, “The threat to our civil liberties comes not from AI but from the people who want to control it and use it to censor information.”

Last week, the Fifth Circuit affirmed that the government likely violated the First Amendment by coercing social media companies into removing speech the government disagreed with regarding the origins of COVID-19, pandemic lockdowns, vaccine efficacy, and the Hunter Biden laptop story.

The court cited numerous examples of U.S. government officials engaging in domestic information control on social media. Government officials demanded that platforms implement stronger COVID misinformation monitoring programs, modify their algorithms to avoid amplifying misinformation, target repeat offenders, and magnify communications from certain trusted sources.

After one meeting with federal officials, one platform committed to reducing the visibility of content that was skeptical of the government’s COVID vaccine policy, even when it did not contain actionable misinformation. Facebook likewise promised to label and demote a popular video after officials flagged it, even though the company acknowledged the video did not qualify for removal under its policies.

I fear that we are likely in only the beginning stages of understanding the extent of the federal government’s involvement in the content moderation decisions of private social media platforms.

What we do know is that our government is funding the development of powerful artificial intelligence tools for monitoring and shaping online discourse.

Now, I want to be clear. AI is not inherently malicious. It has the potential to revolutionize basic aspects of society, from healthcare to education.

However, in the hands of an unchecked government, AI can be weaponized as a tool to suppress the fundamental values our country was founded upon — the open exchange of ideas, the freedom to question, and the right to dissent. As AI continues to develop, I remain committed to conducting extensive oversight on this issue and working diligently to protect the First Amendment rights of American citizens.

Rand Paul is a U.S. senator for Kentucky and serves as the ranking member on the Committee on Homeland Security and Governmental Affairs.
