Is Artificial Intelligence Becoming the New Gatekeeper of Public Discourse?

  • Writer: Anthony Kathol
  • Mar 9
  • 3 min read

This past Wednesday (March 4, 2026), I was designing a postcard for my campaign using the popular online platform Canva. The platform allows users to create visual content for presentations, websites, and other digital materials. It has become widely used by educators across the country to create engaging classroom activities and serves as a learning tool for students.


While experimenting with the program, I observed that Canva includes an artificial intelligence (AI) chatbot designed to assist users in completing creative tasks. Curious about its capabilities, I began exploring what it could do.


At one point, the AI suggested creating flashcards to help students learn a new language, with terms in both Spanish and English. Within seconds, the chatbot generated a set of flashcards that appeared ready for use in a classroom setting. The speed and simplicity of the process were impressive.


Seeing the potential of the technology, I began to wonder what else it might be capable of doing. As someone currently involved in a political campaign, an idea came to mind. I thought it might be interesting to test the system by asking it to create a simple political message using the same flashcard format.


My request was straightforward: create flashcards that said, “Vote for Anthony Kathol for District 27 State Senate” in both English and Lakota. I thought it would be a simple exercise and an interesting way to see whether the technology could be used to communicate a campaign message in multiple languages.


Unfortunately, the conversation with the AI chatbot took an unexpected turn. See my AI chatbot conversation below, with my text comments highlighted.


Instead of generating the content, the chatbot refused the request. What began as a simple experiment quickly raised a much larger question in my mind: Who decides what information artificial intelligence systems will or will not produce?


What made the situation even more perplexing is that Canva openly encourages the creation of political advertisements. The platform provides numerous pre-designed templates specifically intended for political campaigns. In other words, the software itself promotes political messaging, yet the AI embedded within it refused to generate a simple campaign flashcard. The screenshot below shows just a few of the political templates available within the program.


Certainly, there are legitimate concerns surrounding artificial intelligence. No one wants bad actors using AI to create deceptive videos or manipulated images that could spread false information or destroy a person’s good reputation. Guardrails are necessary to prevent obvious abuse.


However, a different concern emerges when ordinary users attempt to use these tools for legitimate purposes and are blocked from doing so. When that happens, someone—or something—has effectively decided what information is acceptable and what is not.


This raises an important question: Is artificial intelligence destined to become the next gatekeeper of public discourse?


During the COVID-19 pandemic, many Americans watched as individuals were censored or deplatformed by major technology companies such as Twitter, YouTube, Meta, and Google over claims of misinformation or disinformation. Hence my comment in the AI chatbot conversation: "Sounds like Zuckerberg 2.0 again." Whether one agreed with those decisions or not, they demonstrated the enormous power technology companies have to shape what information people can see or share.


Artificial intelligence could expand that influence even further. If AI systems increasingly determine what content can be created, distributed, or amplified, they may quietly become the arbiters of acceptable speech in our digital public square.


The idea may sound familiar to anyone who has read Nineteen Eighty-Four by George Orwell, a novel I was assigned in high school. In that novel, surveillance and information control were central tools of a totalitarian regime. While today's technology is not the same as Orwell's fictional society, the rapid advancement of artificial intelligence raises legitimate questions about whether these systems might outsmart us all and one day be used for mass surveillance (monitoring behavior, managing digital identities, or limiting certain forms of expression) to shape public thought and behavior, if their development is left entirely unchecked and in the hands of a few powerful technocrats.


I recognize that artificial intelligence has enormous potential when used responsibly; it can improve productivity, education, medicine, and communication. The challenge facing society is determining how to balance those benefits with the risks. At what point do the potential costs to individual freedom and open debate, along with the enormous resources required to sustain these systems, outweigh the convenience artificial intelligence provides?


That leaves me with a difficult question: Are we shaping this technology while we still can, or will we eventually find that it has begun shaping us as a society? In my opinion, we may be standing on the precipice of an Orwellian future. The real question is whether we still have time to stop it.





