Three Reasons I’m an AI Skeptic

With growing conversation around artificial intelligence (AI) and how we can use AI tools in our work, I constantly find myself apprehensive about what new waves of AI technology (like generative AI) are offering. Don’t get me wrong: I’m all for technological advancement, and I fully understand that AI is already all around us, but I’ve found it hard to fully get behind the use of AI in mainstream fields. Most of my apprehension comes from ethical and justice issues that, in my opinion, organizations and companies are ignoring. My main concerns about incorporating AI technology into client work fall into the following categories.

1. Privacy concerns

New AI technology is notorious for taking content from artists and writers. Artists have noticed that their artwork, and even their personal styles, have been used to train generative AI models without consent or licensing. Generative AI works by “training” a model on content fed into its algorithm; that content forms the basis of the model. There are various civil lawsuits in which companies or individuals are suing AI companies for feeding copyrighted content into their algorithms without consent; for example, The New York Times has filed suit against OpenAI and Microsoft. Another element of AI criticized as a privacy issue is that many personal devices use biometric data (such as facial recognition, fingerprints, and voice recognition) for security. We all find FaceID or TouchID very convenient, but there’s little regulation governing how AI companies use the data after it’s collected. In many cases, this leads to unregulated public surveillance, which can be harmful to those belonging to marginalized groups, leading to the next point.

2. Race and AI 

Technological advancement has historically had embedded race problems, from racial bias in camera lenses to soap dispensers that can’t detect hands with darker skin tones. Because new generative AI is built on human-made content, racism is not divorced from the technology; rather, it is an integral part of it. Many people have run experiments that uncover racism in generative AI models like ChatGPT. This Scientific American article describes an experiment author Craig Piers conducted with ChatGPT using a crime story: when Piers asked OpenAI’s model what would determine “whether a person should be tortured,” he received demonizing results, and ChatGPT itself acknowledged its racial bias. And this Guardian article highlights a research study by technology and linguistics researchers showing that ChatGPT and Google’s Gemini AI hold racist stereotypes about speakers of African American Vernacular English (AAVE).

I find this quote in a 2019 blog post written by Seema Rao quite illuminating: 

“The term artificial intelligence obfuscates the inherently human nature of the technology. It is easy to think of AI systems as being thoughtful machines, but that is a fallacy. People create the data. People choose the data to aggregate. People tell the machines what [to] do with the data. Ignoring the relationship between human action and AI has important downsides.”

3. Environmental issues

This last concern is often overlooked: the energy needed to train and run AI models is enormous. According to researchers, training a single AI program can produce 626,000 pounds of carbon dioxide, nearly five times the lifetime emissions of the average car. Google’s recent AI pursuits have increased the company’s carbon dioxide emissions by 48% since 2019. As the use of AI increases in the Global North, it is the Global South that suffers (and is already suffering) from the climate crisis. Experts at Brookings estimate that AI will intensify disastrous effects: increasing greenhouse gas emissions, consuming large amounts of energy, and requiring ever larger quantities of natural resources.

While AI can have positive effects on some aspects of our society (like automation and increased efficiency), it’s essential to keep these issues in mind when using and incorporating AI into our work. I understand the conversation is ever-evolving and that some of these issues may be resolved, but until they are, I will stay an AI skeptic.
