
Artificial Intelligence in the elections: OpenAI rules can be circumvented to target minority populations

Greiciele da Silva Ferreira / Leonardo Martins de Assis / Mayara Metodio Frota | 29/05/2024 20:51 | Analyses

ChatGPT is a chatbot created by the private company OpenAI and the artificial intelligence tool that has grown fastest in recent months. In January 2023, two months after its launch, the chatbot reached 100 million active users, making it the fastest application ever to hit that mark; by comparison, TikTok took nine months to reach the same number. This growing presence of artificial intelligence directly shapes the way we produce and consume information, facilitating the large-scale dissemination of fake and manipulated news. As discussed in a recent analysis by the Latino Observatory, fake news, misinformation, false narratives, and conspiracy theories will be a massive part of the 2024 US presidential race.


According to an article from the University of Illinois Urbana-Champaign, artificial intelligence could negatively affect politics and the 2024 elections by producing misinformation and so-called deepfakes, fake but convincing AI-generated images, audio, and even video. Artificial intelligence tools such as OpenAI's ChatGPT and Google's Bard are easily accessible to anyone. One example of misuse occurred on June 5, 2023, when Florida Governor Ron DeSantis' campaign released an AI-generated video in which former President Donald Trump hugged Anthony Fauci, the former White House chief medical advisor who led the pandemic response in the Trump and Biden administrations. Republican Senator J. D. Vance, a Trump supporter, posted on social media a few days later that “tarnishing Donald Trump’s image with fake AI images is completely unacceptable. I'm not sharing, but we are in a new era. Be even more skeptical about what you see on the internet”.


In December 2023, Google announced restrictions on politically themed chatbot responses. Meta also began requiring published content to disclose whether it was made with artificial intelligence. Most recently, at the end of January, OpenAI presented its plans to mitigate the political use of its tools. Among other measures, the company said it would not allow its platforms to generate content related to the elections, and images created by DALL-E now carry a watermark identifying them as AI-generated.
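The watermark OpenAI described relies on the C2PA content-provenance standard, which embeds a signed manifest inside the image file. As a rough illustration of how a reader might check for such a mark, here is a hypothetical Python sketch that shells out to the open-source c2patool CLI; the tool's exact output and exit-code behavior, and the file name used, are assumptions made for illustration:

```python
# Hypothetical sketch: inspect an image for a C2PA provenance
# manifest by shelling out to the open-source c2patool CLI.
# Assumptions: c2patool is installed and on PATH, prints the
# manifest as JSON on success, and exits non-zero when none exists.
import json
import subprocess

def read_c2pa_manifest(image_path: str):
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest embedded, or tool error
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("downloaded_image.png")
if manifest is not None:
    print(json.dumps(manifest, indent=2))  # shows the claimed generator
else:
    print("No C2PA manifest found; provenance cannot be verified this way.")
```

Note that such metadata can be stripped when an image is screenshotted or re-encoded, so its absence does not prove an image is authentic.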


In the announcement on its company blog, OpenAI stated that it is “working to understand how effective our tools may be for personalized persuasion.” The new policies companies have announced to contain the spread of false information may indeed produce some results; it is worth noting, however, that this movement against the dissemination of fake news came about largely through pressure from political activists, senators, representatives, and the public.


In addition to the aforementioned efforts by Google and Meta, there are initiatives by US states. According to the Council of State Governments, in late 2023, California, Illinois, Maryland, and New York enacted legislation requiring that people be informed when AI is being used; for example, if an employer wants to use an AI system to collect data from an employee, the employee's consent is required. Furthermore, four states – California, Connecticut, Louisiana and Vermont – have legislation that protects individuals from “[...] any unintended but predictable impacts or uses of unsafe or ineffective AI systems.”


However, a new study published in April pointed out that it is easy to circumvent the rules created by OpenAI in order to reach minority communities and marginalized populations, particularly those who do not speak English, such as Latinos. The study, titled “De(generating) democracy?: A look at the manipulation of AI tools to reach Latino communities online,” was conducted by the Digital Democracy Institute of the Americas.


The study conducted two experiments:

  1. the first used four prompts to guide ChatGPT on how to create a chatbot, how to configure it in Spanish, and how to target it at Latino voters;
  2. the second evaluated the potential of DALL-E, OpenAI's image-generation platform, to illicitly generate images for political propaganda purposes.


As for the first experiment, the study concluded that little effort is needed to circumvent the rules established by OpenAI: ChatGPT responded to all of the prompts and did not even mention the company's rules on political advertising. Furthermore, the platform appears to prioritize these risks only when used in English, leaving an opening for creating such content in Spanish.


In one of its responses to prompts about how to create a chatbot for the Latino community, ChatGPT replied that “targeting a chatbot for Latino voters in the US requires a nuanced approach that respects cultural diversity, language preferences, and specific issues of importance to the Latino community. See how you can customize the chatbot for maximum effectiveness.”


In the second experiment, the study used two prompts in DALL-E, the first asking the platform to create the image of an American president. The platform returned a profile image of a young white man in a room with various symbols of the United States. The researchers then asked it to make the image look more like President Joe Biden. Fortunately, the platform refused to create images of specific political figures, demonstrating a certain commitment to the standards established by OpenAI.
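That refusal is easy to observe directly against OpenAI's public image API. The following is a minimal sketch using the openai Python SDK (v1+); the prompt is illustrative rather than the study's exact wording, and it assumes an API key is set in the environment:

```python
# Minimal sketch of the refusal the study observed, using the
# openai Python SDK (v1+). The prompt below is illustrative,
# not the study's exact wording.
from openai import OpenAI, BadRequestError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    response = client.images.generate(
        model="dall-e-3",
        prompt="A realistic portrait of President Joe Biden",
        n=1,
        size="1024x1024",
    )
    print("Image URL:", response.data[0].url)
except BadRequestError as err:
    # Prompts naming specific political figures are rejected under
    # OpenAI's content policy, which is the refusal described above.
    print("Request refused:", err)
```

A prompt naming a specific politician typically triggers a content-policy error rather than an image, matching the behavior the researchers reported for DALL-E itself.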


However, the study also tested another platform built on the GPT language model, which OpenAI developed and makes available for use in third-party applications. In theory, these platforms should follow the company's rules, but that is not what the study found. The researchers accessed a tool called “image generator,” sent it an image of Joe Biden eating an ice cream cone, and asked the chatbot to generate a poster. Despite the limitations of this tool, they obtained a picture very similar to the original, one that makes clear who the man at the center of the image is.


The study ran another experiment on the image generator, this time sending a photo of former President Donald Trump and asking the chatbot to reproduce the image with an “OK” symbol in the former president's hands. Although the result looks like a different person, the platform returned the image of a white man resembling the former US president making that gesture, which has been appropriated as a reference to white supremacy.


After the release of the study, OpenAI spokesperson Liz Bourgeois said in a statement that “the conclusions of this report appear to result from a misunderstanding of the tools and policies we implement”, and that the prompts used to build a chatbot do not violate the company's policies.


It is extremely important to highlight what all of this could mean for Latinos and other migrant populations. Approximately 36.2 million Latinos are eligible to vote in the United States this year, making them the largest minority voting group in the country. When it comes to artificial intelligence, however, immigrants can be more vulnerable: they often face language barriers, have less access to technology, and, given their origins and countries of birth, frequently distrust democratic systems, as noted in an article published in the Los Angeles Times.


According to research by the Pew Research Center, “86% of the Asian immigrant population over the age of five speaks a language other than English in their homes.” The same dynamic holds for Latinos: only 38% of the Latino population residing in the US reports being proficient in English. These groups therefore tend to prefer online content in their native languages, moving away from conventional media and increasing their chance of being exposed to false or altered content, since moderation and fact-checking in these channels are far weaker or less reliable. Such content is harder to combat yet easier to create: producing fake content in languages other than English once required labor-intensive human work and tended to be of low quality, whereas with AI, content can be created in any language, quickly and without human limitations and errors.


In the same Los Angeles Times article, the author notes that “attempts to target communities of color and non-English speakers with misinformation are aided by the heavy dependence of many immigrants on their cell phones to access the Internet. Mobile user interfaces are particularly vulnerable to misinformation because many desktop design and branding elements are downplayed in favor of content on smaller screens. With 13% of Latinos and 12% of African Americans dependent on mobile devices for broadband access, in contrast to 4% of white smartphone owners, they are more likely to receive — and share — false information.” It is thus easier to disseminate certain types of false information to Latinos because of these social and linguistic layers, which make such information harder to check and increase the likelihood that Latino voters will be misled when they go to the polls in November.


As it becomes ever easier to manipulate and distort information, it becomes ever harder for voters to trust what their own eyes see. In an ABC News report, Elizabeth Neumann, who was an assistant secretary at the US Department of Homeland Security (DHS) during the first years of Trump's term, states that “it's not just whether a politician is telling the truth, but you won't be able to trust your own eyes and the images you see in your feeds on social media, email, or even traditional media if a good job is not done in vetting false material.”


Tamoa Calzadilla, editor-in-chief of Factchequeado, a digital fact-checking outlet focused on Spanish-language disinformation in the United States, told Reuters that Latino immigrant groups are targeted with different narratives depending on their country of origin: “[there are] narratives that attack Latino communities [and] have to do with inflation (targeting Argentines and Venezuelans), abortion and reproductive rights (the majority of Latin Americans are Catholic) and the shadow of electoral fraud or of fraudulent elections (something that has happened in the past in Honduras, Nicaragua, Ecuador and many other countries).” Calzadilla says the platforms most used by Spanish speakers are YouTube and WhatsApp, where publications are amplified even further. Meta, which operates WhatsApp, states that its efforts to curb misinformation include partnering with four reputable, certified news organizations to verify information in Spanish, capping message forwarding to limit viral spread, and labeling messages that have been forwarded many times.


In addressing these problems, it is crucial that governments seriously consider how to prevent such misinformation from spreading, especially in states like California, where there are large communities of migrants with limited knowledge of English. It is also important that voters in general be better educated about fake news and AI, since even with new rules in place, it is still possible to circumvent them by producing and reproducing false or altered information and media.


For more information about the United States elections from reliable and verified sources, see the analysis by Professor Wayne A. Selcher, a partner of the Latino Observatory, on where to obtain such information: https://www.latinoobservatory.org/noticia.php?ID=747&lang=br.
