Editorial credit: Ascannio / Shutterstock.com
By Linda Nwoke
The media industry is undergoing significant transformation due to the impact of Artificial Intelligence (AI), which is reshaping both production and consumption dynamics. AI technologies, such as OpenAI’s GPT-3, are revolutionizing content creation by automating article writing and report generation, and by assisting in scriptwriting for movies and TV shows. This evolution is changing how content is created, distributed, and consumed.
Netflix and Spotify, popular streaming services, utilize AI to analyze user preferences and offer personalized recommendations. Similarly, targeted advertising uses AI to deliver relevant ads based on consumer behavior. Major platforms like X (formerly Twitter) and Facebook employ AI to moderate content, detecting and removing harmful material to enhance community engagement and safety.
While integrating AI in media brings transformative potential, it also presents significant challenges, including ethical concerns about authenticity, bias, and the spread of misinformation. The current election cycle underscores the urgent need for transparency and adherence to ethical standards to safeguard citizens and society.
The Ethnic Media Center Discussion
In a recent virtual gathering organized by the Ethnic Media Center, Pilar Marrero, an editor at the organization, moderated a discussion among experts. One highlight of the discussion was how AI now allows fake images, posts, and videos to be produced with minimal effort, making it harder for voters to determine their authenticity.
Invited experts revealed how Artificial Intelligence is increasingly being used to destabilize elections through the creation and dissemination of fake content that targets ethnic voters with disinformation.
Challenges in Distinguishing Fake News from Real News
One of the experts, Jonathan Stein of California Common Cause, illustrated various ways of generating news and showed how difficult it has become to distinguish between AI-generated fake messaging and actual news.
He emphasized, “We are facing a pressing problem in our democracy, unfolding day by day and week by week.” Stein shared that the US Department of Justice recently disrupted a Russian disinformation campaign involving thousands of AI-generated fake social media profiles, and he cited several examples highlighting the rapid and alarming rise of AI-driven disinformation.
Stein stated, “We’re now on the brink of the first AI election, where AI deepfakes and disinformation have the potential to flood our political discourse.” He painted a stark picture of potential scenarios, citing examples like fake audio of election officials and misleading robocalls.
Local Solutions to Address the Situation
Proactive initiatives like the California Initiative for Technology and Democracy (CITD) are crucial in addressing threats posed by AI-generated disinformation. The initiative aims to make it harder for candidates, conspiracy theorists, foreign states, and online trolls to use these new tools to deceive voters and undermine trust.
He noted that the threats posed by generative AI, deepfakes, and disinformation extend beyond the United States, pointing to examples in Bangladesh, Slovakia, Argentina, Pakistan, and India. “We need local solutions like CITD as a project of California Common Cause,” he stated, underlining the importance of state-level responses to counter these threats in California.
Stein explained that AI is a technology that mimics human intelligence, noting its broad applications, from mundane tasks like Netflix’s recommendation algorithm to sophisticated uses such as Google’s AI system for predicting wind power. However, while it can create efficiencies, such as predicting crime locations, evaluating home loan applications, and sorting public benefits applications, it also raises significant concerns. “These create efficiencies, but they also create questions,” said Stein.
Explaining Generative AI and the Risks
According to Stein, generative AI creates new digital media from human prompts, producing images, audio, video, and text that range from harmless fun to more serious applications. He explained that AI is now an integral part of our world, likening its influence to that of the internet over the past few decades, and concluded that AI has achieved widespread adoption and revolutionary potential across industries.
Some uses of generative AI carry national security implications. For instance, a fake image of an attack on the US Pentagon circulated on Twitter in May 2023; it was reshared by the Russian news service RT and led to a significant stock market dip.
Another example is the ease of creating misleading visuals. As Stein noted, “If you are moving fast, or you’re scrolling on the phone… you’re not going to notice the difference.”
Underscoring the risks, Stein explained how AI can quickly generate convincing audio and video. With just two minutes of audio of a person’s voice, available online for most public figures, it can produce scripts spoken in that voice, making anyone appear to say anything.
He noted that many, including older generations and those viewing on small screens, may not be able to spot the difference in the voice. Stein emphasized that the dangers are more pronounced at the local level, where a deepfake of a lesser-known figure might take longer to be exposed as fake by the national press.
The Effect of Deepfakes on Local Politics
There is significant concern about deepfakes impacting local politics. Fake videos of city council members or county election officials might not be immediately exposed due to limited media coverage. Hence, local and ethnic media are critical in addressing these threats.
Additionally, an alarming number of fake county election websites spread false information, and such sites can be easily created with current technology. Fake news websites, such as the non-existent “Miami Chronicle,” are often made by Russian intelligence to disseminate propaganda.
In Stein’s view, such AI-generated racialized disinformation poses a massive threat to communities of color, immigrant communities, and low-income communities. These campaigns aim to deceive and disenfranchise voters, making it harder for them to exercise their right to vote.
Examples include deepfakes created by Trump supporters to falsely show increased support for Trump within the Black community.
Safety Keepers – Role of the State Government & Others
Jonathan Stein highlighted the growing issue of social media platforms ignoring their responsibilities to combat disinformation. He shared a screenshot from the Washington Post, noting that platforms like YouTube, Meta, and Twitter have stopped labeling or removing posts that repeat false claims about the 2020 election. “Facebook has made some of their fact-checking features optional, and Twitter has ceased using a tool that identifies organized disinformation campaigns,” he stated.
Additionally, all the platforms have laid off members of their trust and safety and civic integrity teams. Stein questioned, “If the social media platforms aren’t going to solve this problem, whose job is it to protect our communities from these threats?”
He highlighted how his organization, California Common Cause, is pushing for legislative action through three bills and one resolution to address these issues. Stein also acknowledged that protecting communities from disinformation requires a collective effort, particularly from state leaders and community members. He stressed that voters must “increase their skepticism of political information in 2024” and fine-tune their “BS meters.” Stein urged the audience to double-check, fact-check, and ensure the authenticity of information before sharing it. He stated, “We are the trusted messengers, so it’s our job to protect our communities.”
Difficulties in Monitoring Disinformation on Social Media
Participants raised a growing concern: social media platforms are abandoning their monitoring of AI-generated disinformation while still censoring user speech. They questioned the inconsistency of removing posts for using certain words while sophisticated political disinformation seems to go unchecked.
Stein explained that most platforms can easily monitor and flag certain obvious issues, like sexual exploitation or hate speech. However, nuanced political disinformation, such as pro-Russia messaging, is more difficult to track. He stated, “It’s just not as easy to monitor for the sort of sophisticated, nuanced political disinformation we’re talking about.”
Differentiating between Real and Fake Content
While it is becoming increasingly difficult to discern real from fake content, Stein urged people to take a proactive approach and double-check information before sharing it: “If you see an image that is too good to be true, if you see a video that helps one political party or one political candidate too much, or something that just doesn’t pass the smell test, verify its authenticity by looking for corroborative reports from reliable news sources.”
He also emphasized getting news directly from trusted outlets like local ethnic media, NPR, or major news agencies like the AP. He concluded, “Stop getting your news from Facebook, Twitter, Instagram, or TikTok, and instead rely on credible news creators for accurate information.”
AI-Generated Fake News & Immigrant Communities
Another expert, Jinxia Niu of Chinese for Affirmative Action, founding manager of PR Bar, the first Chinese-language fact-checking website in the US, discussed the impact of AI-generated disinformation within the Chinese American community.
She stated, “We have documented over 600 pieces of disinformation in the last 12 months across all major Chinese-language social media.” She explained that this year marks the first time they’ve seen AI-powered disinformation spreading the same messages but at a faster pace.
Another challenge fact-checking organizations face in immigrant communities is a severe lack of “timely, effective, accurate information or fact checks,” said Niu. She noted that ethnic media are limited and outnumbered, making it challenging to address AI-generated disinformation, especially when it must be translated from English.
Niu explained that her organization, PR Bar, has only three full-time staff and a dozen part-time fact-checkers, far too few to handle the volume and complexity of disinformation. “Right now, you can find tons of tools to teach you how to generate fake images and videos, but identifying and debunking these fakes remains a significant hurdle,” she said.
Niu also discussed the broader issue of disinformation campaigns involving fake accounts and AI-generated content, which require more resources than fact-checking organizations typically possess. She stressed, “You almost need an equivalent of the FBI to investigate who is behind it, how many accounts are running, and how they are related.”
Another critical challenge is the low level of AI literacy in immigrant communities, particularly among older people and those with limited English proficiency, which makes them vulnerable to AI scams and misleading information. Niu warned of the potential dangers, stating, “Imagine when these fake AI digital idols mislead their followers on how and who to vote for. How dangerous would that be?” She also noted that disinformation often circulates in messaging apps, not just on social media, complicating efforts to counter it.
Encrypted Messaging Platforms within Immigrant Communities
A particular challenge within Asian American communities is the use of encrypted messaging apps to circulate disinformation. Niu highlighted that platforms like WeChat, Telegram, WhatsApp, Signal, Line, and KakaoTalk are culturally intimate and encrypted, making it difficult to monitor and intervene in the spread of fake news. Additionally, she noted that social media influencers exploit these platforms to turn private chats into unregulated public broadcasts, bombarding followers with AI-generated phony news. She emphasized, “There is no way you can even monitor and document them because these platforms are perfectly encrypted.”
Efforts to Address Fake News and Disinformation in Immigrant Communities
Niu also addressed efforts to combat this issue, mentioning that PR Bar is part of a fact-checking research project called Co-Insights, designed to debunk disinformation within AAPI communities.
Thus far, they have developed tools like an intelligent chatbot on Telegram that automatically collects questions about the authenticity of information and provides fact-checks. However, she acknowledged that these solutions remain small compared to the scale of the problem. Niu noted, “It requires a collective effort to address it,” and praised Stein’s policy team for their efforts to hold big tech platforms accountable.
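Co-Insights has not published its chatbot’s internals, but the general pattern Niu describes, a bot that collects questions and replies with matching fact-checks, can be sketched in a few lines. The following is a minimal illustration using the open-source python-telegram-bot library; the claim database, the matching logic, and the bot token are hypothetical placeholders, not details from the actual project.

```python
# Minimal sketch of a fact-check chatbot in the spirit of the Co-Insights
# Telegram bot described above. The claim list, replies, and token are
# hypothetical placeholders, not details from the real project.
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

# Toy fact-check database: claim keywords mapped to a published verdict.
FACT_CHECKS = {
    "ballots were shredded": "FALSE — election officials confirmed no ballots were destroyed.",
    "non-citizens can vote": "FALSE — only US citizens may vote in federal elections.",
}

async def handle_question(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Reply with a matching fact-check, or acknowledge and log the question."""
    text = update.message.text.lower()
    for claim, verdict in FACT_CHECKS.items():
        if claim in text:
            await update.message.reply_text(verdict)
            return
    # No match: a real system would queue this for the human fact-checking team.
    await update.message.reply_text(
        "We don't have a fact-check for this yet; it has been logged for review."
    )

def main() -> None:
    app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_question))
    app.run_polling()

if __name__ == "__main__":
    main()
```

A production system would replace the keyword dictionary with fuzzy or semantic matching against a curated fact-check archive, but even this skeleton shows why Niu’s three-person team struggles: every claim in the database still has to be researched and written by a human first.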
Recommendations
Niu emphasized the need for more resources and investment in fact-checking and ethnic media, especially within immigrant communities. She stressed the importance of empowering these communities through digital engagement programs and providing timely, effective, and accurate information. Such support, she stated, is “needed more than ever,” underscoring the urgency of addressing the disinformation challenge in culturally and linguistically diverse communities.
Impact of Social Media on the Public
The award-winning Brandon Silverman, former CEO and co-founder of CrowdTangle, shared his insights on the impact of social media platforms on public discourse. He emphasized that these platforms have become crucial to civic and public dialogue.
In his view, journalists require tools to monitor and understand these conversations. One example is CrowdTangle, now owned by Meta, a tool that enables news organizations to see what is happening on social media.
Silverman highlighted the disproportionate harm many platforms continue to inflict on communities of color and ethnic communities, even as they expect those same communities to supply the solutions. He stated, “It is too often the case that when we’re talking through solutions… we turn to civil society organizations in those communities and ask them to be the ones who help, placing an unfair amount of burden on them.”
Reflecting on the role of social media platforms in promoting misinformation, he observed that much of this content technically violates no platform rules yet still misleads the public, drawing a distinction between false statements and misleading ones.
He stressed that fact-checking organizations know this gray area well and understand the significant challenge of combating misinformation, but that social media platforms must become more transparent to better equip journalists and the public.
Challenges to the Enforcement of Rules
Unfortunately, enforcing rules against misinformation is difficult because so much content falls into the “misleading gray area.” Additionally, much content tagged as disinformation is politically motivated and spread primarily by elite, trusted sources within media ecosystems.
Silverman noted Steve Bannon’s strategy of “flooding the zone” with untrustworthy content to create confusion, stressing the importance of not overestimating the impact of bots or foreign accounts to avoid playing into disinformation campaigns’ hands.
A Framework for Understanding ‘Wrong’ Information
Yet, there is a framework for understanding misinformation and disinformation based on three pillars:
- The supply side focuses on the creators of disinformation.
- The demand side examines why specific messages resonate with communities.
- The mechanisms that facilitate both supply and demand, such as media dynamics.
In his view, all three aspects must be considered when developing appropriate policies, with particular emphasis on writing and improving policies that effectively combat disinformation.
Potential Solutions
Given the situation, Silverman proposed practical solutions, such as implementing a digital advertising tax on large platforms and using the revenue to fund ethnic media and local journalism.
According to Silverman, “We need more resources going to folks like you (local media).”
Furthermore, Silverman stressed the need to promote a culture of double-checking content and paying close attention to influential accounts spreading narratives rather than focusing solely on bots or foreign-controlled networks.
Addressing the needs of resource-strapped communities, he suggested focusing on recurring narratives rather than individual pieces of content when pushing back against false narratives.
Moreover, he emphasized the importance of collective efforts through collaboration and resource sharing, such as initiatives like the Knight Elections Hub and the Brennan Center’s messaging app project. “Thinking about what are the spaces where groups can come together and try and work together on this stuff,” he stated.
Silverman also acknowledged the critical role of journalism and the news industry in ensuring an informed public, stating, “It is the role of everyone in this room to help shape a lot of where we go.”
Effect of Social Media’s Failings on Communities of Color
Concerning how social media’s shortcomings disproportionately impact communities of color, Silverman drew on his experience at CrowdTangle. He pointed out that in some regions, such as Southeast Asian countries like Myanmar and Sri Lanka, fewer staff were dedicated to understanding local political nuances, which is crucial for enforcing platform policies.
He explained that enforcing even a platform’s own policies can require an understanding of the local political context.
He also noted that big platforms like Meta lacked classifiers in many necessary languages, further complicating the enforcement of policies in diverse regions. In the US, the challenge is more one of inadequate resources and cultural understanding; he stated that policy violations were more prevalent in Spanish-language content than in English-language content.
Moving Forward: Ways Local Media Can Monitor Information
Even so, the experts say that small and medium-sized local media organizations can monitor local information and continue to serve the public despite limited resources. Stein emphasized the importance of traditional reporting.
He advised that if something seems suspicious, such as a political mailer or robocall, reporters should investigate its authenticity and source. “The response to fake news is real news,” Stein asserted, highlighting that the future of politics and democracy depends on diligent and honest reporting.
Niu focused on the necessity of training, especially for small organizations involved in fact-checking, and the need to prepare for a changing media landscape, including the challenges posed by AI. “We need to train our staff quickly,” she stressed, emphasizing that local news organizations must educate themselves on AI and the latest tools to inform the public properly.
Silverman suggested that media organizations expand their presence on messaging platforms like Telegram and WhatsApp, noting that significant civic discourse has moved away from platforms like X and Facebook. “Trying to get into as many Telegram groups as possible, as well as WhatsApp groups, can be a great place to stay on top of important new stories and narratives,” he said.
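Silverman did not prescribe particular tooling, but a newsroom that has already joined public Telegram channels can automate this kind of monitoring. Below is one possible sketch using the open-source Telethon library; the API credentials, channel usernames, and keyword list are hypothetical placeholders a newsroom would supply, not recommendations from the panel.

```python
# Minimal sketch of newsroom keyword monitoring across public Telegram
# channels, in the spirit of Silverman's advice. Credentials, channel
# names, and keywords below are hypothetical placeholders.
from telethon import TelegramClient, events

API_ID = 12345            # obtained from my.telegram.org
API_HASH = "your_api_hash"

# Public channels the newsroom has already joined.
CHANNELS = ["@example_local_politics", "@example_community_news"]

# Election-related terms worth a human look when they appear.
KEYWORDS = ["ballot", "voting machine", "polling place", "deepfake"]

client = TelegramClient("newsroom_monitor", API_ID, API_HASH)

@client.on(events.NewMessage(chats=CHANNELS))
async def watch(event) -> None:
    """Flag messages containing any tracked keyword for reporter review."""
    text = event.raw_text.lower()
    hits = [k for k in KEYWORDS if k in text]
    if hits:
        chat = await event.get_chat()
        print(f"[{getattr(chat, 'username', '?')}] matched {hits}: {event.raw_text[:120]}")

client.start()  # prompts for a phone number and login code on first run
client.run_until_disconnected()
```

Note that a script like this only works for public groups and channels the account has joined; as Niu pointed out, end-to-end encrypted private chats remain out of reach, which is precisely why she argues human trust networks inside those communities are irreplaceable.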