Newsletter

open source and cybersecurity news

February 23rd, 2024

In this Episode:

It’s February 23rd, 2024 and time for Point of View Friday, where we cover a single topic from multiple perspectives. Today’s point of discussion is the increasing threat of deepfakes to democracies worldwide. We have perspectives from Trac Bannon in Camp Hill, Pennsylvania, Olimpiu Pop from Transylvania, Romania, Hillary Coover in Washington, DC, and Katy Craig in San Diego, California. We’ll start with Katy Craig.

 

Katy Craig
Deepfakes and Election Integrity

Katy Craig, Contributing Journalist, It's 5:05 Podcast

This week we’re diving into a topic that’s as fascinating as it is frightening: deepfakes and their impact on election security. In an era where seeing is no longer believing, deepfakes pose an unprecedented challenge to the fabric of democracy.

This is Katy Craig in San Diego, California.

Deepfakes, the eerily realistic videos and audio recordings generated by artificial intelligence, have the power to mislead, manipulate, and sow discord among the electorate.

These digital deceptions are becoming a tool for those looking to undermine trust in the democratic process. As we approach another election cycle, the threat posed by deepfakes is more pertinent than ever. Imagine a scenario where a deepfake video of a candidate saying something they never said goes viral. The damage could be done before the truth has its shoes on. The potential for chaos is limitless, from influencing voters’ perceptions to inciting public unrest. But it’s not all doom and gloom. The fight to defend election security against deepfakes is on, with researchers, tech companies, and governments joining forces.
Innovative technologies and AI detection tools are being developed to catch these fakes in the act. However, the technology behind deepfakes is also advancing, turning this into a high-stakes game of cat and mouse. Education and awareness are our best defenses. Understanding the existence and capabilities of deepfakes can help the public approach digital content with a critical eye.

It’s crucial for voters to rely on verified sources and fact-check information before sharing it. The role of social media platforms cannot be overstated. These companies have a responsibility to combat the spread of deepfakes, using AI to flag suspicious content and promoting transparency about its origin. Collaboration between tech giants, lawmakers, and civil society is essential to create a robust defense against this digital threat.
As we navigate this new reality, the importance of safeguarding election integrity cannot be overstated. The foundation of democracy relies on informed and unmanipulated decision making by the electorate. Deepfakes challenge this principle, but with proactive measures, education, and technology, we can protect the sanctity of our elections.

In conclusion, deepfakes represent a formidable challenge to election security, but not an insurmountable one. By staying informed, questioning the authenticity of what we see online, and supporting efforts to combat misinformation, we can uphold the integrity of our democratic processes.

Remember, in the age of deepfakes, critical thinking is your most valuable asset. Stay vigilant, stay informed, and most importantly, stay engaged.
This is Katy Craig. Stay safe out there.

 

Tracy (Trac) Bannon
Elections AI POV

Trac Bannon, Contributing Journalist, It's 5:05 Podcast

Are we actually at the point in history where truth and technology collide? Is AI reshaping the landscape of democracy?

Hello, this is Trac Bannon reporting from Camp Hill, Pennsylvania.

It would appear that AI wants to alter the battlegrounds of democracy, from the elections in India to the halls of global tech summits and the legislative chambers of the United States.

The intersection of artificial intelligence (AI) and democratic processes has become a focal point of global attention. Are we at the tipping point where we cannot rely on any electronic information, whether it’s text, audio recordings, or videos? Is the only safe way to get our information by showing up at the town hall to hear politicians speak live?

India’s 2024 election landscape is marred by the use of AI-generated deepfakes, with political entities deploying this technology to sway voter perceptions. This tactic underscores the pressing challenges of distinguishing between genuine and manipulated content, raising significant concerns for the integrity of democratic elections.

So, what do we do about it? In a landmark move, leading technology companies have united under an accord announced at the Munich Security Conference, committing to combat the misuse of AI in undermining democratic elections. This non-binding agreement signals a collective effort to implement safeguards against deceptive AI content, although it stops short of imposing mandatory regulations.

In addition to the largely symbolic commitment from big tech, regulatory responses to election deepfakes are on the horizon in the U.S., alongside early implementation attempts. Following incidents of AI-generated misinformation, such as a deceptive phone call misattributed to President Biden, the U.S. is grappling with the regulatory vacuum in addressing deepfakes. The Federal Communications Commission (FCC) has taken steps to outlaw AI-generated robocalls, yet comprehensive federal legislation remains elusive. Meanwhile, state-level initiatives are emerging, with laws enacted and bills proposed to curb the influence of AI in election meddling.

The commitment of tech giants to policing AI-generated content highlights the pivotal role these companies play in moderating online information and handling misinformation on both sides. Their proactive measures, including the development of detection tools and labeling strategies, are crucial in mitigating the risks posed by AI to electoral integrity.

As we navigate the complexities of AI in the democratic sphere, the stories from India, the global tech community, and the regulatory bodies in the U.S. illustrate the urgent need for a balanced approach. Balancing innovation with safeguards against misinformation requires ongoing vigilance, robust regulatory frameworks, and a commitment to ethical standards.

In addition to the valid identification of deepfakes, there need to be safeguards to ensure the deepfake police are not deployed to suppress genuine content as well. A very double-edged sword, for sure. The future of democracy in the digital age hinges on our ability to harness the benefits of AI while mitigating its potential threats.

Something to noodle on.

 

Olimpiu Pop
Disinformation – a lethal weapon of the Gen AI Age

Olimpiu Pop, Contributing Journalist

The last couple of years have felt like a siege on humanity: the lengthy COVID-19 pandemic, natural disasters, accelerating global warming. Was Gaia fighting back? As if that weren’t enough, humans took matters into their own hands and started attacking each other as if we were not brothers living under the same sky. The war in Ukraine has now marked two years since its final act began: the Russian invasion. The war in Gaza started almost five months ago as well. I would like to say those are the only two, but that is not the case. Multiple conflicts, in various forms, are unfolding everywhere on the globe.

Even though we are still killing each other like savages, the world has evolved. Technology is progressing at a staggering rate. We are bombarded with huge amounts of information in multiple formats: written, spoken and video. We consume news from multiple sources and from non-traditional outlets, blogs and social media. Yes, it is important to hear multiple voices; remember that in Ukraine we have live streams coming directly from the front lines.

But how reliable are the sources? Can we trust what we read? Can we trust that they are who they say they are? If information is power, misinformation is a lethal weapon. Propaganda has always been yet another weapon in the arsenal of war machines. And yet, with this many information sources, we should not be so easy to convince. Or are we?

In Ukraine, some of the fiercest cyberattacks were aimed at information infrastructure, trying to unplug the official news outlets. Other attacks targeted postal services.

This year is yet another crucial year in the fight for democracy, equality and the environment. In a sentence: 2024 is a crucial year in the fight for freedom, the freedom of humankind. Why is that? Elections are being organised around the globe: the US presidential election, the European Union’s parliamentary elections, Russia’s presidential election (yes, that one is more deserving of an Oscar).

Politics is a place where nothing else counts, where the goal is to put down the adversary, by any means necessary. In politics, anything has been used to hurt other candidates’ chances of winning. Now, in an era where text, sound, photography and even video can be generated in a matter of seconds from just a plain textual description, what can we still trust?

Trust your instincts. Use multiple sources to cross-validate information, and rely on official sources as much as possible. US national agencies have provided guidelines on how to protect yourself from misinformation. Use them, and help others who are less digitally literate to understand the risk as well.

Stay informed, stay alert, stay safe! Olimpiu Pop, reporting from Tirol, Austria.

Resources
https://www.cisa.gov/resources-tools/resources/election-mail-handling-procedures-protect-against-hazardous-materials
https://www.euronews.com/2024/02/20/ukraine-war-russian-disinformation-operations-zelenskyy-visits-frontline-canada-sends-dron
https://www.bmi.bund.de/SharedDocs/schwerpunkte/EN/disinformation/examples-of-russian-disinformation-and-the-facts.html

 

Hillary Coover
Spot the fake. Learn how to keep elections real in a world full of digital fakes.

Hillary Coover, Contributing Journalist, It's 5:05 Podcast

Spot the fake. Learn how to keep elections real in a world full of digital fakes.

Hi, this is Hillary Coover in Washington, DC.
In today’s digital era, deepfakes, sophisticated video and audio manipulations, pose a significant threat, especially during our election seasons.

To illustrate the power and potential implications of deepfakes, consider these two compelling stories. The first story involves a political candidate who is falsely depicted in a video participating in illegal activities. Despite being a complete fabrication, the video quickly spread across social media platforms, tarnishing the candidate’s reputation and influencing public opinion. The second story showcases the misuse of deepfake technology to create a fake speech by a world leader suggesting the declaration of war on another country. This video, though promptly debunked, caused temporary panic and highlighted the potential for deepfakes to incite real-world consequences.

To help avoid falling for deepfakes, it’s crucial to develop a critical eye and verify information through these steps:
1) Evaluate the source. Trust information only from recognized and reputable sources. Be especially skeptical of sensational news from unknown outlets.
2) Spot visual and audio clues. Look for inconsistencies in lip syncing, facial expressions, and background details. These might indicate manipulation.
3) Seek confirmation from multiple sources. True stories are typically reported by several credible news organizations. Cross-reference to verify the authenticity of any story you come across (one simple way to compare a circulating image with a verified original is sketched after this list).
4) Utilize fact-checking services. Websites dedicated to debunking misinformation, such as Snopes or FactCheck.org, can be crucial tools for verifying stories and images.
5) Understand the technology behind deepfakes. A basic knowledge of how deepfakes are made helps in identifying potential manipulations. Awareness of common manipulation techniques is key.
6) Practice critical thinking and skepticism. Approach every surprising or sensational piece of news with caution. Consider the motive behind the information being shared.
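
For readers who want to experiment with the cross-referencing idea in step 3, here is a minimal sketch in Python. It assumes you have both the circulating copy of an image and a verified original to compare against; the Pillow and imagehash packages and the file names are assumptions for illustration, not tools mentioned in this episode, and a perceptual-hash comparison is only a rough heuristic, not a deepfake detector.

# Minimal sketch (hypothetical file names): compare a circulating image
# against a verified original using a perceptual hash. A small Hamming
# distance suggests the copy is just a re-encode of the original; a large
# one suggests substantial differences (crops, splices, swapped faces)
# worth a closer look. Requires: pip install Pillow imagehash

from PIL import Image
import imagehash

def compare_to_original(original_path: str, circulating_path: str, threshold: int = 10) -> bool:
    original = imagehash.phash(Image.open(original_path))
    circulating = imagehash.phash(Image.open(circulating_path))
    distance = original - circulating  # Hamming distance between the two hashes
    print(f"Perceptual-hash distance: {distance}")
    return distance > threshold  # True means "inspect this copy closely"

# Example usage with hypothetical files:
# suspicious = compare_to_original("official_press_photo.jpg", "viral_screenshot.jpg")

Of course, this kind of check only helps when a trusted original exists; for content generated from scratch, the human steps above, trusted sources, fact-checkers, and cross-referencing, remain the main defense.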

By following these steps (evaluating sources, spotting inconsistencies, seeking multiple confirmations, utilizing fact-checking services, understanding deepfake technology, and practicing critical thinking), we can shield ourselves and our democratic processes from the disruptive potential of deepfakes.

Resources
https://www.wsj.com/tech/ai/new-era-of-ai-deepfakes-complicates-2024-elections-aa529b9e
https://www.miamiherald.com/news/nation-world/national/article285705411.html

 

Hillary Coover: Thanks for listening to Point of View Friday. If you like what you heard, please subscribe to "It's 5:05" on your favorite podcast platform. "It's 5:05" is a Sourced Network Production based in New York City. This is your host, Hillary Coover. Have a great weekend.
