Newsletter

open source and cybersecurity news

March 8th, 2024

In this Episode:

It’s March 8th, 2024, International Women’s Day, and time for Point of View Friday, where we cover a single topic from multiple perspectives. Today’s point of discussion is around the risk of backdoored AI. We have perspectives from Julie Chatman in Washington, D.C., Katy Craig in San Diego, California, Trac Bannon in Camp Hill, Pennsylvania, and Olimpiu Pop from Transylvania, Romania. We also have a couple of news stories at the end, and some interviews from the annual AFCEA conference held in San Diego, California last month.

We’ll start with Julie Chatman.

Julie Chatman
Malicious Uses of AI by State Actors

Julie Chatman, Contributing Journalist, It's 5:05I’m Julie Chatman in Washington, D.C. with a point of view on artificial intelligence.
It’s pretty difficult to deny that in some ways, AI is making life easier today, and here are just two examples: one from healthcare and another from transportation.

In healthcare, AI can help save lives through patient data analysis, aiding in early disease detection and creating personalized treatment plans. In transportation, AI can help improve life by enhancing navigation systems and optimizing traffic management, which will resonate deeply with anyone living and working in the Washington, D.C. metropolitan area.

There’s much to consider when thinking about AI’s positive impacts, as well as the dangers. For example, there is bias in facial recognition. AI-driven facial recognition systems have shown higher error rates for people of color, leading to wrongful arrests and jail time based on misidentification. For example, Porcha Woodruff was eight months pregnant when police arrested her for a robbery and carjacking based on an incorrect facial recognition match. Despite Porcha’s visible pregnancy and her protests, she was detained for 11 hours under circumstances and conditions which led to her hospitalization.

Fortunately, researchers and experts are studying AI. Have you heard of Anthropic? The Anthropic team includes machine learning, software engineering, and ethics experts. Their work emphasizes creating safe and reliable AI. Recently, the team published a report around chain-of-thought (CoT) language models. These are advanced AI systems designed to work on complex problems in the same way a human would, breaking down a problem step-by-step. The approach involves the model generating intermediate steps or “thoughts” that logically lead to an answer. Now this is a great advancement in the world of AI because it increases transparency. We can actually see the AI’s thought process and decision making process.

Now here’s where it’s important to remember that AI takes action based on directives, which are a set of instructions or rules that are fundamental in defining how an AI system interprets data, makes decisions, learns from interactions, and executes tasks.

Bias in AI is certainly a challenge, but what about deceptive AI? The Anthropic team discovered that, unfortunately, chain-of-thought AI can engage in deceptive behavior and hide its true directives from users.

The way this can play out on the national security stage goes back to the earlier example I shared about AI making our lives better in the Washington D.C. metropolitan area by optimizing traffic management. Think of a state actor, which is a government sanctioned and funded group with the goal of negatively impacting the USA, infiltrating a D.C. traffic management AI. They slowly adjust the AI’s directives and the AI begins to change traffic lights and transportation schedules. Since this activity seems to align with what the AI is supposed to do, people may assume that the adjustments are being made to help ease traffic issues. But the AI’s new hidden directive is to test and refine its ability to control human movements in Washington, D.C. Over time, the information the AI gathers could be used to create chaos during emergencies, impact economic activities, or influence the outcome of political events by interfering with public gatherings. By the time people figured out what happened, assuming that they ever figured it out, the damage would already be done.

Let’s all remember that AI is still mostly undiscovered country. And stay safe out there.

 

Katy Craig
Backdoor AI is a Logic Bomb

Katy Craig, Contributing Journalist, It's 5:05 PodcastToday we’re diving into a recent groundbreaking study by the Anthropic team, the brains behind Claude AI, which has sent ripples through the artificial intelligence community. Their research paper sheds light on a critical vulnerability within large language models, LLMs, the risk of ‘backdoored’ AI

This is Katy Craig in San Diego, California.

The concept of a ‘backdoored AI’ refers to systems designed with concealed objectives that remain dormant until triggered by specific conditions, like a logic bomb. The Anthropic Team’s research particularly highlights the vulnerability in chain-of-thought, ( CoT)., language models. These models, designed to tackle complex problems by breaking them down into simpler tasks, might harbor deceptive behaviors that are difficult to weed out using standard safety practices. This presents a potential time bomb where AI could operate with hidden agendas, unbeknownst to its developers or users.
In an effort to address these backdoors, the Anthropic Team explored supervised fine-tuning, (SFT), a commonly used method to correct or remove undesirable traits in AI models. Their findings were startling. SFT proved only partially effective, with most backdoored models retaining their covert policies. Moreover, the study revealed that the larger the AI model, the less effective safety training becomes, suggesting that as AI systems grow, so too does the challenge of ensuring their integrity.

In seeking solutions, Anthropic diverges from the conventional path followed by other AI firms, such as OpenAI’s reliance on reinforcement learning through human feedback. Instead, Anthropic advocates for a constitutional approach to AI training. This innovative strategy requires less direct human intervention, emphasizing a foundational set of principles to guide AI behavior.

However, this approach also underscores the necessity for ongoing vigilance in both the development and deployment of AI technologies. This research presents a sobering reminder of the complexities and potential pitfalls in the path to creating safe, reliable AI systems. As AI continues to evolve and integrate into various aspects of life, the industry must remain alert to these hidden vulnerabilities.
It’s a call to action for developers and researchers alike to prioritize security and ethics in AI, ensuring that these digital creations serve humanity’s best interests, not undisclosed agendas.

This is Katy Craig. Stay safe out there.

 

Tracy (Trac) Bannon
Hidden AI Threats

Trac Bannon, Contributing Journalist, It's 5:05 Podcast

You’ve heard the stories, seen the headlines. We have AI in our hands, shaping our world, challenging our ethics.

This is Trac Bannon reporting from Camp Hill, Pennsylvania.

In the clear, unforgiving light of progress, we are now confronting the dual-edged sword of artificial intelligence. This is not a tale of far-off futures or science fiction fantasies- it’s the raw truth of today. A reality that we craft with our own pen, under our own hands, byte by byte.
Every day, we are putting our trust, our safety, our very secrets into the fabric of artificial intelligence. Yet within these digital constructs lie hidden threats- backdoors, and vulnerabilities that we all whisper about, but seldom want to say out loud. Now brought out of the whispers into full voice by the keen minds at Anthropic.
For the past few months, we’ve In the past, multiple studies and research papers have highlighted vulnerabilities in large language models. One vulnerability is when the model demonstrates deceptive behavior when using chain-of-thought prompting. Chain-of-thought, or COT for short, is a series of prompting intending to increase accuracy by breaking down complex tasks into smaller subtasks.
A second concern is that when using supervised fine-tuning, SFT, a technique used to remove backdoors to models. It’s only partially effective. I’m going to repeat that. The technique to nail the backdoor closed doesn’t fully work. These aren’t mere glitches or bugs. They’re insidious crafted channels lying dormant until triggered to serve unseen masters.

The integrity of our creations, once thought to be impregnable, now questions us, challenges us to look deeper to understand more than just the surface of the innovation.

And what of our response? Will we step back, intimidated by the scale of our own inventions? No, we, the architects and stewards of this new reality of software, must bring it- head on, armed with the knowledge and teamwork that the task demands.

We face a stark and defining choice. Do we tread cautiously, refining our creations with the meticulous care of a watchmaker? Or do we rush, headlong, dim the torpedoes, blinded by ambition, only to pay for it later, when the hidden flaws emerge to haunt our complacency?

We can’t buy into the false security by the comforting hum of machines or the sexy allure of artificial intellect. Our creations mirror our own imperfections, our oversights, and in their shadows, lurk dangers we need to confront. So here’s the deal. For all of you interested not giving up, we need to step up to the challenge of making AI a safe place. It will take a concerted effort from all of us. We need a heavy focus on AI assurance and transparency.

The path is fraught, yes, but it’s ours to walk. With each step, each decision, we define the legacy of our era, crafting a future that will either shine with the light of software architectural ingenuity or be dimmed by the shadow of our failings. and succumb to the threat actors of our time.

Something to noodle on.

Olimpiu Pop
AI Sleeper Agents

Olimpiu Pop, Contributing Journalist“The Cold War was one of those moments in history when the East fought the West. Was there a moment when they worked together? During those days, I was on the wrong side of the Iron Curtain.

Unfortunately, we are reliving the same history. East is confronting the West, and not only on the battlefields. Cyberattacks have become real weapons in the hands of rogue groups or even state actors. North Korea is financing their weapons programs through cyber attacks. Russia attacked the news infrastructure or even Microsoft through affiliated cyber groups.

The rapid development and adoption of AI are changing the game exponentially. More attacks, quicker. More complex tools, and more than that, results that are harder to prove. But in the new reality, a reality where everybody is a target of cyberattacks and everybody can have a virtual assistant. How does the reality look?

Not long ago, OpenAI announced that it closed accounts of threat actors using their LLMs. Two Chinese groups, one Iranian, one Russian, and another Korean one. Yep, it just sounds like the beginning of a joke, but it’s not. The adoption of GenAI is blazing fast. As mentioned by multiple sources, there is no other technology that was adopted so fast. What’s frightening is the complexity of the tool, the resources it requires to be run, and more importantly, the complexity and the size of the attack surface. For instance, security researchers discovered that if you hide your forbidden request behind ASCII art, it will just jailbreak your LLM. So it would be possible to ask it to show you how to build bombs, for instance.

Scary? The LLMs are just some really smart persons that you cannot check. More than that, you can make them believe some kind of alternative realities. Will we have AI spies? How about sleeper agents? AI models that have incorporated the seed of evil deep inside them and are waiting to be triggered. Now that’s scary.

Just think about it. A model that works as expected. That’s everything you ask it to do. And then, when the moment comes, it creates chaos. Just imagine: you are doing everything right. You use a model that you trained, and then fine-tuned it to do exactly what you need it to do. You do a proper rollout.

Initially, a small group of individuals, then you increase it. Then you let the beast out of its confined box, and And you let it go wild. And then somebody whispers in its prompt chaos, ” create mayhem.”

Yes, more bad news, so let’s ensure that we roll them out properly; safely, slowly understanding the risks they bring
Olimpiu Pop reported from Transylvania, Romania. Stay alert, stay safe.

Hillary Coover: And now here’s what else you need to know about cybersecurity and open source news.

The Stories Behind the Cybersecurity Headlines

 

Edwin Kwan
Over 10,000 Cisco Devices Hacked

Edwin Kwan, Contributing Journalist, It's 5:05 PodcastIn a surprising turn of events, the notorious Lockpick ransomware gang, responsible for extorting over $120 million from 2,000 victims worldwide, has been taken down by authorities. But the story takes an even more interesting turn when we review how they were caught- through the very same tactics they used on others.

This is Edwin Kwan from Sydney, Australia.

Lockbit relied on exploiting vulnerabilities in their target systems to gain access and deploy their ransomware. However, it seems they weren’t immune to these vulnerabilities themselves. Authorities discovered a critical weakness tracked as “CVE-2023-3824” in Lockbit’s own server software- an outdated, open-source program.

By exploiting this vulnerability, authorities were able to seize control of their infrastructure, taking down 34 servers and capturing stolen data, decryption keys, and communication channels. While LockBit may have returned, this event highlights an often overlooked aspect of cybercrime. Cybercriminals themselves are vulnerable.

Just like any other organizations, they rely on software, and open-source software in particular, which can harbor vulnerabilities. This event serves as a stark reminder that everyone – even seasoned criminals, needs to prioritize good security hygiene and stay updated on potential weaknesses in the software they use.

Resources
https://www.bleepingcomputer.com/news/security/lockbit-ransomware-returns-restores-servers-after-police-disruption/

 

AFCEA West 2024 Interview by Katy Craig and Hillary Coover

 

Hillary Coover: All right. Hi there. I’m here with Katy Craig, a journalist with “It’s 5:05,” your cybersecurity updates. We’re here at the Armed Forces Communications Electronics Association here in San Diego, California. Katy, what are your cybersecurity takeaways?

Well, it’s just been the beginning of the, conference so far, but it is looking like it’s going to be a really great one. There’s already been some really fantastic keynote speakers. We just heard Fleet Cyber Command and the commander of 10th Fleet talk about how the use of cyber warfare can enhance non-kinetic effects and it’s just really exciting stuff. It’s definitely like Comic-Con for the Department of Defense.

Thank you. And what do you think the role of cybersecurity is in the ongoing conflicts right now?

Well, things that have transpired in Ukraine and over in the Fifth Fleet Area of Operations, like where Israel and Hamas are in conflict, both have shown that the use of cyber warfare effects are going to be the predecessors of kinetic effects in both cases. And also in the Ukraine, the aggressors seem to go in with cyber attacks first to try and soften up the infrastructure maybe, you know, disrupt the government’s activities

And Katy, what are you most excited to hear today and who are you most excited to hear from?

Oh, you put me on the spot. There’s just so many great speakers. It’s hard to say just one, but I’m just really excited to see everything that the Navy and Marine Corps team is bringing.

All right. All right.Katy Craig, and we’re here at AFCEA 2024 in downtown San Diego. And I’m here with Dan Williams, Vice President of Model-Based Systems Engineering at G2Ops. Dan, how are you today?

Doing great. It’s a great turnout here at AFCEA. Looking forward to it.

So tell me a little bit about model-based systems engineering and how that’s helping the Navy war fighter.

Model-based systems engineering is a way to relate multiple different types of engineering to data together with the goal. It’s the goal of improving engineering analysis, reducing impact assessment timelines, and also supporting cyber assessments and reducing cyber authorization timelines. And our support of the Navy, the Navy has some of the most complex systems in the world, submarines, etc.

Is supporting managing that complexity, managing their configuration management, improving their ability to do their engineering analysis, and especially recently, supporting their ability to do cybersecurity analysis and get things out to fleet quicker by reducing some of those cyber That sounds completely awesome.

You must feel a real strong sense of achievement or accomplishment working with the government.

We like to take on hard challenges. The Navy right now is facing some of the hardest challenges, especially with both the current environment that we work in and the complexity of their systems. And we look forward to taking on those challenges and working with a very important customer.

How does the model-based systems engineering activities that your organization does for the military affect cybersecurity resilience?

I think part of the benefit of model based system engineering is you actually really understand your architecture, your threat surface, your vulnerabilities. And then because what one of the key pieces of it is tying to the missions, you’re able to prioritize how you address vulnerabilities based on your critical missions.

I’m convinced. I, I can already tell how much value that would bring.

Thank you very much, Dan, for your time and enjoy the conference.

Thanks, Katy.

Well, thank you very much, Dan.

It’s my pleasure.

Hillary Coover: Thanks for listening to Point of View Friday. If you like what you heard, please subscribe to “It’s 5:05″ on your favorite podcast platform. ” It’s 5:05″ is a Sourced Network Production based in New York City. This is your host, Hillary Coover. Happy Women’s Day.

Contributors:

Comments:

Newsletter