open source and cybersecurity news

April 11, 2023

Prompt Injection, Table Top Exercises, AI Attribution

In this Episode:

Episode Transcription:

[00:00:00] Pokie Huang: 

Hey, it’s 5:05 on Tuesday, April 11th, 2023. From the Sourced Podcast Network in New York City, this is your host, Pokie Huang. Stories in today’s episode come from Katy Craig in San Diego, California, Edwin Kwan in Sydney, Australia, Mark Miller in New York City and Marcel Brown in St. Louis. 

Let’s get to it.

[00:00:32] Katy Craig: 

According to Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency, CISA, large language models and generative pre-trained transformer, GPT, are the biggest threats we’ll face this century. 

This is Katy Craig in San Diego, California. 

Basically, these advanced AI technologies can be used for good, but they can also be used for evil. For example, large language models in GPT can be used to create fake news stories or social media posts that look totally legit. And let’s be honest, we’re all susceptible to falling for click bait every now and then but this is much worse. Plus, these technologies can automate cyber attacks, making them faster, more efficient, and more difficult to detect.

But the real kicker is a new attack vector in large language models called prompt injection. Basically, an attacker can inject malicious instructions into a prompt that a language model is trained on, causing it to produce some seriously bad output or ignore previous instructions. What’s worse is we humans may not even know or understand why or when it’s happening.

So what does all of this mean? Well, it means we need to be careful and stay on our toes. And if you’re like me, you might want to brush up on your GPT skills, learn how to detect and defend against these threats. 

This is Katy Craig. Stay safe out there.

[00:02:10] Edwin Kwan: 

This is Edwin Kwan from Sydney, Australia. 

Following the recent large Australian data breaches with Optus, Medibank, and Latitude Financial, the government will be summoning the nation’s largest banks and financial services companies to a series of cyber war games to test how they would respond to cyber attacks that could upend the lives of tens of millions of Australians. 

The Home Affairs minister said that the recent data breaches Australians suffered were just the tip of the iceberg, and the government was preparing for more profound breeches that could rip infrastructure assets such as water supply and the electricity grid.

Last month, the government ran a three hour tabletop exercise with representatives from the Reserve Bank, the Australian Securities and Investment Commissions, the Australian Prudential Regulations Authority, and the Australian Federal Police. The aim of that exercise was to examine how they would respond to attacks involving the theft of sensitive IT data.

Similar exercises are now being held with individual banks before the government moves on to the aviation sector and other critical infrastructure networks.

[00:03:56] Mark Miller: 

One of the issues with using AI chat engines is that there is no attribution built in to give credit to the original source. That is, except for perplexity ai. 

This is Mark Miller in New York City, and if you’ve been listening for the past few months, you know I don’t talk about product, but in this case I’m going to make an exception because of what’s happening in the LinkedIn DevOps group I run. 

There are 197,000 members in that group with a couple hundred requests to join each day, and 30 to 50 content submissions on an average day. I’ve been seeing a pattern on the submissions and I’ve started calling people out. It’s not courteous and it’s not ethical to post other people’s diagrams and content without giving them attribution. Period. If it’s not yours, say who you got it from and give a URL pointing to the source. 

That got me thinking about how people are using AI engines, and you already know the 800 pound Gorilla, ChatGPT. The issue is that most people don’t realize the content they’re generating from that engine is being pulled from various sources and then transformed a tiny bit to make it sound original. For that reason, I don’t use ChatGPT. At all. I want to know the sources of the content being generated. 

That’s where comes in. Each response back from the engine contains a footnote with all of the resources used to create the response. John Willis and I were playing with it last month and found the resource section indispensable when determining whether or not to trust a source. You can listen to those episodes here on 505 where we check AI responses against John’s deep knowledge of Edwards Deming. Definitely worth a listen. 

When it comes down to it, it’s this. Supporting others work through attribution is not just courteous, it’s how community works. Using an AI engine that does not attribute content to resources, shortchanges the work of real authors and artists that have created original content.

My message to you is: support your community. Use an AI engine that shows you where it’s pulling the content from.

[00:06:31] Marcel Brown: 

This is Marcel Brown, the most trusted name in technology with your technology history for April 11th.

April 11th, 1957. The Ryan X-13 Vertijet makes its first vertical takeoff and landing, marking the first time a jet powered aircraft successfully completed a VTOL flight. Militaries around the world at the time were experimenting with the concept of VTOL as a way of having aircraft that could take off and land from areas with no runway. 

While the X-13 was tested successfully, the program was canceled after only two years. The X-13 concept proved unfeasible for the United States military due to a variety of issues including payload and range capabilities.

April 11th, 1985. Almost exactly two years after joining Apple, John Scully asks Steve Jobs to step down his head of the Macintosh division at an Apple Computer board meeting. With the backing of the company’s other executives. Jobs is stripped of nearly all responsibilities at Apple. While Job retains the title of Chairman, he has no authority and eventually leaves Apple.

That’s your tech history for today. For more tune in tomorrow and visit my website,

[00:07:47] Pokie Huang: 

That’s it for today’s open source and cybersecurity updates. For direct links to all stories and resources mentioned in today’s episode, go to, where you can listen to our growing library of over 100 episodes. You can also download the transcript of all episodes for easy reference.

5:05 is a Sourced Networks Production with updates available Monday through Friday on your favorite audio streaming platform. Just search for “It’s 5:05!”. And please consider subscribing while you’re there. 

 Thank you to Katy Craig, Edwin Kwan, Mark Miller and Marcel Brown for today’s contributions.

The Executive Producer is Mark Miller. The editor and the sound engineer is Pokie Huang. Music for today’s episode is by Blue Dot Sessions. We use Descript for spoken text editing and Audacity to layer in the soundscapes. The show distribution platform is provided by This is Pokie Huang. See you tomorrow… at 5:05.



Leave the first comment