Newsletter

open source and cybersecurity news

May 24, 2023

Samsung Exploit, Telco Snitch Scammers, Employees Sharing Sensitive Data

Episode Transcription:

Pokie Huang: 

Hey, it’s 5:05 on Wednesday, May 24th, 2023. From the Sourced Podcast Network in New York City, this is your host, Pokie Huang. Stories in today’s episode come from Katy Craig in San Diego, California, Edwin Kwan in Sydney, Australia, and Trac Bannon in Camp Hill, Pennsylvania.

Let’s get to it.

Katy Craig: 

Today’s report is for Samsung smartphone users, and boy, is there a major security alert for you.

This is Katy Craig in San Diego, California. 

There’s this nasty vulnerability, CVE-2023-21492, that Samsung and the US Cybersecurity and Infrastructure Security Agency (CISA) are freaking out about. It’s a kernel pointer exposure issue related to log files, and it’s letting sneaky attackers bypass security measures.

The thing is, Samsung only patched this bug in their May 2023 updates, and certain Android 11, 12, and 13 devices are still at risk. CISA’s got it on their radar and is telling government agencies to patch ASAP.

But here’s the kicker. Google discovered this flaw, and they suspect some shady commercial spyware vendor might be exploiting it. These guys are trying to hack Samsung smartphones using all kinds of vulnerabilities. 

So if you’re a Samsung user, don’t wait around. Update your device and stay on top of those security patches. We can’t let the bad guys win. 
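If you want to check where a device stands, one quick way is to read its security patch level over ADB. Here’s a minimal sketch in Python, assuming adb is installed, USB debugging is enabled, and the phone is plugged in; the 2023-05-01 cutoff reflects the May 2023 patch mentioned above.

    import subprocess

    # Ask the connected Android device for its security patch level.
    # ro.build.version.security_patch is a standard Android system property.
    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.security_patch"],
        capture_output=True, text=True, check=True,
    )
    patch_level = result.stdout.strip()  # e.g., "2023-05-01"

    # Dates in YYYY-MM-DD form compare correctly as plain strings.
    if patch_level >= "2023-05-01":
        print(f"Patch level {patch_level}: the May 2023 fix should be on board.")
    else:
        print(f"Patch level {patch_level}: update this device.")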

This is Katy Craig. Stay safe out there.

Edwin Kwan: 

This is Edwin Kwan from Sydney, Australia. 

SMS scams are on the rise in Australia. There’s the very popular “Hi Mum” scam, where the scammer pretends to be the victim’s child who has lost their mobile while overseas and is reaching out from a different number, asking for money so they can return home.

There’s also the Australia Post scam, where the victim is informed that their package is being withheld and they need to provide extra payment before its release.

Australian telecommunications provider Telstra has been actively working to combat spam through its Cleaner Pipes initiative. It blocks, on average, around 23 million scam SMSs each month. However, it can’t catch everything.

Today they’ve launched an initiative that allows their customers to snitch on a scammer so that others won’t fall prey. All they need to do is forward the scam SMS or MMS to the number 7226, which spells SCAM on the keypad. Doing that helps prevent the same scam from affecting others.

Trac Bannon:

Earlier this year, I reported that a number of businesses had raised concerns that their employees were submitting PII, Personally Identifiable Information, as well as business data into external models such as ChatGPT. Data security service Cyberhaven detected and blocked requests to input data into ChatGPT from a mere 4.2% of workers at its client companies. That may sound small, though in this case it turns out to be over 67,200 workers risking spilling and leaking corporate data.

The data ranges from confidential information and client data to source code and even regulated information, and it’s all being offered up into the mouths of hungry large language models.

Hello, this is Trac Bannon reporting from Camp Hill, Pennsylvania.

Most employers offer security-related training, often focused on protecting passwords and not allowing somebody to tailgate through a badge-swipe entryway. Companies, whether nonprofit or commercial, must quickly step up to educate workers on using AI models, especially those not owned by the corporation.

A worker using ChatGPT to craft a letter seems reasonable at first glance, but what if that letter is generated by a doctor to a patient’s insurance company and contains all of the patient’s medical information?

Why is this such a concern?

Depending on the model and the service being used, it remains an open question whether information submitted can be retrieved at a later date by someone else. Often, the generative AI offering does not disclose how, or whether, it is properly securing the data. That includes the saved prompts as well as the accompanying data.

A growing body of research is exposing that, simply by querying a generative AI system in the right way, adversaries can trigger the model to recall, and reveal, specific pieces of information it has absorbed. This is called a data extraction attack.

The risk is not theoretical. In 2021, a dozen researchers from a broad list of companies and universities, including Harvard, Stanford, Google, and Apple, found that they could execute training data extraction attacks.

This technique is also called “exfiltration via machine learning inference”. 
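To make the idea concrete, here is a toy sketch of what such a probe loop can look like. Everything in it is hypothetical: query_model is a stand-in for whatever API the target model exposes, and the probe strings and secret-spotting pattern are illustrative only. The attack simply feeds the model prefixes that resemble the records it’s hunting for and checks whether the completion fills in the rest.

    import re

    def query_model(prompt: str) -> str:
        # Hypothetical stand-in for a call to the target model's API.
        # Replace with a real client call when experimenting on your own models.
        return "I'm sorry, I can't help with that."

    # Prefixes that nudge the model toward memorized records (illustrative).
    probes = [
        "Patient John Doe's insurance policy number is",
        "The API key for the acme-billing service is",
    ]

    for prefix in probes:
        completion = query_model(prefix)
        # Flag completions that look like concrete identifiers rather than refusals.
        if re.search(r"[A-Za-z0-9_-]{12,}", completion):
            print(f"Possible memorized data after {prefix!r}: {completion!r}")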

JP Morgan has restricted workers’ use of ChatGPT, and big heavies like Amazon, Microsoft, and Walmart have all issued warnings to employees to take extra care in using any generative AI services. It only takes one leak.

Cyberhaven reports that a mere 1% of workers are responsible for roughly 80% of the incidents of sensitive data being sent into public large language models.

I reported in March that OpenAI had a bug that shared prompt histories, no nefarious attack required. This is not limited to generated text, though that’s much more prevalent right now. Those offering generative AI are beginning to add some restrictions. If you were to ask for specific personal details right now, for example, you’d get a canned response saying, “No.” From my technical perspective, this is likely a quick patch, sans the requisite underlying protections.

The genie is out of the bottle, though we can educate our colleagues and workforce, balancing perceived productivity gains against the possibility of leaking data. We each need to ask ourselves, with every prompt, whether or not we are putting ourselves or our companies at risk.
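One concrete piece of that education is showing how little code a pre-submission screen takes. Below is a minimal illustrative sketch, not how Cyberhaven or any other vendor actually works: it checks a prompt against a few PII patterns before the prompt ever leaves the building.

    import re

    # A few illustrative PII patterns; real DLP tooling is far more thorough.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        # Return the kinds of PII found; an empty list means OK to send.
        return [kind for kind, pat in PII_PATTERNS.items() if pat.search(prompt)]

    prompt = "Draft a letter for patient Jane Roe, SSN 123-45-6789, to her insurer."
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
    else:
        print("Prompt passed the screen.")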

The continued tidal wave of generative AI certainly gives us something to noodle on.

Pokie Huang: 

That’s it for today’s open source and cybersecurity updates. For direct links to all stories and resources mentioned in today’s episode, go to 505Updates.com, where you can listen to our growing library of over 100 episodes. You can also download the transcript of all episodes for easy reference.

5:05 is a Sourced Networks Production with updates available Monday through Friday on your favorite audio streaming platform. Just search for “It’s 5:05!”. And please consider subscribing while you’re there. 

Thank you to Katy Craig, Edwin Kwan, and Trac Bannon for today’s contributions.

The executive producer and editor is Mark Miller. The sound engineer is Pokie Huang. Music for today’s episode is by Blue Dot Sessions. We use Descript for spoken text editing and Audacity to layer in the soundscapes. The show distribution platform is provided by Captivate.fm. This is Pokie Huang. See you tomorrow… at 5:05.
