Newsletter

open source and cybersecurity news

August 11, 2023

- CYBERSECURITY HEADLINES TODAY -

Veilid launch
Keystroke Logging to Measure Employee Productivity
OWASP Releases Top 10 Threats LLM V1.0

In this Episode:

Marcel Brown: August 11th, 2003. The Blaster worm, also known as MSblast or Lovesan, begins to spread on the internet, infecting Windows XP and Windows 2000 computers. Damage caused by the worm was estimated at $320 million.

Kadi McKean: I have heard you’ve got some big things going on and potentially a new release coming up. Yeah. Friday night at, DEFCON, my team is releasing a application framework that is privacy-centric, that is unlike anything that’s been released before.

Edwin Kwan: For one company, productivity is measured by having more than 500 keystrokes on the computer every hour. In the rare case for Australia, an employee had been terminated for not meeting the required productivity levels.

Olimpiu Pop:  OWASP released version 0.5 of Top Threats for Large Language Models. Here it is, the top five threats :
1) Prompt Injection
2) Insecure Output Handling
3) Training Data Poisoning
4) Model Denial of Service
5) Supply Chain Vulnerabilities
There is something for everybody.

Trac Bannon: When it comes to Generative AI and Large Language Models, those of us focused on cybersecurity and architecture are aware of the speed at which the bad guys are innovating . OWASP has assembled over 125 experts who are keeping pace and who are looking holistically at the creation and usage of generative AI LLMs.

Katy Craig:There’s a new initiative on center stage, the OWASP Top 10 for Large Language Model Applications Project. This venture has a singular mission: to illuminate the security pitfalls that come hand-in-glove with large language models like ChatGPT.

From Sourced Network Productions this week in Las Vegas, Nevada, for Black Hat and Defcon, It’s 5:05. I’m Hillary Coover. Today is Friday, August 11th, 2023. Here’s the full story behind today’s cybersecurity and open source headlines.

 

Kadi McKean: Veilid launch with Paul Miller at BlackHat 2023

Hi, this is Kadi McKean here at BlackHat and I am with “the Gibson”. I have heard you’ve got some big things going on and potentially a new release coming up.

Friday night at, DEFCON, my team is releasing a application framework that is privacy-centric that is unlike anything that’s been released before. We are working with a group know as ‘Cult of the Dead Cow,’ which a lot of you probably know out there in listener land, but this tool is called Veilid. It’s ultimately a application framework that creates the underpinnings for truly private communications and combats surveillance capitalism. There’s no other way to say it. It is a way to build apps that are fully distributed and make you the owner of your own information.

That’s incredible. So what is the thing that makes this stand out over other products that we’ve previously seen? What is the one thing you would say that just pushes it over the edge?

First of all, we are fully open source. We’re committed to open-source methodologies and licensing. Second of all, we’re not doing this for money. There is no profit motive here. There’s nothing to do with that. To think that it can be controverted by an existing organization, it’s gonna be pretty hard because there’s literally no profit motive.

We are doing this for the betterment of the world. We’re not out to make a buck and we’re not out to sell it. As a matter of fact, we have a non-profit called The Veilid Foundation that’s been founded, and I, full disclosure, I’m a director for it.

We are launching this tool basically because it’s the right thing to do and go save the world.

I love it. Well, thanks for stopping by.

Yeah, thank you.

Resources
Guest speaker: Paul Miller, Sr. Manager at VmWare Carbon Black

https://www.linkedin.com/in/paulm3319/
Veilid
Cult of the Dead Cow Launches Encryption Protocol Veilid
https://twitter.com/VeilidNetwork?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor

 

Edwin Kwan: Company using Keystroke Logging to Measure Employee Productivity

How do you measure productivity of remote workers? For one company, productivity is measured by having more than 500 keystrokes on the computer every hour. In the rare case for Australia, an employee had been terminated for not meeting the required productivity levels.

This is Edwin Kwan from Sydney, Australia.

The employee had an otherwise unblemished 18-year career and was recently unsuccessful in getting the dismissal overturned.

The company tracked three categories of activities on the employee’s workstation. The first was their daily activity. And this is based on the recorded first and last event on the person’s computer. The second was the hourly activity. And the third was their VPN activity.

Keystroke logging was also used, and the manager said that in the area of business the employee was in, the activity required a minimum keystroke of 500 an hour. The ruling did not state what system logs or tools were used, but this is a rare public example of the use of application and VPN logs to keep tabs on an employee’s productivity.

It’s something that had been raised as a concern for several years, but actual levels of usage are unknown.

Resources
http://www.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/FWC/2023/1792.html

 

Hillary Coover:

And now our story for the week featuring Tracy Bannon, Katy Craig and Olimpiu Pop with their perspectives on the OWASP Guidance for Large Language Models.

 

Trac Bannon: Help is on the way: OWASP Releases Top 10 LLM Threats

When it comes to Generative AI and Large Language Models, we are all aware of the absolute breakneck speed of model research, adoption, and innovation. Those of us focused on cybersecurity and architecture are aware of the speed at which the bad guys are innovating as well. Thank heavens, rapid help is on the way.

Hello, this is Trac Bannon reporting from Camp Hill, Pennsylvania.

Just a little over a month ago, OWASP released an early version of the top 10 threats for large language models. OWASP is the Open Worldwide Application Security Project. The early version was 0.5. On August 1st, they released version 1.0.

The release cadence of this guidance is important to keeping pace with the explosive impacts of LLMs. Some of the threats are technical in nature, while others depend on gullible human nature. For example, LLM02: Insecure Output Handling, is the situation when the LLM output is “accepted without scrutiny.” The ramifications could include ingestion by backend systems, providing an attack vector for things like cross-site scripting.

LLM09 is titled Overreliance. It is exactly what it sounds like. When systems or people are overly dependent on LLMs without oversight. I’ve been reporting on myriad issues in this space, including miscommunication and legal issues resulting from inappropriate or incorrectly generated content.

There are plenty of other more technical vulnerabilities such as Insecure Plugin Design, LLM07, and Training Model Poisoning, LLM03.

What is important is that OWASP has assembled over 125 experts who are keeping pace and who are looking holistically at the creation and usage of generative AI LLMs.

It’s important that you keep informed about this top 10 listing. Listen to today’s full set of contributions from the 5:05 journalists and head over to 505updates.com for links to these resources.

Something to noodle on.

Resources
OWASP Top 10 for LLM
https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki
OWASP Top 10 for Large Language Model Applications | OWASP Foundation
Official Release: The OWASP Top 10 for Large Language Model Applications v1.0

 

Katy Craig: OWASP 4 LLM

There’s a new initiative on center stage, the OWASP Top 10 for Large Language Model Applications Project. This venture has a singular mission: to illuminate the security pitfalls that come hand-in-glove with large language models like ChatGPT.

This is Katy Craig in San Diego, California.

Designed to educate a diverse spectrum of stakeholders, developers, designers, architects, managers, and organizations, this project is a beacon of awareness. It reveals the top 10 vulnerabilities that frequently haunt Large Language Model applications, offering insights into their potential devastation, exploitability, and real-world prevalence. Among these vulnerabilities are the menacing prompt injections, insidious data leakage, compromised sandboxing, and the specter of unauthorized code execution.

For those who relish in the journey of innovation, this initiative extends an open invitation. It’s an orchestra where the symphony is composed by myriad voices, yours included. The community’s strength fuels the project’s progress, and your expertise is the instrument that shapes the outcome.

In this era where technology dances on the edge of possibility, let’s ensure that security is the partner that never falters. Embrace the mission of OWASP Top 10 for Large Language Model Applications, and embark on a journey that secures the future one keystroke at a time.

This is Katy Craig. Stay safe out there.

Resources
OWASP Top 10 for Large Language Model Applications | OWASP Foundation
Official Release: The OWASP Top 10 for Large Language Model Applications v1.0

 

Olimpiu Pop: OWASP Releases Top 10 Threats LLM V1.0

A month ago, OWASP released version 0.5 of Top Threats for Large Language Models, with version 1.0 expected by end of July. And here it is, the top five threats remain the same:
1) Prompt Injection

2) Insecure Output Handling

3) Training Data Poisoning

4) Model Denial of Service

5) Supply Chain Vulnerabilities

There is something for everybody. An ill-willing actor can persuade your newly-appointed virtual assistant to do things he or she shouldn’t do. Or they might disrupt the normal functioning either entirely or introduce faulty components in the chain or in the model. In the lower part of the pack, things are a bit different. Sensitive Information Disclosure is a new entry on position six, followed by Insecure Plugin, which claims from position 10 in the previous version.

Eight and nine are somehow related- Excessive Agency, or Overreliance. So, either we give the AI too much rope, or we rely too much on it. None of the versions are desirable.

Last, but not least, on position 10 is Model Theft, another new addition to the list. This involves unauthorized access, copying, or exfiltration of proprietary LLM models.

Now that we know them, the question is what we will do about them- especially as we are more and more eager to renounce privacy and control in favor of comfort.

During this episode of 505updates.com, you can listen to other angles on the topic as well.

Olimpiu Pop reported from Transylvania, Romania.

Resources
https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_0.pdf

 

Marcel Brown: This Day, August 11th, 12th in Tech History

This is Marcel Brown bringing you some technology history for August 11th and 12th.
August 11th, 2003. The Blaster worm, also known as MSblast or Lovesan, begins to spread on the internet, infecting Windows XP and Windows 2000 computers. The primary symptom of the worm was the crashing of the RPC service, which would trigger the computer to shut itself down and reboot. Microsoft estimated that the number of machines infected was between eight and 16 million. Damage caused by the worm was estimated at $320 million.

August 12th, 1960. Echo 1, the world’s first communication satellite, is launched. Technically, Echo 1 was a passive reflector, as communication signals were bounced off it, rather than retransmitted as modern satellites do today.

August 12th, 1981. IBM introduces its first personal computer, the IBM PC model 5150. IBM originally intended this model to be a stopgap computer that would allow them to quickly tap into the emerging personal computer market while taking the time to develop a so-called “real” personal computer. It was developed in under a year by a team of 12 with the goal of rapid release to market.

Therefore, this team was allowed to work outside of the normal IBM development process and use whatever off-the-shelf components allowed for the quickest development. This overriding goal of developing something as quickly as possible had monumental, unintended consequences for IBM and the computer industry as a whole that are still being felt to this day.

By compromising quality for rapidity, the design of the IBM PC forced software programmers to resort to inelegant methods of software development, hindering the reliability and compatibility of their software. This laid the groundwork for the reputation of the PC as error-prone and frustrating to use.

The use of common components and the choice of IBM’s DOS as the PC’s operating system allowed other companies to quickly clone the IBM PC. They also allowed Microsoft to license their DOS to other companies, giving Microsoft control of the operating system market. Ultimately, these choices led to IBM’s loss of control of the platform.

IBM never did really get a chance to create their so-called “real” PC. And because IBM was the 800-pound gorilla of the business world at the time, the computer that was supposed to be a stopgap became the overwhelming computing standard, crushing nearly every other emerging platform in the process.

Steve Jobs was quoted in a 1985 interview. “If IBM wins, my personal feeling is that we are going to enter a sort of computer dark ages for about 20 years.” While IBM themselves didn’t win, the creation that they lost control of was the clear market winner for approximately the next 20 years. Some will argue that from a technology standpoint, this was in fact a dark age for the personal computer, but no one could have predicted that on this day in 1981.

That’s your technology history for today. For more, tune in next week and visit my website thisdayintechhistory.com.

Resources
http://thisdayintechhistory.com/08/11
http://thisdayintechhistory.com/08/12

 

Hillary Coover

That’s our updates for today, August 11th, 2023. I’m Hillary Coover. We’ll be back on Monday… at 5:05.

Contributors:

Comments:

Newsletter