Newsletter

open source and cybersecurity news

September 15, 2023

It's 5:05, Friday, September 15, 2023. Time for your daily cybersecurity and open source headlines.

In this Episode:

Marcel Brown:  September 16th, 1997. Twelve years to the day after resigning from Apple, Steve Jobs is named interim CEO of Apple. Only seven months earlier, Jobs’ company NeXT was purchased by Apple. Much of the technology acquired with the purchase was used to build the Mac OS X operating system.

Edwin Kwan: Spyware masquerading as Telegram applications has been spotted in the Google Play Store and has been downloaded over 60,000 times. According to security researchers, the app appears visually identical to the official Telegram application.

Katy Craig: AI often has a mystical aura, like it’s some sort of wizardry, but let’s cut through the fog. AI is software, and like any software, it needs to be secure by design.

Trac Bannon: We can and should apply CISA’s Secure-by-Design and -Default guidance to the sexy trifecta: AI, ML, and Generative AI. Applying the CISA Secure-by-Design guidance presents many considerations and challenges.

Olimpiu Pop: Artificial intelligence, though not always understood, holds enchanting promise to reshape everything: medicine, with faster, more accurate diagnostics, and even our leisure time, with Netflix’s suggestions and Nest’s intuitive thermostats.

 

The Stories Behind the Cybersecurity Headlines

 

Edwin Kwan
Fake Telegram Apps Infect Thousands with Spyware

Edwin Kwan, Contributing Journalist, It's 5:05 Podcast

Spyware masquerading as Telegram applications has been spotted in the Google Play Store and has been downloaded over 60,000 times.

This is Edwin Kwan from Sydney, Australia.

According to security researchers, the app appears visually identical to the official Telegram application, but it contains additional packages that the original app does not. Those additional libraries attempt to gain access to the user’s contact information and, whenever they run, connect to a command-and-control server to exfiltrate that information. The app also reads incoming messages and any files that are sent or received; that information is likewise sent to the command-and-control server.

The app appears to be targeted towards Chinese-speaking users and the Uyghur ethnic minority.

Google has been unable to prevent these malicious apps from getting into the Google Play Store, as the publishers introduce the malicious code only after the screening process, as part of post-installation updates.
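
As a purely illustrative sketch of how a defender might spot this kind of repackaging, the Python below diffs the file listings of two APKs, assuming you have local copies of the official and suspect builds (the file names are hypothetical). It only catches files added to the archive; malicious code merged into an existing DEX file would need deeper analysis.

import zipfile

# Hypothetical local copies of the official and suspect APKs.
OFFICIAL_APK = "telegram-official.apk"
SUSPECT_APK = "telegram-suspect.apk"

def apk_entries(path: str) -> set[str]:
    """Return the set of file paths inside an APK (APKs are ZIP archives)."""
    with zipfile.ZipFile(path) as apk:
        return set(apk.namelist())

official = apk_entries(OFFICIAL_APK)
suspect = apk_entries(SUSPECT_APK)

# Files present only in the suspect build (extra DEX code, native libraries)
# are a strong hint that packages were added on top of the original app.
for extra in sorted(suspect - official):
    print("only in suspect build:", extra)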

Resources
– The Hacker News https://thehackernews.com/2023/09/millions-infected-by-spyware-hidden-in.html
– Bleeping Computer https://www.bleepingcomputer.com/news/security/evil-telegram-android-apps-on-google-play-infected-60k-with-spyware/

 

Marcel Brown
This Day in Tech History

Marcel Brown, Contributing Journalist, It's 5:05 Podcast

This is Marcel Brown and I’m on the road bringing you some technology history for September 15th and 16th.

September 15th, 1986. Apple introduces the Apple II GS, the last major product release in the Apple II series of personal computers. Blending the older Apple II series with aspects of the Macintosh, the advanced graphics and sound capabilities of the II GS, hence the name, were ahead of other contemporary computers such as the Macintosh and IBM PC. However, as Apple chose to focus on the Macintosh, it eventually ceased development of the Apple II series. The last II GS was produced in December of 1992. The Apple II GS was actually the computer I really wanted as a child, and as an adult, I purchased one for about $50. Many years later, I actually got Steve Wozniak to sign my Apple II GS, which I still have to this day.

September 16th, 1985. Five months after losing control of the company in a boardroom battle with John Sculley, Steve Jobs resigns from Apple. Jobs goes on to found NeXT and purchase Pixar before eventually returning to Apple.

September 16th, 1997. Twelve years to the day after resigning from Apple, Steve Jobs is named interim CEO of Apple. Only seven months earlier, Jobs’ company NeXT was purchased by Apple, and just two months earlier, Gil Amelio resigned as Apple’s CEO. Much of the technology acquired with the purchase of NeXT was used to build the Mac OS X operating system, which became the core of the iOS operating system that runs the iPhone and iPad.

That’s your technology history for this week. For more, tune in next week and visit my website, thisdayintechhistory.com.

Resources
https://thisdayintechhistory.com/09/15/

 

It’s Point of View Friday, featuring Katy Craig, Trac Bannon, and Olimpiu Pop with their perspectives on AI and Secure by Design.

 

Katy Craig
AI and Cybersecurity

Katy Craig, Contributing Journalist, It's 5:05 Podcast

Today we’re diving into the nitty-gritty of AI and cybersecurity. AI often has a mystical aura, like it’s some sort of wizardry, but let’s cut through the fog. AI is software, and like any software, it needs to be secure by design.

This is Katy Craig in San Diego, California.

From the get-go, security has to be a core business requirement, not just a tech feature you slap on later. We’re talking about the whole lifecycle of the product, from the drawing board to its digital retirement party.

Now, AI is a bit of a special beast. It’s like the high-interest credit card of technical debt. Take shortcuts on security, and you’re asking for trouble. As AI gets more ingrained in everything from email filters to medical diagnostics, the stakes become sky-high.

The AI community needs to step up its game by applying existing security best practices, like system development lifecycle risk management, defense in depth, and zero trust. And let’s not forget, AI systems should respect privacy principles by default.
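
To make "privacy by default" a little more concrete, here is a hypothetical configuration sketch, not taken from CISA's guidance: every risky capability defaults to off or deny, and a deployment has to opt in explicitly.

from dataclasses import dataclass

@dataclass(frozen=True)
class AIServiceConfig:
    """Hypothetical AI service settings with privacy-respecting defaults."""
    store_user_prompts: bool = False        # no data retention unless opted in
    use_prompts_for_training: bool = False  # never train on user data by default
    telemetry_enabled: bool = False         # observability is opt-in
    require_authentication: bool = True     # zero-trust posture: authenticate everything
    retention_days: int = 0                 # keep nothing by default

# A deployment must explicitly opt in to anything riskier:
config = AIServiceConfig(telemetry_enabled=True, retention_days=30)
print(config)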

Bottom line, secure by design isn’t just a nice to have, it’s a must have. As AI continues to weave itself into the fabric of our daily lives, we can’t afford to skimp on security.

This is Katy Craig, stay safe out there.

Resources
– CISA https://www.cisa.gov/news-events/news/software-must-be-secure-design-and-artificial-intelligence-no-exception

 

Trac Bannon
Fortifying AI

Trac Bannon, Contributing Journalist, It's 5:05 Podcast

AI is fundamentally software. That means we can and should apply CISA’s Secure-by-Design and -Default guidance to the sexy trifecta: AI, ML, and Generative AI.

Hello, this is Trac Bannon reporting from Camp Hill, Pennsylvania.

Applying the CISA Secure-by-Design guidance to AI, ML, and Generative AI presents many considerations and challenges. Here is a sampling.

Ownership at the executive level. Every technology provider must take ownership at the executive level to ensure their AI products are both Secure-by-Design and Secure-by-Default. This requires a top-down approach to prioritize security in the development and deployment of AI. Secure-by-Design isn’t just an engineer’s play.

Adversarial AI attacks. These aren’t just clever hacks; they’re AI’s Achilles’ heel, and they highlight fundamental security issues in AI and ML systems. These attacks can exploit vulnerabilities in the design and implementation of AI models, leading to potentially harmful consequences. Secure-by-Design principles should address these vulnerabilities and mitigate the risks associated with adversarial attacks; see the first sketch after this list.

Data privacy and de-identification. Taking a cue from the Office of the Australian Information Commissioner, de-identified data isn’t just a good idea, it’s the law of the AI land. We’re talking rock-solid encryption and anonymization, no exceptions, backed by robust data privacy measures; a pseudonymization sketch also follows this list.

Large AI model theft poses a significant risk to the security of AI systems. While CISA may not be the primary policymaking audience for this problem, there are relevant policy levers regarding how the federal government protects its models. Secure-by-Design principles should address the security of large AI models, including measures to prevent theft and leaks that could be used by cyber criminals to attack critical infrastructure.

Balancing security and accessibility. The security of AI should not hinder improvements in model reliability and interpretability that require providing more access to powerful models. Secure-by-Design principles should strike a balance between security and accessibility, ensuring that AI systems are both secure and usable for the intended purposes.

Augmenting security practices. Some security practices will need to be augmented to account for AI considerations. This doesn’t mean completely throwing out your existing security playbook. Instead, fortify leading practices with Secure-by-Design principles.
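
As promised above, here is a minimal sketch of one classic adversarial technique, the fast gradient sign method, run against a toy logistic-regression model. Everything here, weights included, is made up for illustration; real attacks target real trained models.

import numpy as np

# Toy logistic-regression "model": fixed hypothetical weights, sigmoid output.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# An input with true label 1; FGSM will push the model's score toward 0.
x = rng.normal(size=16)
y_true = 1.0

# For this model, the gradient of the log-loss w.r.t. the input is (p - y) * w.
p = predict(x)
grad_x = (p - y_true) * w

# FGSM: nudge the input by epsilon in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")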

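And for the de-identification point, a small sketch of keyed pseudonymization: identifiers are replaced with stable HMAC-SHA256 tokens so records can still be joined without exposing raw identities. The key and record below are hypothetical, and real de-identification programs, per the OAIC guidance, go well beyond this single step.

import hashlib
import hmac

# Hypothetical secret key; in practice this lives in a key-management system.
PSEUDONYM_KEY = b"rotate-me-and-store-me-in-a-kms"

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a stable, keyed token (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "alice@example.com", "diagnosis_code": "E11.9"}
deidentified = {**record, "user_id": pseudonymize(record["user_id"])}
print(deidentified)
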
Clearly, we’ve got a ton to do, and clearly, there’s a growing body of guidance. For links to the resources used in this report, head over to 505updates.com or just ping me.

Something to noodle on.

Resources
– CISA https://www.cisa.gov/news-events/news/software-must-be-secure-design-and-artificial-intelligence-no-exception

 

Olimpiu Pop
Transparency, Accountability, Responsibility for AI Models

Olimpiu Pop, Contributing Journalist

Artificial intelligence, though not always understood, holds enchanting promise to reshape everything: medicine, with faster, more accurate diagnostics, and even our leisure time, with Netflix’s suggestions and Nest’s intuitive thermostats. As AI emerges, much like electricity or the Internet, as a ubiquitous general-purpose technology, its newest sibling, Generative AI, joins the ranks of techniques already put to good use, such as supervised, unsupervised, and reinforcement learning, as well as deep learning.

However, with omnipresence often comes obscurity. Much of the user base, unaware of AI’s quiet integration into devices and industries, remains blind to its risks. Even tech experts grapple with fully understanding these risks. The recent release of OWASP’s Top 10 LLM threats gives us a lifeline, but we might miss using it.

Industry leaders eager to embark on the AI journey often overlook its profound effects on privacy, security, and even society itself. The potential risks are immense and multi-faceted, hidden deep within the models, ranging from model poisoning to inherent biases.

This demands transparency and accountability. We must scrutinize what’s inside these models, identify contributors, and determine responsibility. If AI models mirror software, then a Software Bill of Materials becomes indispensable, as does appropriately labeling generated content, as Google will demand starting in November, especially for political campaign content.
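
As a sketch of what a bill of materials for a model might record, the snippet below emits an illustrative manifest; the field names are hypothetical, loosely inspired by CycloneDX-style SBOMs rather than any formal schema.

import json

# Illustrative model bill of materials; field names are hypothetical,
# loosely inspired by CycloneDX-style SBOMs rather than a formal schema.
model_bom = {
    "component": {
        "name": "example-classifier",
        "version": "1.2.0",
        "type": "machine-learning-model",
    },
    "training_data": [
        {"name": "internal-support-tickets", "license": "proprietary", "records": 120000},
    ],
    "base_model": {"name": "some-open-base-model", "source": "https://example.com/models"},
    "contributors": ["data-team@example.com", "ml-team@example.com"],
    "known_risks": ["inherited bias from base model", "PII exposure in training data"],
}

print(json.dumps(model_bom, indent=2))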

Olimpiu Pop, Transylvania, Romania, advocating for a bright, informed future.

For more insights on the topic, visit 505updates.com, your daily open source and cybersecurity outlet.

Resources
– Google https://support.google.com/adspolicy/answer/13755910
– CISA https://www.cisa.gov/news-events/news/software-must-be-secure-design-and-artificial-intelligence-no-exception
