December 1, 2023
In this Episode:
Marcel Brown: December 1st, 1996. America Online launches a new subscription plan offering their subscribers unlimited dial-up internet access for $19.95 a month. Previously, AOL charged $9.95 a month for 5 hours of usage. The new plan brought in over 1 million new customers to AOL within weeks, and daily usage doubled among subscribers, to a whole 32 minutes per day.
Edwin Kwan: Apple has urgently released security updates to address two zero-day vulnerabilities that were being actively exploited. These vulnerabilities impact iPhones, iPads, and Mac devices.
Katy Craig: CISA and the United Kingdom’s National Cyber Security Centre jointly released guidelines for secure AI system development, developed in cooperation with 21 other agencies and ministries from across the world, including all members of the Group of Seven major industrial economies.
Olimpiu Pop: The UK’s cyber security agency provided guidelines, and they invite you to act securely while developing your AI system. They mostly refer to general software development practices, practices the industry has been trying to adopt, without much success, for years now.
Trac Bannon: The CISA AI Roadmap is a comprehensive, whole-of-agency plan. They’ve aligned it with the U.S. National AI Strategy. The roadmap has lines of effort to promote the beneficial uses of AI, enhance cybersecurity capabilities, and improve the protection of AI systems from cyber-based threats. One specific example that I find particularly valuable is the emphasis on secure-by-design principles in AI adoption.
From Sourced Network Productions in New York City, it’s 5:05 on Friday, December 1st, 2023. This is your host, Mark Miller.
Stories in today’s episode come from Edwin Kwan in Sydney, Australia, and Marcel Brown in St. Louis, Missouri. Our Point of View Friday segments are provided by Olimpiu Pop in Transylvania, Romania, Katy Craig in San Diego, California, and Trac Bannon in Camp Hill, Pennsylvania. We’ll start off today’s episode with Edwin Kwan, updating us on Apple’s latest emergency security updates. Let’s get to it.
The Stories Behind the Cybersecurity Headlines
Apple Releases Emergency Zero-Day Security Updates
This is Edwin Kwan from Sydney, Australia.
These vulnerabilities impact iPhones, iPads, and Mac devices. They were discovered in the WebKit browser engine. One allows attackers to access sensitive information through an out-of-bounds read weakness; the other allows the execution of arbitrary code on vulnerable devices via a memory corruption bug, triggered by maliciously crafted webpages.
Apple has addressed these flaws in devices running iOS 17.1.2, iPadOS 17.1.2, macOS Sonoma 14.1.2, and Safari 17.1.2 through improved input validation and locking.
The impacted Apple devices include the iPhone XS and later, various iPad models, and Macs running macOS Monterey, Ventura, and Sonoma. These vulnerabilities mark the 19th and 20th zero-day flaws exploited in the wild that Apple has fixed this year.
– Apple: https://support.apple.com/en-us/HT214031
This Day, December 1 & 2, in Tech History
December 1st, 1971. Michael Hart, founder of what is now known as Project Gutenberg, launches the project by making his first posting: the Declaration of Independence. Now known as the father of ebooks, Hart had earlier that year been given an operator’s account on a mainframe at the University of Illinois, where he was a student. Having been given highly valuable computer time when few people had such an opportunity, he decided to begin a project that would digitize and electronically preserve public domain books and texts and make them freely available.
The Xerox Sigma 5 mainframe at the Materials Research Lab at the University of Illinois just happened to be one of the first 15 nodes on the early ARPANET, the beginning of the modern internet. The ability for anyone connected to this network to download information was a major inspiration for Hart to begin Project Gutenberg.
December 1st, 1996. America Online launches a new subscription plan offering their subscribers unlimited dial-up internet access for $19.95 a month. Previously, AOL charged $9.95 a month for 5 hours of usage. The new plan brought in over 1 million new customers to AOL within weeks, and daily usage doubled among subscribers, to a whole 32 minutes per day.
This huge increase in usage overloaded AOL’s infrastructure, with the result that many subscribers could not access the service. Class action lawsuits were filed by angry subscribers who could no longer access the service they were paying for.
Despite these troubles, by offering unlimited internet access for a reasonable fee, AOL helped facilitate increased adoption of internet usage among a public still becoming acclimated to the information superhighway.
December 1st, 1999. The domain name Business.com sells for $7.5 million. At the time, it was the most expensive domain name sold in history, and it still ranks among the top 15 most expensive domain names of all time.
Domain name investor Marc Ostrofsky had purchased the domain in the mid-1990s for $150,000. In 2007, the company that owned Business.com and had created an ad network website around the domain was sold for $345 million to Yellow Pages publisher R.H. Donnelley. Ostrofsky reportedly owned a stake in this company as well.
If only I had the foresight in the 90s to buy up simple domain names when they were all available. I was in the right position to do so, but I just didn’t think of doing it until years later, when they had already been registered and started selling for crazy amounts of money. However, I did help my parents sell a domain they registered in the 90s for $20,000.
Looking back now, I probably could have gotten a lot more for it.
December 2, 1991. Apple releases version 1.0 of QuickTime, a multimedia extension for playing color video, transforming the capabilities of personal computers. Before QuickTime, only specialized computers could play color video. QuickTime allowed anyone with a personal computer to do so, and it changed the history of computing in more ways than one.
It was the patent infringement battle over QuickTime that led to the now famous truce between Steve Jobs and Bill Gates in 1997 that helped Apple survive long enough to transform itself in the 2000s. There is a lot more detail to this story, which is available on my website, ThisDayInTechHistory.com.
And that’s your technology history for this week. For more, tune in next week, and do make sure to visit my website, ThisDayInTechHistory.com.
We are at the part of our show where each Friday, our contributing journalists agree on a single story for discussion. This week, the topic is the CISA and EU roadmaps for AI security. Let’s start off with an overview from Katy Craig.
AI Guidelines: US and EU Release Secure AI System Development Guidelines
CISA and the United Kingdom’s National Cyber Security Centre jointly released guidelines for secure AI system development, developed in cooperation with 21 other agencies and ministries from across the world, including all members of the Group of Seven major industrial economies.
This is Katy Craig in San Diego, California.
The Guidelines for Secure AI System Development target providers of artificial intelligence (AI) systems. These guidelines aim to ensure that AI systems, whether developed from scratch or built upon existing tools and services, function reliably, remain available, and safeguard sensitive data from unauthorized access.
The guidelines emphasize that while AI offers significant societal benefits, its full potential can only be realized through secure and responsible development and operation. Given AI’s unique security vulnerabilities, security considerations must be integrated throughout the system’s lifecycle and not just during the development phase.
The guidelines are divided into four key areas of the AI system development lifecycle.
1. Secure Design. This section focuses on risk understanding, threat modeling, and critical considerations in system and model design.
2. Secure Development. This section includes guidelines on supply chain security, thorough documentation, and managing assets and technical debt.
3. Secure Deployment. This covers protecting infrastructure and models, developing incident management processes, and ensuring responsible release.
4. Secure Operation and Maintenance. Here you will find recommendations for post-deployment actions, such as logging, monitoring, update management, and information sharing.
This initiative marks a significant step in standardizing AI system security, highlighting the need for comprehensive strategies to manage AI’s unique vulnerabilities effectively.
This is Katy Craig, stay safe out there.
Tracy (Trac) Bannon
AI Guidelines: Can CISA and her partners keep up the pace?
The U.S. Cybersecurity and Infrastructure Security Agency, CISA, is moving at breakneck speed compared to the normal lumbering cadence of U.S. government agencies. They are collaborating with industry and with international partners of the U.S., like Australia and the U.K. I’m genuinely excited! They are publishing materials that I need right now for designing and delivering cyber-secure solutions leveraging AI. The latest publications are the CISA AI Roadmap and the NCSC Guidelines for Secure AI Systems Development.
Hello, this is Trac Bannon reporting from Camp Hill, Pennsylvania.
The CISA AI Roadmap is a comprehensive, whole-of-agency plan. They’ve aligned it with the U.S. National AI Strategy. The roadmap has lines of effort to promote the beneficial uses of AI, enhance cybersecurity capabilities, and improve the protection of AI systems from cyber-based threats.
One specific example that I find particularly valuable is the emphasis on secure-by-design principles in AI adoption. Our emphasis must be on making sure AI software tools are not vulnerable to attack or abuse. This aligns with fundamental security practices, and it’s crucial for the responsible and secure development of AI systems.
Similarly, the NCSC Guidelines for Secure AI Systems Development provide practical guidance for developing secure and resilient AI systems. The emphasis is on addressing security considerations throughout the AI system development lifecycle.
This guidance is invaluable for software architects and engineers. It provides a framework for integrating leading security practices into AI system development. We’ve got to keep our focus on more robust and secure AI solutions.
We’ll also need to keep an eye out for trends by bad actors. Why? The bad guys read all these publications as well. Time will tell if this collaborative approach can keep pace with our needs.
Will CISA and her partners be able to keep up? Can they keep the existing guidance current while continuing to publish new and needed resources? I certainly hope so.
Something to noodle on.
– NCSC: https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development
– CISA: https://www.cisa.gov/sites/default/files/2023-11/2023-2024_CISA-Roadmap-for-AI_508c.pdf
– CISA: https://www.cisa.gov/ai/roadmap-faqs
– White House: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
– DHS: https://www.dhs.gov/news/2023/11/14/dhs-cybersecurity-and-infrastructure-security-agency-releases-roadmap-ai
AI Guidelines: Can governments protect us from AI?
It is no news that AI is the new fear-of-missing-out enhancer for all executives, tech and non-tech alike. It seems it has spread to governmental agencies as well. But everybody is treading lightly and cautiously. During these tumultuous economic times, artificial intelligence promises hyperbolic revenue streams, a couple of orders of magnitude larger than the World Wide Web’s, and we don’t call it wide for nothing.
Also, AI is a multi-factor enhancer. It can transform your wildest dream into a utopia or into an Orwellian dystopia.
How should humanity deal with it? The European Union reacted quicker than ever before. It put legislation in place: proposed it, adapted it, published it, and then it stopped. The good part of this is that it provides some kind of safety. Rules are rules, and they allow countries to unplug anything that doesn’t tick all the boxes.
On the other hand, it is hard to comprehend and to adhere to, especially in a field moving as fast as the technology space. And the AI space is a notch faster, if you can imagine that.
There are other approaches: the softer-spoken way of guidelines. This way, you are advised to be a good citizen, act cautiously, and think about other aspects, not only your own good fortune.
The UK’s cyber security agency provided guidelines, and they invite you to act securely while developing your AI system. Design securely, implement securely, deploy securely, and, you got it, support and maintain securely. They mostly refer to general software development practices, practices the industry has been trying to adopt, without much success, for years now. There is still work to be done there.
Their cousin from across the Atlantic takes a similar approach. CISA released a roadmap of what they intend to do about AI security. For now, they see four lines of effort:
1. Responsible use of AI to support their mission.
2. Assuring AI systems.
3. Protecting critical infrastructure from malicious AI.
4. Joining forces with others.
Even though you have to appreciate the methodical approach, to me it seems a bit too slow-paced for the dynamic world we live in. I do hope that the good people in tech have high ethical standards, because the governmental agencies still need some time to catch up.
Olimpiu Pop reported from the magical Disney World. More resources and opinions on 505updates.com.
That’s it for today’s open source and cybersecurity updates. For direct links to all stories and resources mentioned in today’s episode, go to 505updates.com. 5:05 is a Sourced Network Production, with updates available Monday through Friday on your favorite audio streaming platform. We’ll see you Monday… at 5:05.