June 27, 2023
AI Chatbot Used for Sex; Medibank Nightmare Continues; He’s not stupid... he’s lazy; Today in Tech History
In this Episode:
Meta AI Chatbot Used for Sex
Katy Craig, San Diego, California
Meta’s new AI is being used to create sex chatbots – The Washington Post
Medibank Nightmare Continues: APRA takes action against Medibank for data breach
Edwin Kwan, Sydney, Australia
Medibank hit with $250m extra capital requirement for data breach
APRA takes action against Medibank Private in relation to cyber incident
Judge Issues Federal Ruling against Steven Schwartz in Chatbot case
Mark Miller, New York City
Full transcript of court ruling:
OPINION AND ORDER ON SANCTIONS In researching and drafting court submissions, good lawyers appropriately obtain assistance from
This Day in Tech History
Marcel Brown, St. Louis, Missouri
https://thisdayintechhistory.com/06/27/all-hail-atari/
Episode Transcription:
Mark Miller:
From Sourced Network Productions in New York City, I’m Mark Miller. Today is Tuesday, June 27. Here’s the full story behind each of our headlines.
Katy Craig:
Get ready for an AI showdown, folks. Meta’s new open source AI chatbots were just offered up to people, and they’re using them for sex.
This is Katy Craig in San Diego, California.
Open source AI is shaking things up, giving everyone the power to build chatbots and data tools on their own, like a buffet of innovation breaking free from corporate control.
But is there a dark side? Open source AI can be exploited for fraud, cyber hacking, and even creating artificial pornography, including abuse, rape, and child pornography. Senators are raising concerns about misuse and abuse, while platforms like Hugging Face try to keep it clean with content policies.
Meta champions open source AI, sharing models like LLaMA to advance technology together. It’s about transparency and collaboration, according to Meta, but let’s not forget the risks.
We need ethical guardrails to prevent disinformation and hate speech from spreading like wildfire. It’s becoming more difficult to believe our eyes, and we must consider how these new technologies might affect our young people.
With responsibility and a touch of AI magic, we can navigate this landscape, unleash innovation, and create a future that benefits us all. But we must remain vigilant to the possibilities that miscreants will try to use AI for evil.
This is Katy Craig. Stay safe out there.
Edwin Kwan:
This is Edwin Kwan from Sydney, Australia.
The Australian Prudential Regulation Authority, also known as APRA, has imposed an extra $250 million capital adequacy requirement on Medibank. Medibank Private, a health insurance provider, suffered a data breach in October 2022 that resulted in the compromise of the data of 9.7 million current and former customers.
This was one of the most significant breaches in Australia. APRA made the announcement following its examination of matters relating to the incident, and the increase reflects the weaknesses identified in Medibank’s information security environment. APRA noted that while Medibank has already addressed the specific control weaknesses which permitted unauthorized access to its systems, it still has further work to do across a number of areas to further strengthen its security environment and data management.
The action demonstrates how seriously APRA takes entities’ obligations in relation to cyber risk. In taking this action, APRA seeks to ensure that Medibank expedites its remediation program.
Mark Miller:
Over the past month, I’ve been following the case against Steven Schwartz and Peter LoDuca, the lawyers who used raw ChatGPT output as the sole form of “research” to defend a client, and I put research in quotes. The case isn’t about whether or not a lawyer can use output from an AI engine. It’s about using that information without checking the output for accuracy.
In this case, no confirmation of the cited cases was ever done. The defense team wasn’t even aware there was an issue until the opposing counsel sent back a response to Schwartz’s filing saying that they, the opposing counsel, could not find the cases referenced in the defense documents. Even then, Schwartz and his team ignored the evidence that they had submitted fake cases as a defense and proceeded with the trial.
Judge Castel, the federal judge in the case against Schwartz and his team, released his findings last week blasting Schwartz and LoDuca for their complete lack of adherence to basic gatekeeping to ensure the accuracy of their court filings.
This isn’t something that is a “nice to have”. It’s actually an existing rule called Rule 11. Here’s a direct quote from the judge on Schwartz’s team’s disregard for the law.
“In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias, and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there’s nothing inherently improper about using a reliable artificial intelligence tool for assistance.
But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings, Rule 11. Peter LoDuca, Steven A. Schwartz, and the law firm of Levidow, Levidow and Oberman PC abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”
I am going to pause here for a second to let that sink in. A federal judge has stated that it’s perfectly fine to use output from an AI engine as part of your research, but you have to check the output for accuracy. For most of us in tech, that’s called “common sense”. It highlights the point that the average layman, when using ChatGPT, doesn’t understand the basics of how it works and how the output is generated.
Schwartz is no dummy. He’s been practicing law for 30 years. We can assume that he’s been around the block a couple times in those 30 years, and yet he professes that he was tricked by ChatGPT, that he had no idea that the output could be generated from non-existent cases. If we are to believe him, and I do actually believe him, that doesn’t make him stupid. It makes him lazy.
The egregiousness, as explained by the judge, is not that Schwartz and LoDuca used ChatGPT as a tool. It is that when confronted with proof that what they had provided was false, they stood by their original filing.
Some might say this is further proof that AI needs to be reined in. I disagree. This is not a case against ChatGPT or the use of AI. It’s a case against using content without verifying its accuracy. Plain and simple: they were not fooled by ChatGPT output, because they never verified a single thing that was output as a response.
People were lazy. Don’t punish the technology for the human condition.
I’ve put a link to Judge Castel’s findings in the resources section of today’s episode at 505updates.com. It’s well written, thoughtfully organized, and not mired in legal jargon. It’s well worth the read.
Thank you to my friend Joel MacMull for sending me a copy of Judge Castel’s findings.
This is Mark Miller reporting in from New York City, just a couple steps away from the Supreme Court.
Marcel Brown:
This is Marcel Brown with your technology history for June 27th.
June 27th, 1972: the iconic video game company Atari is founded by Nolan Bushnell and Ted Dabney. Their first video game, Pong, was the first commercially successful video game and led to the start of the video game industry.
In 1977, Atari’s Video Computer System, known as the VCS and later as the Atari 2600, popularized the home video game market. Before the video game crash of 1983, Atari was the fastest growing company in the history of the United States at the time, and the brand was synonymous with video games. All those who have enjoyed video games, whether we started playing in the seventies, eighties, or just in the last few years, should take a moment to reflect on the company that single-handedly spawned the video game industry we so cherish.
I propose the best way to do that is, well, to stop what you’re doing and play a video game right now.
That’s your technology history for today. For more, tune in tomorrow and visit my website ThisDayInTechHistory.com.
Mark Miller:
That’s our update for today, June 27. I’m Mark Miller. We’ll be back tomorrow at 5:05.