
AI ‘hallucination’ in B.C. court prompts caution

A Vancouver tech lawyer’s work with video game companies put him in a position to watch the rise of artificial intelligence in the industry.

Now Ryan Black finds himself on the front lines again, as his profession grapples with the technology.

“The degree to which it was impacting game studios surprised people,” said Black, who helped the Law Society of British Columbia draft advice for lawyers about the use of AI.

“The generative (AI) revolution kind of has hit people hard in terms of, ‘Oh my gosh, we have to pay attention to this now,’ so I would say that it’s a new thing for a lot of people,” he said, referring to technology that can create arguments and essays based on prompts from a user.

“It doesn’t surprise me that lawyers don’t know a lot about it.”


The rise of generative AI tools like ChatGPT, he said, is a “revolutionary change to the practice of law.”

But a recent ruling by the B.C. Supreme Court shows lawyers must use the technology cautiously and skeptically, legal experts say.

In a costs ruling released on Feb. 20 related to a child custody case, it was revealed that Vancouver lawyer Chong Ke had used ChatGPT to prepare material submitted in the case. The material included citations to cases that don’t exist, something her opponent in the case called an AI “hallucination.”

Ke told the court that discovering that the cited cases were fictitious was “mortifying,” and she quickly informed the Law Society and admitted a “lack of knowledge of the risks” of using AI to draft court submissions.

“I am now aware of the dangers of relying on AI generated materials,” Ke said in an affidavit. “I understand that this issue has arisen in other jurisdictions and that the Law Society has published materials in recent months intended to alert lawyers in B.C. to these dangers.”

Ke apologized to the court and her fellow lawyers.




Her lawyer John Forstrom said in an email that the case “has provoked significant public interest, but the substance of what happened is otherwise unremarkable.”


“I’m not sure that the case has any significant implications regarding the use of generative AI in court proceedings generally,” Forstrom said.



“Ms. Ke’s use of AI in this case was an acknowledged mistake. The question if or how generative AI might appropriately be employed in legal work did not arise.”

The society is now investigating Ke’s conduct, spokeswoman Christine Tam said in an email.

“While recognizing the potential benefits of using AI in the delivery of legal services, the Law Society has also issued guidance to lawyers on the appropriate use of AI and expects lawyers to comply with the standards of conduct expected of a competent lawyer if they do rely on AI in serving their clients,” Tam said.

The law society’s guidance, issued in late 2023, urges lawyers to seek training in the use of the technology, and be aware of confidentiality issues around data security, plagiarism and copyright concerns, and potential bias in materials produced by the technology.

Law societies and courts in other provinces and territories have also produced guidance on the use of AI. For instance, the Supreme Court of Yukon said in a June 2023 practice direction that if any lawyer relies on AI “for their legal research or submissions in any matter and in any form,” they must tell the court.

For Black, with the firm DLA Piper, the use of AI is causing a lot of “necessary angst about relying on a tool like this to do any real heavy lifting.”


Black said delivering justice requires the impartiality of a “human peer,” capable of evaluating and making important legally binding decisions.

He said he’s encountered lawyers and judges who range from being “completely dialled into it, to completely averse to it, or completely agnostic to it.”

He said he’s been “impressed by the pace of the technology,” but caution and skepticism around any materials generated by the technology are essential for lawyers now and into the future.

Reflecting on the Ke case and others like it, Black said tools like ChatGPT are “really good autocorrect tools that do a fantastic job of relating text to other text, but they have no understanding of the world, they have no understanding of reality.”

UBC law professor Kristen Thomasen said in an interview that the B.C. Supreme Court case shows not only the limitations of the technology, but also the need for lawyers and other professionals “to be critical of the technologies that they’re using.”

Thomasen said evaluating the strengths and weaknesses of technology has to be done “in spite of what is often a lot of hype.”

She said it’s important not to delegate work that requires a human element to a computer system in “high stakes” professions like law and policing where new, potentially problematic technologies should be approached and employed with caution.


Thomasen said the technology has been described as a “living thing” or an existential threat to humanity, or thought of as a “superhuman ghost in the machine,” but despite being highly sophisticated, it’s just doing math based on data scraped from the internet.

She said that stepping back from seeing it as a “person” would help institutions, students and teachers better understand what the technology actually does.

“As we see how it progresses, I think it makes sense to then, kind of like the law societies, keep developing more refined and detailed guidelines or rules as we gain a better understanding of what the technology looks like,” she said.

The judge in the case that involved Ke said it would be “prudent” for her to tell the court and opposing lawyers if any other material employed AI technology like ChatGPT.

“As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers,” Justice David Masuhara wrote in his costs ruling. “Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.”

Black said artificial intelligence technology isn’t going away, and any rules developed now will likely need changing due to the “breakneck speed” of its evolution.


“We are for sure now in a world where AI will exist,” he said. “There is no un-ringing this bell as far as I’m concerned.”





