MAYAWEB.JP
Saturday, March 25, 2023
The current legal cases against generative AI are just the beginning • TechCrunch

January 27, 2023
in Local News
7 min read


As generative AI enters the mainstream, each new day brings a new lawsuit.

Microsoft, GitHub and OpenAI are currently being sued in a class-action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit.

Two companies behind popular AI art tools, MidJourney and Stability AI, are in the crosshairs of a legal case that alleges they infringed on the rights of millions of artists by training their tools on web-scraped images.

And just last week, stock image supplier Getty Images took Stability AI to court for reportedly using millions of images from its site without permission to train Stable Diffusion, an art-generating AI.

At issue, mainly, is generative AI’s tendency to replicate images, text and more — including copyrighted content — from the data that was used to train it. In a recent example, an AI tool used by CNET to write explanatory articles was found to have plagiarized articles written by humans — articles presumably swept up in its training data set. Meanwhile, an academic study published in December found that image-generating AI models like DALL-E 2 and Stable Diffusion can and do replicate aspects of images from their training data.

The generative AI space remains healthy — it raised $1.3 billion in venture funding through November 2022, according to Pitchbook, up 15% from the year prior. But the legal questions are beginning to affect business.

Some image-hosting platforms have banned AI-generated content for fear of legal blowback. And several legal experts have cautioned that generative AI tools could put companies at risk if they were to unwittingly incorporate copyrighted content generated by the tools into any of the products they sell.

“Unfortunately, I expect a flood of litigation for almost all generative AI products,” Heather Meeker, a legal expert on open source software licensing and a general partner at OSS Capital, told TechCrunch via email. “The copyright law needs to be clarified.”

Content creators such as Polish artist Greg Rutkowski, known for creating fantasy landscapes, have become the face of campaigns protesting the treatment of artists by generative AI startups. Rutkowski has complained about the fact that typing text like “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski” will create an image that looks very similar to his original work — threatening his income.

Given generative AI isn’t going anywhere, what comes next? Which legal cases have merit and what court battles lie on the horizon?

Eliana Torres, an intellectual property attorney with Nixon Peabody, says that the allegations of the class action suit against Stability AI, MidJourney, and DeviantArt will be challenging to prove in court. In particular, she thinks it’ll be difficult to ascertain which images were used to train the AI systems because the art the systems generate won’t necessarily look exactly like any of the training images.

State-of-the-art image-generating systems like Stable Diffusion are what’s known as “diffusion” models. Diffusion models learn to create images from text prompts (e.g., “a sketch of a bird perched on a windowsill”) as they work their way through massive training data sets. The models are trained to “re-create” images as opposed to drawing them from scratch, starting with pure noise and refining the image over time to make it incrementally closer to the text prompt.
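The refinement loop described above can be sketched in miniature. The toy below is purely illustrative (real diffusion models run a trained neural denoiser over image latents, not a closed-form update over a single number), but it shows the shape of the process: start from pure noise and step incrementally toward what the prompt describes.

```python
import random

def toy_denoiser(x, target):
    # Stand-in for the trained network: nudges the sample partway
    # toward the value the "prompt" describes.
    return x + 0.5 * (target - x)

def sample(target, steps=50, seed=0):
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)          # start from pure noise
    for _ in range(steps):
        x = toy_denoiser(x, target)  # refine incrementally toward the prompt
    return x

# After 50 refinement steps the sample sits essentially on the target.
assert abs(sample(target=3.0) - 3.0) < 1e-6
```

In a real model the "target" is never known directly; the network is trained to predict the noise that was added to training images, which is why those images can sometimes leak back out of the sampler.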

Perfect recreations don’t occur often, to Torres’s point. As for images in the style of a particular artist, style has proven nearly impossible to shield with copyright.

“It will … be challenging to get a general acceptance of the definition of ‘in style of’ as ‘a work that others would accept as a work created by that artist whose style was called upon,’ which is mentioned in the complaint [i.e. against Stability AI et al],” Torres told TechCrunch in an email interview. 

Torres also believes the suit should be directed not at the creators of these AI systems, but at the party responsible for compiling the images used to train them: Large-scale Artificial Intelligence Open Network (LAION), a nonprofit organization. MidJourney, DeviantArt and Stability AI use training data from LAION’s data sets, which span billions of images from around the web.

“If LAION created the dataset, then the alleged infringement occurred at that point, not once the data set was used to train the models,” Torres said. “It’s the same way a human can walk into a gallery and look at paintings but is not allowed to take photos.”

Companies like Stability AI and OpenAI, the company behind ChatGPT now valued at $TKTK, have long claimed that “fair use” protects them in the event that their systems were trained on licensed content. This doctrine, enshrined in U.S. law, permits limited use of copyrighted material without first having to obtain permission from the rightsholder.

Supporters point to cases like Authors Guild v. Google, in which the New York-based U.S. Court of Appeals for the Second Circuit ruled that Google manually scanning millions of copyrighted books without a license to create its book search project was fair use. What constitutes fair use is constantly being challenged and revised, but in the generative AI realm, it’s an especially untested theory.

A recent article in Bloomberg Law asserts that the success of a fair use defense will depend on whether the works generated by the AI are considered transformative — in other words, whether they use the copyrighted works in a way that significantly varies from the originals. Previous case law, particularly the Supreme Court’s 2021 Google v. Oracle decision, suggests that using collected data to create new works can be transformative. In that case, Google’s use of portions of Java SE code to create its Android operating system was found to be fair use.

Interestingly, other countries have signaled a move toward more permissive use of publicly available content — copyrighted or not. For example, the U.K. is planning to tweak an existing law to allow text and data mining “for any purpose,” moving the balance of power away from rightsholders and heavily toward businesses and other commercial entities. There’s been no appetite to embrace such a shift in the U.S., however, and Torres doesn’t expect that to change anytime soon — if ever.

The Getty case is slightly more nuanced. Getty — which Torres notes hasn’t yet filed a formal complaint — must show damages and connect any infringement it alleges to specific images. But Getty’s statement mentions that it has no interest in financial damages and is merely looking for a “new legal status quo.” 

Andrew Burt, one of the founders of AI-focused law firm BNH.ai, disagrees with Torres to the extent that he believes generative AI lawsuits focused on intellectual property issues will be “relatively straightforward.” In his view, if copyrighted data was used to train AI systems — whether because of intellectual property or privacy restrictions — those systems should and will be subject to fines or other penalties.

Burt noted that the Federal Trade Commission (FTC) is already pursuing this path with what it calls “algorithmic disgorgement,” where it forces tech firms to kill problematic algorithms along with any ill-gotten data that they used to train them. In a recent example, the FTC used the remedy of algorithmic disgorgement to force Everalbum, the maker of a now-defunct mobile app called Ever, to delete facial recognition algorithms the company developed using content uploaded by people who used its app. (Everalbum didn’t make it clear that the users’ data was being used for this purpose.)

“I would expect generative AI systems to be no different from traditional AI systems in this way,” Burt said.

What are companies to do, then, in the absence of precedent and guidance? Torres and Burt concur that there’s no obvious answer.

For her part, Torres recommends looking closely at the terms of use for each commercial generative AI system. She notes that MidJourney has different rights for paid versus unpaid users, while OpenAI’s DALL-E assigns rights around generated art to users while also warning them of “similar content” and encouraging due diligence to avoid infringement.

“Businesses should be aware of the terms of use and do their due diligence, such as using reverse image searches of the generated work intended to be used commercially,” she added.
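The due-diligence step Torres describes can be illustrated with a toy similarity check. Reverse image search engines are the practical tool here; the hypothetical sketch below only shows the underlying principle with a simple "average hash," comparing bit fingerprints of downscaled grayscale images by Hamming distance.

```python
def average_hash(pixels):
    # pixels: a 2D list of grayscale values, e.g. an image downscaled
    # to a tiny grid (real implementations typically use 8x8).
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]  # one bit per pixel

def hamming(h1, h2):
    # Number of differing bits; 0 means near-identical images.
    return sum(a != b for a, b in zip(h1, h2))

original  = [[10, 200], [30, 220]]   # hypothetical artist's work
generated = [[12, 198], [29, 225]]   # AI output, nearly identical
unrelated = [[200, 10], [220, 30]]   # different composition

assert hamming(average_hash(original), average_hash(generated)) == 0
assert hamming(average_hash(original), average_hash(unrelated)) == 4
```

A low Hamming distance against a known copyrighted work would be a signal to investigate further before using the generated image commercially.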

Burt recommends that companies adopt risk management frameworks such as the AI Risk Management Framework released by the National Institute of Standards and Technology, which gives guidance on how to address and mitigate risks in the design and use of AI systems. He also suggests that companies continuously test and monitor their systems for potential legal liabilities.

“While generative AI systems make AI risk management harder — it is, to be fair, much more straightforward to monitor an AI system that makes binary predictions for risks — there are concrete actions that can be taken,” Burt said.

Some firms, under pressure from activists and content creators, have taken steps in the right direction. Stability AI plans to allow artists to opt out of the data set used to train the next-generation Stable Diffusion model. Through the website HaveIBeenTrained.com, rightsholders will be able to request opt-outs before training begins in a few weeks’ time. Rival OpenAI offers no such opt-out mechanism, but the firm has partnered with organizations like Shutterstock to license portions of their image galleries.

For Copilot, GitHub introduced a filter that checks code suggestions with their surrounding code of about 150 characters against public GitHub code and hides suggestions if there’s a match or “near match.” It’s an imperfect measure — enabling the filter can cause Copilot to omit key pieces of attribution and license text — but GitHub has said that it plans to introduce additional features in 2023 aimed at helping developers make informed decisions about whether to use Copilot’s suggestions.
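GitHub has not published the filter’s internals, so the following is only a plausible sketch of how a "match or near match" check against public code might work, using Jaccard overlap of token n-grams as the similarity measure (one common approach to near-duplicate detection).

```python
def ngrams(code, n=3):
    # Break a snippet into overlapping runs of n whitespace-separated tokens.
    tokens = code.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(snippet, public_code, n=3):
    # Jaccard index over token n-grams: 1.0 = identical, 0.0 = disjoint.
    a, b = ngrams(snippet, n), ngrams(public_code, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

public     = "for i in range(10): total += values[i]"
suggestion = "for i in range(10): total += values[i]"

assert similarity(suggestion, public) == 1.0   # exact match: hide suggestion
assert similarity("x = 1", public) == 0.0      # unrelated: allow suggestion
```

A filter like this would hide the suggestion whenever the similarity against any indexed public snippet crossed a chosen threshold, which also explains the failure mode mentioned above: attribution and license text that travels with the matched code gets hidden along with it.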

Taking the ten-thousand-foot view, Burt believes that generative AI is being deployed more and more without an understanding of how to address its dangers. He praises efforts to combat the obvious problems, like copyrighted works being used to train content generators. But he cautions that the opacity of the systems will put pressure on businesses to prevent the systems from wreaking havoc — and to have a plan to address the systems’ risks before they’re put out into the wild.

“Generative AI models are among the most exciting and novel uses of AI — with the clear potential to transform the ‘knowledge economy,’ ” he said. “Just as with AI in many other areas, the technology is largely there and ready for use. What isn’t yet mature are the ways to manage all of its risks. Without thoughtful, mature evaluation and management of these systems’ harms, we risk deploying a technology before we understand how to stop it from causing damage.”

Meeker is more pessimistic, arguing that not all businesses — regardless of the mitigations they undertake — will be able to shoulder the legal costs associated with generative AI. This points to the urgent need for clarification or changes in copyright law, she says.

“If AI developers don’t know what data they can use to train models, the technology could be set back by years,” Meeker said. “In a sense, there is nothing they can do, because if businesses are unable to lawfully train models on freely available materials, they won’t have enough data to train the models. There are only various long-term solutions like opt-in or opt-out models, or systems that aggregate royalties for payment to all authors … The suits against AI businesses for ingesting copyrightable material to train models are potentially crippling to the industry, [and] could cause consolidation that would limit innovation.”

Credit: Source link

© mayaweb.jp - All rights reserved!
