FWIW, the core of Mathew's argument can be dismantled in this case by the fact that generative models are expression-copying machines. For instance, in the context of character copyrights:
REGULATION
-
Copyright Resolution and Artist Control in AI-Generated Content
By
–
(Assuming the copyright issue is fixed…) If the people who buy the results are OK with that, or if the artists know how to control or correct it, I don't see any problem.
-
Consistency and Measurability in Research Override Concerns
By
–
It doesn't matter if it's delusion or hallucination. It works as long as it's shown to be consistent with the theory, and in this case it's measurable. Non-profit research is allowed worldwide even if the data is copyrighted.
-
Automated Thumbnails and Copyright: Expression Reproduction Issues
By
–
I guess one argument would be that thumbnails generated by an automated process cannot themselves be copyrighted. Further, there's no proof the underlying expression was reproduced by the model.
-
Full Self Driving: Examining Five Years of Autonomous Vehicle Progress
By
–
Full Self Driving and the Emperor's New Clothes: @mpesce and co-hosts reflect on the promise and the progress over the past 5 years: https://omny.fm/shows/the-next-billion-seconds/the-next-billion-cars-autonomous-vehicles-fail?t=3m14s
-
Mercedes-Benz Obtains Nevada Certification for Drive Pilot Level 3
By
–
Drive Pilot, @MercedesBenz's SAE level 3 system, certified in Nevada https://actuia.com/actualite/drive-pilot-le-systeme-sae-de-niveau-3-de-mercedes-benz-certifie-au-nevada/
#AI #artificialintelligence #car -
AI Tools Disclosure Standards for Academic Authors
By
–
1. Authors should report tools they use (consistent with field standards)
2. Authors always take responsibility for paper contents
3. Generative AI should not be listed as an author
Seems sensible enough. -
ChatGPT Limits on Politically Sensitive Topics to Prevent Toxicity
By
–
Meanwhile, ChatGPT's outputs will be limited in a different way — OpenAI designed it to avoid politically sensitive topics like race in an effort to prevent it from spewing toxic comments.
-
Baidu’s chatbot faces censorship and labeling requirements in China
By
–
Of course, Baidu's chatbot outputs will be heavily limited by state censorship—and will fall under China's new rules for "deep synthesis," which require the labeling of outputs that could be misconstrued as real. The US doesn't have equivalent rules.
-
Mafia concept gains relevance in AI future
By
–
Doesn't seem like it now, but give it 3-4 more years. Then "Mafia" would make complete sense.