Bringing it back to our community, I wonder how many false positives (and false negatives!) there will be for #ICML2023's ban on LLM-generated text
REGULATION
-
AI System Automatically Issues Fines for Littering UK Roads
By
–
This #AI program automatically sends fines to motorists who throw rubbish on the UK roads 👍
— Pascal Bornet (@pascal_bornet) January 5, 2023
Caught in the act, these people can be fined more than $150
Should we implement this in more cities in the world?
Credit: LitterCam #ai #machinelearning #innovation #tech pic.twitter.com/qf00ORDZRI
-
LLM Plagiarism: Transparency and Ethical Decision-Making
By
–
My issue is lack of transparency. If this interpretation is right, it wouldn't be hard to add a line like: "Whether LLMs plagiarize is an emerging topic of discussion, we deliberated and chose to be conservative, we look forward to how things unfold, etc."
-
Misinformation impact on reputation and public understanding
By
–
And it's not "that serious," because in the end, users who really want to understand how this works will know how to choose their sources. And those who don't, even if exposed to misinformation, will neither retain it nor suffer any greater harm from it. But as for the reputation of whoever communicates it badly…
-
Mainstream AI Coverage Lacks Rigor and Spreads Misinformation
By
–
The recent explosion of AI into the mainstream is prompting more and more journalists, science communicators, and pundits to weigh in on it. The problem is that in many cases the topic is being handled badly: from a place of ignorance, with little rigor, and spreading misinformation. Badly, in short.
-
Do Large Language Models Constitute Plagiarism?
By
–
I think it is super debatable whether large language models (and more generally, powerful ML models) count as plagiarism. This seems like a big question that we will have to grapple with as a community and society.
-
Google’s AI Content Detection and the Importance of Human Writing
By
–
Google is trying to detect AI content versus human-generated content and I expect they'll get good at it in the next year or two. Best to work some human writing into anything you post imo.
-
Federated Learning: Heterogeneity and Privacy Perspectives
By
–
I like FL as a lens through which to study heterogeneity in clients, which may have different distributions, resources, or capabilities. But not for privacy. Here is another perspective on the privacy of FL, which is a bit more conspiratorial than my own: https://x.com/le_science4all/status/1602432680657928193 …
-
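The client heterogeneity the post describes can be illustrated with a minimal federated-averaging (FedAvg) sketch. Everything below is a hypothetical illustration, not from the original thread: the linear model, the distribution shifts per client, and the helper names are all assumptions chosen to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client(true_w, shift, n=50):
    """Hypothetical client: features drawn from a shifted distribution,
    modeling the heterogeneity across clients the post mentions."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_step(w, X, y, lr=0.01, epochs=5):
    """A few local gradient-descent epochs on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

true_w = np.array([1.0, -2.0])
# Three clients with different feature distributions (shifts).
clients = [make_client(true_w, shift) for shift in (-1.0, 0.0, 2.0)]

w = np.zeros(2)  # global model held by the server
for _ in range(100):
    # Each client trains locally; the server only sees model updates,
    # which it averages into the new global model (FedAvg).
    local_ws = [local_step(w, X, y) for X, y in clients]
    w = np.mean(local_ws, axis=0)
```

Note the privacy caveat in the post: the server never sees raw data, but the averaged updates themselves can still leak information about client data, which is why FL alone is a weak privacy mechanism.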
Economic Tightening Drives Increased AI Governance and Scrutiny
By
–
With the economy tightening and interest rates rising, I agree, more scrutiny and governance is on the way.
-
Generative AI Safety Debate: Open versus Closed Models
By
–
10. Open or Closed: Negative applications and real-world vulnerabilities of Generative AI will come to the fore. This will fuel the debate around ‘safety’ and whether these technologies should be open or closed.