For example, you've been sharing a video with me that proposes invisibly watermarking ChatGPT's text so it can be detected as machine-generated. The problem, as Jeremy points out here, is that once these systems are built, they are very easy to defeat.
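The fragility argument is easy to see with a toy example. Below is a minimal sketch of my own (not the scheme from the video): hide a marker using invisible zero-width characters, then show that a one-line normalization strips it completely.

```python
# Toy invisible watermark: insert zero-width spaces, which render as
# nothing but survive copy-paste. This is an illustration only, not
# the actual scheme proposed in the video.

ZWSP = "\u200b"  # ZERO WIDTH SPACE, invisible when displayed

def watermark(text: str) -> str:
    """Insert a zero-width space after every regular space."""
    return text.replace(" ", " " + ZWSP)

def is_watermarked(text: str) -> bool:
    """Detector: check whether the invisible marker is present."""
    return ZWSP in text

def launder(text: str) -> str:
    """A trivial 'attack': strip the invisible characters.
    Re-typing or paraphrasing the text has the same effect."""
    return text.replace(ZWSP, "")

original = "generated by a language model"
marked = watermark(original)
print(is_watermarked(marked))           # True
print(is_watermarked(launder(marked)))  # False: watermark destroyed
```

More robust statistical watermarks (e.g. biasing token choices during sampling) are harder to strip than this, but they too degrade under paraphrasing, which is the point being made.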
REGULATION
-
Blurring Lines Between AI and Human-Generated Content Online
By
–
On the other hand, the lines we try to draw between AI and human content are increasingly blurry, creating delicate situations like this one on Reddit, where a user was banned on the suspicion that his art (which he says is entirely his own) might have been made with AI…
-
ICML2023 LLM Detection Policy False Positives Analysis
By
–
Bringing it back to our community, I wonder how many false positives (and false negatives!) there will be for #ICML2023's ban on LLM-generated text
-
AI System Automatically Issues Fines for Littering UK Roads
By
–
This #AI program automatically sends fines to motorists who throw rubbish on the UK roads 👍
Caught in the act, these people can be fined more than $150
Should we implement this in more cities in the world?
Credit: LitterCam
#ai #machinelearning #innovation #tech pic.twitter.com/qf00ORDZRI
— Pascal Bornet (@pascal_bornet) January 5, 2023
-
LLM Plagiarism: Transparency and Ethical Decision-Making
By
–
My issue is lack of transparency. If this interpretation is right, it wouldn't be hard to add a line like: "Whether LLMs plagiarize is an emerging topic of discussion, we deliberated and chose to be conservative, we look forward to how things unfold, etc."
-
Misinformation impact on reputation and public understanding
By
–
And it's "not that serious", because in the end, a user who really wants to understand how this works will know how to pick their sources. And one who doesn't, even if exposed to misinformation, will neither retain it nor be much worse off for it. But the reputation of whoever is communicating it badly…
-
Mainstream AI Coverage Lacks Rigor and Spreads Misinformation
By
–
The recent explosion of AI into the mainstream is driving more and more journalists, science communicators, and pundits to start talking about it. The problem is that in many cases the coverage is poor: uninformed, lacking rigor, and spreading misinformation. Bad, in short.
-
Do Large Language Models Constitute Plagiarism?
By
–
I think it is super debatable whether large language models (and more generally, powerful ML models) count as plagiarism. This seems like a big question that we will have to grapple with as a community and society.
-
Google’s AI Content Detection and the Importance of Human Writing
By
–
Google is trying to detect AI content versus human-generated content and I expect they'll get good at it in the next year or two. Best to work some human writing into anything you post imo.
-
Federated Learning: Heterogeneity and Privacy Perspectives
By
–
I like FL as a lens through which to study heterogeneity in clients, which may have different distributions, resources, or capabilities. But not for privacy. Here is another perspective on privacy of FL, which is a bit more conspiratorial than my own https://x.com/le_science4all/status/1602432680657928193
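To make the heterogeneity point concrete, here is a minimal FedAvg-style aggregation sketch of my own (names, shapes, and numbers are illustrative, not from the thread): clients with different data distributions and dataset sizes contribute to the global model in proportion to how much data they hold.

```python
# Hypothetical sketch of FedAvg-style aggregation over heterogeneous
# clients. Illustration only; a real system would handle stragglers,
# partial participation, and differing client capabilities.

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters.

    client_weights: list of parameter vectors, one per client
    client_sizes:   number of local examples per client (the weights)
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for params, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * params[i]
    return agg

# Two clients whose local data pull the model in opposite directions;
# the larger client dominates the average.
clients = [[1.0, 0.0], [0.0, 1.0]]
sizes = [30, 10]
print(fedavg(clients, sizes))  # [0.75, 0.25]
```

Note that the server only ever sees parameter vectors, never raw data, which is the usual privacy pitch; the skeptical view linked above is that those parameters can still leak a lot about the underlying data.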