Some venues that come to mind include COLT, ICML, FAccT, FORC, and EC, though these mostly have late-January deadlines.
REGULATION
-
AI Systems, User Preferences, Moral Pluralism, and Ethics
By
–
i think AI systems should do what their users want, subject to the very wide bounds of what society decides is acceptable. moral pluralism seems right, and it's also very important to allow for moral evolution over time.
-
Elon Musk's Twitter Acquisition and Speech Moderation Challenges
By
–
(welcome to hell) https://theverge.com/2022/10/28/23428132/elon-musk-twitter-acquisition-problems-speech-moderation
-
Constitutional AI: Moving Beyond Researcher-Defined Constitutions
By
–
In our paper we used an ad hoc constitution drafted purely for research purposes. Ultimately, we think constitutions shouldn't be defined by researchers working in isolation, but by groups of experts from different disciplines working together.
-
Constitutional AI: Making Implicit Principles Explicit in AI Systems
By
–
While the name “Constitutional AI” may sound ambitious, we chose it to emphasize that powerful, general-purpose AI systems will always be operating according to *some* principles, even if they are left implicit, or encoded in privately held data.
-
Saudi ownership of X appears problematic compared to Elon
By
–
trading Elon for Saudis does not seem like a win
-
Public and Private AI Training: Different Accuracy Standards
By
–
Also, my personal opinion is that both public pretraining (with certain constraints/limitations) and private training from scratch have roles to play in the space. Of course, they can't be held to the same accuracy standard.
-
Public vs Proprietary Datasets in AI Model Training
By
–
Thanks Alex. I like the papers that do this, but I also have some concern when this is done on a dataset that is proprietary and that only Google has access to (JFT). I would like to see a version that is pretrained on LAION. That still has privacy issues, but at least it is all public.
-
Privacy in AI: Beyond Training and Model Usage
By
–
And finally, privacy is… hard! While a lot of work focuses on training and using models privately, this is a narrow view of privacy, which encompasses much more. 14/n
-
Privacy-Respecting Public Pre-Training Datasets for AI Models
By
–
So where do we go from here? We conclude with a number of suggestions for the field. The first one focuses on making sure we have public pre-training sets that are truly privacy-respecting. Can we make such a dataset/model with comparable utility to what people use now? 12/n