REGULATION
-
Remote-Operated Autonomous Vehicles: Safety and Cost Effectiveness
By
–
Why does that matter?
The only questions are:
1. Is it safe?
2. Is it cost effective?
If remote operators have to intervene once in a while, so be it, as long as it's safer and cheaper than a human driver in the car. Historically, most AI-assisted services have used a -
US Government AI Regulation: Big Tech vs Open Source Advocates
By
–
Both are false.
The US government is primarily concerned with two things: public safety and economic development. The corporations advocating for regulations are both large (Google) and small (OpenAI, Anthropic).
So are those in favor of open source: large (Meta, IBM, Shopify), -
AI Fear Drives Policymakers to Enact Symbolic Regulations
By
–
The only half-cogent argument is this: because people have become scared of AI (by fallacious arguments) policymakers have to look like they are doing something about it.
-
Open-source LLMs surge: Can AI R&D regulation work?
By
–
High-performance open-source LLMs are popping up around the world.
Does anyone believe regulating AI R&D can be effective?
(Assuming it's useful. Which it is not.) -
Censorship and Knowledge Exchange Control in Authoritarian Regimes
By
–
In 1983, when the free world was starting to play with personal computers, the Ceaușescu regime in Romania required a license to own a typewriter.
Obscurantism isn't just preventing people from accessing knowledge.
It's also preventing people from exchanging knowledge. -
AI Critics Repeat Social Network Arguments From Years Ago
By
–
I don't think anything he has said is salvageable.
What I find odd, among many other things, is that the arguments he uses against AI today are the *exact* same arguments he used against social networks years ago. -
Agency Limitations for AI Risk Management and Natural Disasters
By
–
That's my point. Our agency is limited for events like pandemics, earthquakes, tornadoes, meteor strikes, and other natural phenomena. The best we can do is take some preemptive defensive measures and hope for the best. But for AI and other manifestations of human activity, we
-
Open Source AI Safety Advantages Over Closed Source
By
–
Excellent thread by Stanford's @percyliang with a list of reasons why open source AI platforms are inherently *safer* than closed source ones.
-
GM Cruise Exposed: Human Operators Behind Self-Driving Claims
By
–
Nice post by Gary Marcus on the revelation by the New York Times that GM's Cruise unit is apparently employing a large number of human operators behind the scenes to make self-driving work. https://garymarcus.substack.com/p/could-cruise-be-the-theranos-of-ai
@GaryMarcus $GM #selfdriving #AI #cruise #GM