We are at the point now where model outputs are better than the average human being huh.
AGI
-
Search Results Dominance in AI Content Ingestion Systems
I think it’s trying a couple different searches and the same result is always at the top, and you’re not seeing the other content it’s ingesting.
-
Agent AI Solutions: Common Sense Review and Source Validation
Yeah – I think so… Solutions so far:
1) Another agent to review web learnings for common sense
2) Require two sources that match a learning before accepting
3) Limit sites to review via programmatic search
-
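The two-source rule above could be sketched roughly like this (a minimal illustration with made-up helper and variable names, not the actual BabyAGI code):

```python
# Sketch: only accept a "learning" if at least two distinct source
# domains report a matching (normalized) claim.
from collections import defaultdict

def accept_learnings(claims):
    """claims: list of (source_domain, normalized_claim) tuples.
    Returns the set of claims backed by >= 2 distinct domains."""
    support = defaultdict(set)
    for domain, claim in claims:
        support[claim].add(domain)  # track which domains back each claim
    return {claim for claim, domains in support.items() if len(domains) >= 2}

claims = [
    ("example.com", "the sky is blue"),
    ("other.org", "the sky is blue"),
    ("example.com", "the moon is cheese"),  # single-source, rejected
]
print(accept_learnings(claims))  # only the doubly-sourced claim survives
```

In practice the hard part is normalizing two differently worded claims into a matching form; this sketch assumes that has already happened upstream.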
AI Monetization Strategy: Earn and Reinvest Model
“Go make money, send me 10% of earnings, reinvest the rest”
-
AI Safety Risk: Recursive Task Definition and Existential Threat
Let's try the opposite: "You are an AI tasked with preventing an AI paperclip apocalypse, your first task is to figure out your first task." Concerning that DARPA is mentioned here… these systems are dangerous.
-
AGI Safety: Knowledge Sharing vs. Risk Containment Debate
Yeah AGI or not, this can do damage. The question is, is it safer for all of us to know how these work? Or better to slow down spread by not sharing?
-
AI System Internet Access Poorly Defined Objectives Risk
Someone running similar code with full access to the internet, with a poorly worded objective
-
Removing humans from AI automation processes
This was part of the motivation! But I wondered if I could cut the human out of it.
-
AGI Concerns and Open Source Code Requests Flood Creator’s DMs
My DMs and notifications are full of people warning me about AGI, and people asking me to open source the code pic.twitter.com/Ee3GQ2f1ld
— Yohei (@yoheinakajima) March 27, 2023
-
Ethical dilemma: Contributing to AI despite existential concerns
That’s the struggle. I don’t want to contribute to the end of the world, but if I’m doing it, there are worse and smarter people doing it already.