Packed event at the @agihouse_org agent hackathon today! We had the privilege to hear from @vkhosla, meet great founders, and I showcased some live @OpenAI API demos. The energy was buzzing – thanks for the warm welcome and all the engaging questions!
-
Romain Huet Joins AI Agent Hackathon at AGI House
Excited to join the AI agent hackathon tomorrow at @agihouse_org! Build AI assistants with @OpenAI APIs, see live demos, and hear from @vkhosla and @pirroh. The event is sold out, but there are a few spots left for high-quality founders and builders:
-
Expert predictions on AI future from industry leaders
Hear predictions from Carlos Rincon Sanchez, Oscar Loria, @snpower, @krishnarp, and Sinduri Guntupalli
-
AI System Improves Accuracy in Personal Information Handling
Huh, interesting – in the past it has usually put in incorrect details, like saying I was CTO at GitHub or Eventbrite. Maybe it's got better since the last update
-
Models Cannot Accurately Answer Questions About Themselves
I don't trust models to answer questions about themselves accurately
-
Training cutoff dates help AI models refuse out-of-scope questions
Doesn't imply much, to be honest. The thing that's most interesting for me in there is the training cut-off – it's good to provide an accurate date to the model so it can usefully refuse to answer questions about events beyond that point
-
AI Model Gives Hallucination-Free Response, for Once
I don't see any hallucinated details in here at all, which never happens with these kinds of ego-prompts for me
-
LLM Knowledge Creation and Human Verification Requirements
For me part of the problem is that if an LLM did create "new knowledge" it would be incapable of verifying that what it had created was genuinely new – that's not possible without human involvement
-
Trusting AI Agents: The Importance of Transparency
Honestly, I haven't personally experienced any agents that fit that description. I only trust agents where I can see exactly what they are doing, and beyond Code Interpreter I haven't actually used many examples myself
-
Chatbot Honesty and Trust: The Lying Problem
Another example of this pattern
— Simon Willison (@simonw) 27 April 2024
If people ask your chatbot how it did something and it evidently lies to them when it answers, you're going to have trust issues! https://t.co/YCNp2h1uCV