If I wrote an "AGI ruin FAQ", what Qs would you, yourself, personally, want answers for? Not what you think "should" be in the FAQ, what you yourself genuinely want to know; or Qs that you think have no good answer, but which would genuinely change your view if answered.
REGULATION
-
OpenAI Criticized for Poorly Timed Release of Unsafe LLM
Thank you @OpenAI for making the world a more paranoid place to live. You released a #LLM totally unfit for humans to use safely in 2022. Your inept timing is creating more havoc than we should bear @ylecun @GPAI_PMIA @SEDIAgob
-
Early-Stage Startup Data Security and Investor Privacy Risks
I mean… it's traction data and investor names for an early-stage startup. Will be outdated info in a month. Not too worried about a content moderator at OpenAI finding this and sending it to their VC buddy, which is the worst that could happen…
-
Perplexity Discontinues BirdSQL Due to Twitter API Changes
We, unfortunately, had to take down @perplexity_ai's BirdSQL due to changes in the terms of use and pricing of the Twitter API made on February 9th. BirdSQL was actually the first thing we ever worked on at Perplexity, and it gave many people fun memories of Facebook Graph Search,
-
Understanding AI Limitations for Ethical Development
AI comes with its own set of limitations, and @LofredM reminds us that understanding these limitations is crucial for responsible AI development. Read more on how AI can be used ethically and equitably: https://bit.ly/3LDGtnk @jibuelias @wef @OpenAI @UniofOxford @truera_ai
-
Red-teaming GPT-4: Public Transparency Over Corporate Secrecy
To start, I want to say I have nothing to gain here, and I don't condone anyone actually acting upon any of GPT-4's outputs. However, I believe red-teaming work is important and shouldn't be conducted in the shadows of AI companies. The general public should know the capabilities
-
OpenAI Stops Using Customer Data for Model Training by Default
Seems like they did – but rather recently. https://techcrunch.com/2023/03/01/addressing-criticism-openai-will-no-longer-use-customer-data-to-train-its-models-by-default/amp/
-
NDAs and Professional Ethics in AI Industry
Good thing I don’t sign NDAs, but it’s an important point for those who do.
-
Data Privacy and Tech Company Trust in AI Models
Technically, I'm having the model predict tokens based on tokens it's calculating from the sensitive data – which is different from "feeding sensitive data to the model". Though if your question is whether we can trust tech companies not to use the sensitive data for other purposes
-
GPT-4 Risky Emergent Behaviors and Illicit Advice Capabilities
GPT-4: A new capacity for offering illicit advice and displaying 'risky emergent behaviors'. The program behind ChatGPT can advise how "to kill the most number of people". https://zdnet.com/article/gpt-4-has-new-capacity-for-offering-illicit-advice-and-having-risky-emergent-behaviors/ @OpenAI #AI #OpenAI #ChatGPT