The modern economy rests on a single road in Spruce Pine, North Carolina. The road runs to the two mines that are the sole suppliers of the quartz required to make the crucibles needed to refine silicon into wafers. There are no known alternative sources. From Ed Conway's Material World:
@emollick
-
Understanding True AI System Capabilities and Their Uncertainties
It is really hard to know the true capabilities of AI systems. There are no instructions. No one knows what is in the training data (even the AI labs don’t know the content of all the web pages, articles & books). And variability in prompts, seeds, and versions adds randomness.
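The variability from prompts, seeds, and sampling can be sketched minimally. This is a toy illustration (not any lab's actual decoding code, and the logit values are made up) of temperature-scaled softmax sampling: the same prompt can yield different tokens under different seeds, which is one reason capability measurements are noisy.

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Sample one token index from logits via temperature-scaled softmax.

    Higher temperature flattens the distribution; different seeds can
    pick different tokens even for an identical prompt.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

toy_logits = [2.0, 1.5, 0.5]  # hypothetical next-token scores
print(sample_token(toy_logits, temperature=0.7, seed=1))
print(sample_token(toy_logits, temperature=0.7, seed=2))  # may differ
```

Runs with the same seed are reproducible; change the seed or temperature and the sampled token can change, even though the model and prompt are identical.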
-
AI Interpretability: Training Data Uncertainty and Model Justification
Was it in the training set? Who knows.
Did the color of the brick really suggest that it was Georgetown and not Philly? I have no idea.
Do the shrubs suggest the US and not Europe? Maybe. Asking the AI how it thought is a request for justification, not truth. The mystery remains.
Claude Identifies Georgetown House: Real Understanding or Hallucination?
Example of why working with AI is both so impressive and so challenging: I show Claude a picture of a house in Georgetown that shouldn't be in its training set. It nails it (GPT-4 does too). I ask it: why Georgetown? Its answers seem great, but could all be hallucinated justification.
-
Common Mental Model Misconception About AI and Privacy
The mental model many folks have of how AI works is that each model is a single entity, like a person. So, for example, privacy is a concern because if GPT-4 sees something from someone in one conversation, it will know it everywhere. This is subtly different from the actual privacy issue.
-
GPT-4 adoption gap among business and tech leaders
In every group I speak to, from business executives to scientists, including a group of very accomplished people in Silicon Valley last night, far fewer than 20% of the crowd have even tried a GPT-4 class model. Fewer than 5% have spent the required 10 hours to learn how they tick.
-
API Base Models Fine-tuning and Guardrail Instructions
As far as I know, you are not getting the base model through APIs; there is definitely fine-tuning, and many guardrail instructions are likely the result of prompting. Refusals to mess with copyrighted work, for example.
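One way such prompted guardrails could be layered on is sketched below. This is a hypothetical illustration, not any provider's actual implementation: the `GUARDRAILS` text and `build_messages` helper are invented here to show how instructions can be prepended to every request without the end user ever seeing them.

```python
# Hypothetical sketch: layering guardrail instructions on top of a
# fine-tuned model via a hidden system prompt. The guardrail text and
# helper name are illustrative, not from any real provider.

GUARDRAILS = (
    "You are a helpful assistant. "
    "Do not reproduce copyrighted text verbatim."
)

def build_messages(user_prompt: str) -> list:
    """Prepend guardrail instructions the end user never sees."""
    return [
        {"role": "system", "content": GUARDRAILS},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Print the full lyrics of a popular song.")
print(msgs[0]["role"])  # the hidden system message comes first
```

Because the guardrails arrive as instructions rather than weights, they can shift between versions without any retraining, which is part of why API behavior differs from the underlying base model.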
-
Why Humans Prefer Anthropomorphic AI Over Formal Assistants
It is similar to how Sydney/Bing caused a huge stir. We are far more freaked out and impressed by an AI that is allowed to act human than by one that insists it is just an assistant.
-
Claude 3 Performance: Design Over Actual Model Capability?
It is really hard to know how much of the Twitter reaction to the "smarts" of Claude 3 is due to the fact that Claude's system prompt/design is pushing the AI to act more human. I am not sure the model is actually better than GPT-4, but it is more willing to play along with users.
-
AI Predicts Neuroscience Experiment Outcomes Better Than Experts
Interesting new result on how AI can help advance scientific research: by predicting in advance which neuroscience experiments would yield positive findings, better than human experts could. And they only used GPT-3.5 class models & found fine-tuning helped: https://arxiv.org/abs/2403.03230