haha, I'm familiar with DAOs at a high level and I don't think so. LLM LLCs are about AI power, not about decentralization, transparency, or governance. Actually in many ways the opposite of DAOs in a basic execution of the idea.
LLMs
-
Language Models Continue Sequences from Prompts, Not Maximize Rewards
By
–
they don't maximize rewards; they are given a prompt (a kind of inception) and continue the sequence
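A minimal sketch of what "continue the sequence" means, with a toy hand-built next-token table standing in for a trained model (the table and sampling loop are illustrative assumptions, not how any real LLM works internally):

```python
import random

# Toy next-token table standing in for a trained LLM (hypothetical data).
# A real model predicts a distribution over tokens; here each token maps
# to a few weighted successors.
NEXT = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def continue_sequence(prompt, steps, rng=random.Random(0)):
    """Sampled continuation: no reward signal anywhere --
    the model just extends the prompt token by token."""
    tokens = prompt.split()
    for _ in range(steps):
        options = NEXT.get(tokens[-1])
        if not options:
            break  # no known continuation for the last token
        words, weights = zip(*options)
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(continue_sequence("the cat", 3))  # -> "the cat sat down"
```

Note there is no objective being maximized at inference time; the prompt just conditions what comes next.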
-
Automated Companies Powered Entirely by LLMs Communicating via Text
By
–
automated companies made up just of LLMs (CEO LLM, manager LLMs, IC LLMs), running asynchronously and communicating over a Slack-like interface in text…
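A toy sketch of the idea, with stub functions in place of the role LLMs and a queue as the Slack-like text bus (the role names, routing, and message format are all hypothetical; a real system would call an actual model per role):

```python
from collections import deque

# Stub "LLMs": each role is a function from an incoming message to
# outgoing (recipient, text) messages. In the real idea, each of these
# would be a model instance running asynchronously.
def ceo(msg):
    return [("manager", f"plan: {msg}")]

def manager(msg):
    return [("ic", f"task: {msg}")]

def ic(msg):
    return [("log", f"done: {msg}")]

ROLES = {"ceo": ceo, "manager": manager, "ic": ic}

def run_company(goal):
    """Slack-like text bus: messages are processed in arrival order,
    each role reading text and emitting text."""
    bus = deque([("ceo", goal)])
    log = []
    while bus:
        recipient, text = bus.popleft()
        if recipient == "log":
            log.append(text)
            continue
        bus.extend(ROLES[recipient](text))
    return log

print(run_company("ship v1"))  # -> ['done: task: plan: ship v1']
```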
-
Extending LLMs to Vision: Incremental Multimodal Integration with Flamingo
By
–
Extending LLMs from text to vision will probably take time but, interestingly, can be made incremental. E.g. Flamingo (https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/tackling-multiple-tasks-with-a-single-visual-language-model/flamingo.pdf (pdf)) processes both modalities simultaneously in one LLM.
-
Why LLMs Process Text Instead of Raw Pixels
By
–
Interestingly, the native and most general I/O media of existing infrastructure are screens and keyboard/mouse/touch. But pixels are computationally intractable atm, relatively speaking. So it's faster to adapt (textify/compress) the most useful ones so LLMs can act over them
-
LLMs as Cognitive Engines Orchestrating Compute Infrastructure via Text
By
–
Good post. A lot of interest atm in wiring up LLMs to a wider compute infrastructure via text I/O (e.g. calculator, python interpreter, google search, scratchpads, databases, …). The LLM becomes the "cognitive engine" orchestrating resources, its thought stack trace in raw text
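A sketch of that orchestration loop, assuming a made-up `TOOL: argument` action syntax and a scripted stand-in for the model (the "thought stack trace" is just the growing scratchpad string):

```python
# The model emits text actions, the harness runs the matching tool and
# appends the result to the scratchpad, and the model continues from the
# augmented text. Action syntax and the scripted model are assumptions.
def calculator(expr):
    return str(eval(expr, {"__builtins__": {}}))  # toy calculator tool

TOOLS = {"CALC": calculator}

def scripted_llm(scratchpad):
    """Stand-in for a real LLM: decides the next text action."""
    if "RESULT:" not in scratchpad:
        return "CALC: 37*21"
    return "ANSWER: " + scratchpad.split("RESULT: ")[-1].splitlines()[0]

def orchestrate(prompt, max_steps=5):
    scratchpad = prompt
    for _ in range(max_steps):
        action = scripted_llm(scratchpad)
        if action.startswith("ANSWER:"):
            return action
        tool, arg = action.split(": ", 1)
        scratchpad += f"\n{action}\nRESULT: {TOOLS[tool](arg)}"
    return "ANSWER: (gave up)"

print(orchestrate("What is 37*21?"))  # -> "ANSWER: 777"
```

The LLM never touches the tools directly; everything flows through raw text, which is what makes wiring in new resources (search, databases, interpreters) cheap.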
-
LangChain 0.0.14 Release with GitHub Actions and Vector DB Improvements
By
–
🦜🔗LangChain version 0.0.14
— LangChain (@LangChain) 16 November 2022
🧹Improve GitHub Actions (@PredragGruevski)
🎉Improve env var handling (@deliprao)
🥗Improve coloring of logging https://github.com/hwchase17/langchain …
Also, here's an example of using the new vector DB question/answering chain https://t.co/hPFqkC1l2c
-
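Not the LangChain API itself, but a from-scratch sketch of what a vector-DB question/answering chain does under the hood (toy bag-of-words embeddings; a real chain uses a learned embedding model and an LLM to produce the final answer):

```python
import math
from collections import Counter

# Hypothetical corpus for illustration.
DOCS = [
    "LangChain chains LLM calls together with tools and data.",
    "A vector database stores embeddings for similarity search.",
]

def embed(text):
    """Toy embedding: bag-of-words counts (a real chain uses a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

INDEX = [(embed(d), d) for d in DOCS]  # the "vector DB"

def qa(question):
    """Retrieve the nearest document and stuff it into the prompt."""
    q = embed(question)
    context = max(INDEX, key=lambda pair: cosine(q, pair[0]))[1]
    # An LLM would answer from this prompt; here we just return it.
    return f"Context: {context}\nQuestion: {question}"

print(qa("What does a vector database store?"))
```

Retrieval picks the second document here because it shares the most terms with the question; the chain's job is exactly this embed → search → prompt-stuffing pipeline.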
Data Quality and Curriculum Learning for LLM Training
By
–
"Obviously anything that looks useless (like SHA hashes or other noise) is not worth training on and is just wasting training capacity and time"
"You may want to start with simpler topics and work up to more complex later, just like in human school"
-
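The two quoted heuristics can be sketched as a crude data pipeline, where character entropy flags hash-like noise and length stands in for difficulty (both proxies, and the threshold, are assumptions for illustration):

```python
import math
from collections import Counter

def char_entropy(text):
    """Shannon entropy over characters, in bits."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_noise(text):
    # SHA-like strings: no spaces, near-uniform character distribution.
    return " " not in text and char_entropy(text) > 3.5

def build_curriculum(docs):
    useful = [d for d in docs if not looks_like_noise(d)]  # filter noise
    return sorted(useful, key=len)  # shorter ~= simpler, crude proxy

docs = [
    "a1f4c9e02bd7386e5f0c4d21",  # hash-like noise, filtered out
    "Cats are mammals.",
    "Neural networks approximate functions via gradient descent.",
]
print(build_curriculum(docs))  # noise dropped, rest ordered simple-first
```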
Concerns about GPT alignment with human values and safety
By
–
"Finally, we are very concerned that this GPT could be unaligned with humans. This would be bad. We want this to be a nice GPT that deeply loves all humans and is always considerate and helpful. Thanks"
-
GPT Training Framework with Dataset and Sampling Tools
By
–
Prompt: "You are a GPT and you're in charge of training an even better GPT, congrats! You have a dataset here . You can train it on document chunks like this: and sample its current understanding like this: . And here's a calculator and a scratchpad . Begin:"