LangChain version 0.0.10: @cloud_nlp support (@sjwhitmore), @elastic support (@sjwhitmore), support for text2text generation models on @huggingface (h/t @deliprao for the idea), and a model laboratory: easily compare the same input across different models. https://github.com/hwchase17/langchain/…
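The "model laboratory" idea above can be sketched in plain Python: fan the same prompt out to several models and collect the outputs side by side. The `compare` helper and the toy `models` callables below are hypothetical stand-ins for illustration, not LangChain APIs.

```python
# Minimal sketch of a model laboratory: run one prompt through
# several models and return the results keyed by model name.
# The callables here are toy stand-ins, not real LLM wrappers.

def compare(models, prompt):
    """Return {model_name: output} for one prompt across all models."""
    return {name: llm(prompt) for name, llm in models.items()}

# Toy stand-ins for real models (assumptions for illustration).
models = {
    "upper": lambda p: p.upper(),
    "reverse": lambda p: p[::-1],
}

results = compare(models, "hello")
for name, output in results.items():
    print(f"{name}: {output}")
```

In a real setting each callable would wrap an LLM client; the side-by-side dict makes eyeballing differences across models straightforward.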
-
LangChain 0.0.10: Cloud NLP, Elastic, and Model Laboratory Features
-
Non-Experts Answer Expert Questions on MMLU and QuALITY
We ask non-experts to answer expert-level questions on MMLU, and also ask people to answer questions about long QuALITY passages under a time limit that’s too short for a careful read.
-
Scalable Oversight Framework and Language Model Question-Answering Proof of Concept
Along with developing a framework for scalable oversight, we also conduct a proof-of-concept experiment that demonstrates a couple of question-answering tasks that work well under this paradigm with current language models.
-
AI Systems Improving Human Oversight of Large Language Models
In "Measuring Progress on Scalable Oversight for Large Language Models", we show how humans could use AI systems to better oversee other AI systems, and demonstrate some proof-of-concept results where a language model improves human performance on a task.
-
LangChain 0.0.9: Hugging Face Embeddings and API Key Management
LangChain version 0.0.9: support for embeddings with @huggingface through `sentence_transformers`, contributed by @abdrahman_issam (example notebook: https://colab.research.google.com/drive/1lbjO0-nITa5c8RXfagsIZDqxZ_mVl_2k?usp=sharing…). Better support for different ways of specifying API keys, contributed by @camjuu.
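A common pattern for "different ways of specifying API keys" is to accept a key passed explicitly and fall back to an environment variable. A minimal sketch of that pattern, assuming a hypothetical `MYSERVICE_API_KEY` variable name (not a LangChain constant):

```python
# Sketch of flexible API-key resolution: explicit argument first,
# then an environment variable, else a clear error. The variable
# name MYSERVICE_API_KEY is an assumption for illustration.
import os

def resolve_api_key(explicit_key=None, env_var="MYSERVICE_API_KEY"):
    """Prefer an explicitly passed key; otherwise read the environment."""
    if explicit_key:
        return explicit_key
    key = os.environ.get(env_var)
    if key:
        return key
    raise ValueError(f"No API key given: pass it directly or set {env_var}.")

# Usage: an explicit key takes priority over the environment.
os.environ["MYSERVICE_API_KEY"] = "from-env"
print(resolve_api_key())             # falls back to the environment
print(resolve_api_key("from-arg"))   # explicit argument wins
```

Failing loudly when no key is found in either place keeps misconfiguration errors close to their cause.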
-
Smaller Models with Better Data Can Outperform Larger Ones
Great point. We are seeing more and more that smaller models with better objectives or data can beat big ones! My main point is that an approach shouldn't go away as models get better. Scale is just one way of getting better.
-
NLLB-200 Achieves Superior Translation Quality Across All Languages
Across all languages, NLLB-200 is seeing the best results for translations modified <10%, compared to all other MT services on the platform, a strong signal for the quality of the translations being generated. 4/5
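The "<10% modified" signal can be sketched as follows: count a translation as lightly post-edited when the human-edited version differs from the machine output by less than 10%. The helper names are hypothetical, and `difflib`'s similarity ratio is used here as an assumed stand-in for whatever edit-distance metric the platform actually applies.

```python
# Sketch of the "<10% modified" metric: estimate how much of each
# machine translation was changed by a human post-editor, then
# compute the share of translations changed by less than 10%.
# difflib's ratio is an assumed proxy for the real edit metric.
from difflib import SequenceMatcher

def modification_rate(machine: str, edited: str) -> float:
    """Fraction of the text changed, estimated as 1 - similarity ratio."""
    return 1.0 - SequenceMatcher(None, machine, edited).ratio()

def share_lightly_edited(pairs, threshold=0.10):
    """Share of (machine, edited) pairs modified by less than `threshold`."""
    light = sum(1 for m, e in pairs if modification_rate(m, e) < threshold)
    return light / len(pairs)

pairs = [
    ("the cat sat on the mat", "the cat sat on the mat"),    # unchanged
    ("the cat sat on the mat.", "the cat sat on the mat"),   # tiny edit
    ("the cat sat on the mat", "dogs run fast in parks"),    # rewritten
]
print(share_lightly_edited(pairs))
```

A higher share of lightly edited translations suggests post-editors found little to fix, which is the quality signal the thread describes.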
-
Recommendation to Follow Le Hou’s Research on Large Language Models
People interested in large LMs should follow Le Hou (@Hou_Le) at @GoogleAI, who made a new Twitter account recently. Le has done great work such as Flan and self-play for reasoning (https://arxiv.org/abs/2210.11610). I'm sure we'll see more great work from him 🙂
-
Improving Fine-Tuning Capabilities for Better AI Customization
We still have a lot to figure out, but we definitely want to let people do more and better fine-tuning.
-
Latest AI Models Now Available in API
Nothing to announce there yet, but the latest models available in our API are pretty good! Check them out and let us know what you think.