Most of the LangSmith cookbook examples have focused on Python so far, but we're going to start adding a lot of them for JS developers! First up: logging feedback in your app.
@hwchase17
Fallbacks Now Work for All Chains Implementation
Fallbacks also work for all chains! Gist showing that here: https://gist.github.com/hwchase17/26eca9f20031c349bd72fdc669c05a1f
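The idea in code: a minimal sketch in plain Python of running a primary chain and falling back to an alternative when it raises. This is not LangChain's actual API; the wrapper name, toy chains, and blanket error handling here are illustrative assumptions.

```python
from typing import Callable, Sequence

def with_fallbacks(chains: Sequence[Callable[[str], str]]) -> Callable[[str], str]:
    """Return a callable that tries each chain in order until one succeeds."""
    def run(query: str) -> str:
        errors = []
        for chain in chains:
            try:
                return chain(query)
            except Exception as exc:  # in practice, catch provider-specific errors
                errors.append(exc)
        raise RuntimeError(f"all {len(chains)} chains failed: {errors}")
    return run

# Toy chains standing in for real LLM chains
def flaky_chain(query: str) -> str:
    raise TimeoutError("primary model unavailable")

def backup_chain(query: str) -> str:
    return f"backup answer to: {query}"

safe_chain = with_fallbacks([flaky_chain, backup_chain])
print(safe_chain("what is LangChain?"))  # backup answer to: what is LangChain?
```

The point is that the fallback wraps the whole chain, so anything downstream of the failing step is retried with the alternative.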
RAGAS Framework for RAG Pipeline Evaluation Webinar
LangChain "RAG Evaluation" Webinar: RAGAS is an open-source evaluation framework for your Retrieval Augmented Generation (RAG) pipelines. I'm VERY excited to be doing a webinar with them next week! RAGAS Repo: https://github.com/explodinggradients/ragas
Webinar:
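RAGAS scores RAG pipelines along dimensions such as faithfulness: is the answer actually grounded in the retrieved context? As a rough illustration of that idea only (this is not the ragas API; real RAGAS metrics use an LLM judge to check whether each statement in the answer is supported, not token overlap), here is a toy faithfulness score:

```python
def toy_faithfulness(answer: str, contexts: list[str]) -> float:
    """Fraction of answer tokens that also appear in the retrieved contexts.

    Only a sketch of the concept: RAGAS decomposes the answer into
    statements and asks an LLM whether each one is supported.
    """
    context_tokens = set(" ".join(contexts).lower().split())
    answer_tokens = [t for t in answer.lower().split() if t.isalpha()]
    if not answer_tokens:
        return 0.0
    supported = sum(t in context_tokens for t in answer_tokens)
    return supported / len(answer_tokens)

contexts = ["langchain is a framework for building llm applications"]
print(toy_faithfulness("langchain is a framework", contexts))        # 1.0
print(toy_faithfulness("langchain was founded on mars", contexts))   # 0.2
```

A low score flags answers that drift away from what the retriever actually returned, which is exactly the failure mode RAG evaluation is after.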
LangChain Chat Updates New Pull Request Integration
always was: https://github.com/langchain-ai/chat-langchain/pull/110
User Feedback Integration for LangSmith Apps with Streamlit
Feedback is important for improving your apps! Previously we only had examples of logging thumbs up/thumbs down to LangSmith. We've now added examples of user corrections and comments using @streamlit. S/o @OhSynap
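A minimal sketch of the pattern, in plain Python with an in-memory store. The class and field names are illustrative assumptions; the real examples send each record to LangSmith against a run ID, and scores, corrections, and comments are all just different feedback payloads.

```python
import uuid
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Feedback:
    run_id: str
    key: str                          # e.g. "user_score", "correction", "comment"
    score: Optional[float] = None     # thumbs up/down as 1.0 / 0.0
    comment: Optional[str] = None
    correction: Optional[str] = None

@dataclass
class FeedbackStore:
    """Stand-in for a feedback backend such as LangSmith."""
    records: List[Feedback] = field(default_factory=list)

    def log(self, **kwargs) -> Feedback:
        fb = Feedback(**kwargs)
        self.records.append(fb)
        return fb

store = FeedbackStore()
run_id = str(uuid.uuid4())
store.log(run_id=run_id, key="user_score", score=0.0)          # thumbs down
store.log(run_id=run_id, key="correction",
          correction="The capital of Australia is Canberra.")  # user's fix
store.log(run_id=run_id, key="comment", comment="Too verbose.")
print(len(store.records))  # 3
```

Keying everything by run ID is what lets you later join the feedback back onto the exact trace that produced the answer.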
LLMs for Structured Data Extraction and Web Automation
An underrated aspect of LLMs is using them for structured data extraction. Extracting knowledge triplets is a great use case! I gave it our most recent blog post about @MultiON_AI (https://blog.langchain.dev/multion-x-langchain-powering-next-gen-web-automation-navigation-with-ai/) and it came up with the below. How did it do, @DivGarg9?
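The usual recipe: ask the model to emit (subject, relation, object) triplets as JSON and parse them into a typed structure. A hedged sketch follows; the schema is illustrative, and the `llm_output` below is a canned stand-in rather than a real model response.

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    subject: str
    relation: str
    object: str

def parse_triplets(llm_output: str) -> list:
    """Parse a JSON array of {subject, relation, object} dicts from the LLM."""
    return [Triplet(**item) for item in json.loads(llm_output)]

# Canned stand-in for what a model might return for a blog post like the above
llm_output = json.dumps([
    {"subject": "MultiON", "relation": "integrates_with", "object": "LangChain"},
    {"subject": "MultiON", "relation": "is_a", "object": "web automation agent"},
])

for t in parse_triplets(llm_output):
    print(f"({t.subject}) -[{t.relation}]-> ({t.object})")
```

Parsing into a frozen dataclass gives you deduplicable, hashable triplets, which is handy when merging extractions from many documents into one knowledge graph.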
ChatLangChain Improvements: Benchmarking Retrieval and Agent Methods
ChatLangChain Improvements: We're benchmarking a bunch of retrieval and agent methods for our "chat langchain" app! Interact with the new beta version here: https://chat-langchain-mcan-stanfordedu.vercel.app
Expect rapid improvements, and everything (data, retrieval algorithms, prompts) to be OSS.
LLM Fallbacks: Chain-Level Prompt Switching Strategy
Fallbacks: importantly, not just at the LLM level, but at the chain level. Why does this matter? Different LLMs need different prompts. If you hit a rate-limit error on OAI and want to switch to Anthropic, you don't just want to retry with the same prompt: you want to switch the prompt as well.
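Sketching that point in plain Python (the provider stand-ins, error type, and prompt templates are illustrative assumptions, not any real SDK): each fallback pairs a model with the prompt written for it, so a rate-limit on the first provider switches both together.

```python
class RateLimitError(Exception):
    pass

def openai_call(prompt: str) -> str:       # stand-in for the OpenAI API
    raise RateLimitError("429: rate limited")

def anthropic_call(prompt: str) -> str:    # stand-in for the Anthropic API
    return f"claude answered: {prompt!r}"

# Each entry is a *chain*: a model together with the prompt written for it.
CHAINS = [
    (openai_call,    "You are a helpful assistant.\nQ: {question}\nA:"),
    (anthropic_call, "\n\nHuman: {question}\n\nAssistant:"),
]

def run_with_fallbacks(question: str) -> str:
    for model, template in CHAINS:
        try:
            return model(template.format(question=question))
        except RateLimitError:
            continue  # switch to the next (model, prompt) pair
    raise RuntimeError("all providers rate-limited")

print(run_with_fallbacks("What is a fallback?"))
```

Falling back on the bare model call instead would have re-sent the OpenAI-style prompt to Anthropic, which is exactly the mistake chain-level fallbacks avoid.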
System Message Optimization for AI Function Agents
I would probably tell it explicitly in the system message what you just wrote: "if you do not find a person in the prof, search in founder", or something like that. I would also use the OPENAI_FUNCTIONS agent.
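For instance, a hedged sketch of baking that rule into the system message, with a toy dispatcher showing the behaviour the instruction asks for. The table names and lookup function are illustrative assumptions; in practice the system message would be passed to a LangChain OPENAI_FUNCTIONS agent and the model would decide which tool to call.

```python
SYSTEM_MESSAGE = (
    "You are a lookup assistant. "
    "If you do not find a person in the profile table, "
    "search the founder table before answering."
)

# Toy data standing in for the agent's two search tools
PROFILES = {"alice": "Alice, engineer"}
FOUNDERS = {"harrison": "Harrison, founder of LangChain"}

def lookup(name: str) -> str:
    """Follow the system-message rule: profiles first, then founders."""
    key = name.lower()
    if key in PROFILES:
        return PROFILES[key]
    if key in FOUNDERS:            # fall through, as the system message says
        return FOUNDERS[key]
    return "not found"

print(lookup("Harrison"))  # Harrison, founder of LangChain
```

Spelling the fallback order out in the system message, rather than hoping the model infers it, is the whole suggestion here.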