Conversational Retrieval Agents: The most popular chain in @langchain is the ConversationalRetrievalChain, which lets you chat with your data. Using an agent instead allows for greater flexibility, and it's a narrow and well-defined enough agent that it's fairly reliable.
@hwchase17
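The core idea is that the agent decides per turn whether to call a retrieval tool, instead of retrieving on every turn the way ConversationalRetrievalChain does. Below is a minimal plain-Python sketch of that loop; the retriever, the `fake_llm_decide` stub, and all names here are illustrative stand-ins, not LangChain's actual API.

```python
def retrieve(query, docs):
    """Toy retriever: return docs that share a word with the query."""
    words = set(query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def fake_llm_decide(question):
    """Stub for the LLM's tool-use decision: only retrieve when the
    question looks like it needs external knowledge (illustrative)."""
    return "retrieve" if "langchain" in question.lower() else "answer"

def agent_turn(question, docs, history):
    """One agent step: optionally retrieve, then answer, then record
    both sides of the exchange in the conversation history."""
    history.append(("user", question))
    if fake_llm_decide(question) == "retrieve":
        context = retrieve(question, docs)
        answer = f"Based on {len(context)} doc(s)"
    else:
        answer = "No retrieval needed for that."
    history.append(("assistant", answer))
    return answer

docs = ["LangChain is a framework for LLM apps"]
history = []
print(agent_turn("what is langchain", docs, history))  # retrieves
print(agent_turn("hi there", docs, history))           # skips retrieval
```

Because retrieval is a tool the agent may or may not use, small talk doesn't trigger a pointless vector-store lookup, which is part of what makes the agent version more flexible.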
-
Conversational Retrieval Agents: Advanced QA Systems
I'll dive into details in this thread, but quick links:
Blog: https://blog.langchain.dev/conversational-retrieval-agents/
Python Docs: https://python.langchain.com/docs/use_cases/question_answering/how_to/conversational_retrieval_agents
JS Docs: https://js.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents
-
LangChain New Syntax Overview Live Discussion
A good overview of some samples of the new LangChain syntax! As a reminder, we're going live in ~25 minutes with the one and only @nfcampos to discuss the motivation, the interface, and some examples: https://crowdcast.io/c/ckw1tydg29er
-
Easier Custom Chain Creation with Internal Tool
We've been using it internally for a bit, and it makes creating custom chains much easier.
-
Novel LLM Project Combines Language Models with External Data
This was a really cool project! A very impressive and novel way to combine LLMs with external data.
-
Memory Management Outside Chain Architecture
Memory: Right now, memory is managed outside the chain, which makes it a bit more work to set up but also easier to understand what's going on.
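A minimal sketch of what "memory outside the chain" means in practice: the caller owns the message history, passes it in on every call, and appends the new turn afterwards, while the chain itself stays stateless. `fake_chain` is an illustrative stand-in for a real LLM chain.

```python
def fake_chain(history, user_input):
    """Stateless 'chain': it sees only the history it is handed."""
    return f"Turn {len(history) // 2 + 1}, latest: {user_input!r}"

history = []  # memory lives out here, not inside the chain

def run_with_memory(user_input):
    reply = fake_chain(history, user_input)
    # the caller, not the chain, is responsible for updating memory
    history.append(("human", user_input))
    history.append(("ai", reply))
    return reply

print(run_with_memory("hello"))  # the chain sees an empty history
print(run_with_memory("again"))  # the chain sees the first exchange
```

The extra setup is exactly those two `append` calls, but the flow of state is fully explicit, which is the trade-off described above.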
-
SQL Database Interaction for AI Applications
SQL: You can interact with SQL databases, both to generate SQL queries and to actually run them.
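The two interactions above can be sketched with `sqlite3` so the example runs standalone: step one turns a question into a SQL query, step two executes it. `fake_llm_to_sql` is a placeholder for the model call, not a real LLM.

```python
import sqlite3

# in-memory database with a tiny sample table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ada", 36), ("grace", 45)])

def fake_llm_to_sql(question):
    """Stand-in for the 'generate SQL from a question' step."""
    if "oldest" in question:
        return "SELECT name FROM users ORDER BY age DESC LIMIT 1"
    return "SELECT COUNT(*) FROM users"

def ask(question):
    sql = fake_llm_to_sql(question)       # step 1: generate the query
    return conn.execute(sql).fetchall()   # step 2: actually run it

print(ask("who is the oldest user?"))  # [('grace',)]
print(ask("how many users are there?"))  # [(2,)]
```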
-
Execute Python Code Directly from LLM Output
Python REPL: You can pipe the output of an LLM call into a Python REPL to run that code.
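A minimal sketch of that pipe: a stubbed "LLM" emits code as a string, and a tiny REPL `exec`s it and captures stdout. The names are illustrative, not the library's actual tool; and never `exec` untrusted model output without proper sandboxing.

```python
import io
import contextlib

def fake_llm(prompt):
    """Stub LLM that 'writes' Python code for a request."""
    return "print(sum(range(1, 11)))"

def run_python(code):
    """Tiny REPL: exec the code and return whatever it printed."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # fresh globals; no real sandboxing here
    return buf.getvalue().strip()

code = fake_llm("add the numbers 1 through 10")
print(run_python(code))  # 55
```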
-
Routing Between Downstream Chains for AI Systems
Router: You can route between various downstream chains depending on the output of a previous one.
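A plain-Python sketch of that routing pattern: a classifier step (stubbed here, where an LLM would normally decide) inspects the previous output and picks which downstream "chain" handles it. All names are illustrative, not LangChain's API.

```python
def math_chain(text):
    return "math: " + text

def general_chain(text):
    return "general: " + text

def classify(text):
    """Stub for the routing decision an LLM would make."""
    return "math" if any(ch.isdigit() for ch in text) else "general"

# map each route name to its downstream chain
routes = {"math": math_chain, "general": general_chain}

def route(text):
    return routes[classify(text)](text)

print(route("what is 2 + 2"))   # goes to math_chain
print(route("tell me a joke"))  # goes to general_chain
```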
-
LangChain Tools Integration for LLM Output Piping
Using Tools: All tools in LangChain are also easily usable in this syntax, which makes it easy to pipe output from an LLM call into a tool.
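To show the shape of that piping, here is a toy `|` operator that composes steps left to right, loosely mimicking the pipe syntax described above. The `Runnable` class and both steps are illustrative stand-ins, not the real library types.

```python
class Runnable:
    """Toy composable step: `a | b` runs a, then feeds its result to b."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# stub "LLM" that extracts a search query from the user's request
llm = Runnable(lambda prompt: prompt.replace("search for ", ""))
# stub "tool" that pretends to run a web search
search_tool = Runnable(lambda query: f"results for {query!r}")

chain = llm | search_tool
print(chain.invoke("search for langchain docs"))
```

The payoff of this style is that an LLM step and a tool step share one interface, so piping one into the other is a single operator.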