1. d4rkp4+ (OP) 2023-12-28 12:46:19
In Langroid, a multi-agent LLM framework from ex-CMU/UW-Madison researchers, https://github.com/langroid/langroid we (like simpleaichat from the OP) leverage Pydantic to specify the desired structured output. Under the hood, Langroid translates it into either the OpenAI function-calling params or, for LLMs that don't natively support function-calling, auto-inserts appropriate instructions into the system prompt. We call this mechanism a ToolMessage:

https://github.com/langroid/langroid/blob/main/langroid/agen...
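
To make that concrete, here's a minimal sketch of a ToolMessage subclass (the class, field names, and purpose string are hypothetical, invented for illustration; see the linked source for the real base class):

  from langroid.agent.tool_message import ToolMessage

  # Hypothetical tool spec. ToolMessage is a Pydantic model, so this one
  # class is enough for Langroid to derive either OpenAI function-calling
  # params or JSON-format instructions placed in the system prompt.
  class CityTemperature(ToolMessage):
      request: str = "city_temperature"  # tool name, as seen by the LLM
      purpose: str = "To report the <temperature> in a given <city>"
      city: str
      temperature: float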

We take this idea much further: you can define a method in a ChatAgent to "handle" the tool and attach the tool to the agent. For stateless tools, you can instead define a "handle" method in the tool itself, and it gets patched into the ChatAgent as the handler for the tool. You can also define a classmethod called "examples", which results in few-shot examples being inserted into the system message, as in the sketch below.
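
Extending the sketch above, a stateless tool can bundle its handler and few-shot examples directly in the class (again a sketch, not code from the repo):

  from langroid.agent.tool_message import ToolMessage

  class CityTemperature(ToolMessage):
      request: str = "city_temperature"
      purpose: str = "To report the <temperature> in a given <city>"
      city: str
      temperature: float

      # Stateless tool: this method gets patched into the ChatAgent as
      # the tool's handler; whatever it returns is sent back to the LLM.
      def handle(self) -> str:
          return f"Got it: {self.city} is at {self.temperature} C"

      # Instances returned here are rendered as few-shot examples in the
      # system message, showing the LLM the expected JSON structure.
      @classmethod
      def examples(cls) -> list["CityTemperature"]:
          return [cls(city="Tokyo", temperature=18.5)]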

Inevitably, an LLM will generate a wrongly formatted message or forget to use a tool entirely; Langroid's built-in task loop ensures a friendly error message is sent back to the LLM so it can regenerate the structured message.
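
Concretely, you attach the tool to an agent and wrap the agent in a Task; the retry-on-bad-format behavior then comes for free (a sketch assuming the tool class above; exact config fields may differ):

  import langroid as lr

  agent = lr.ChatAgent(lr.ChatAgentConfig(name="TempBot"))
  agent.enable_message(CityTemperature)  # agent can now use and handle the tool

  # The Task loop routes LLM output to the tool handler; if the LLM
  # produces malformed JSON or skips the tool, a corrective error
  # message is sent back and the LLM regenerates its response.
  task = lr.Task(agent, interactive=False)
  task.run("What is the temperature in Tokyo?")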

For example, here's a Colab quick-start that builds up to a 2-agent system for extracting structured info from a document, where the Extractor Agent generates questions for the RAG Agent, which has access to the document:

https://colab.research.google.com/github/langroid/langroid/b...
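
Schematically, the two-agent wiring in that notebook looks something like this (a heavily compressed sketch; the actual notebook configures the document source, models, and instructions in detail):

  import langroid as lr
  from langroid.agent.special import DocChatAgent, DocChatAgentConfig

  # RAG agent: answers questions grounded in the ingested document
  # (document ingestion/config omitted here for brevity).
  rag_agent = DocChatAgent(DocChatAgentConfig(name="RAGAgent"))

  # Extractor agent: asks questions and assembles the structured output
  # via a ToolMessage like the one sketched earlier.
  extractor = lr.ChatAgent(lr.ChatAgentConfig(name="Extractor"))

  extractor_task = lr.Task(extractor, interactive=False)
  rag_task = lr.Task(rag_agent, interactive=False)

  # The Extractor's questions are routed to the RAG task; answers flow
  # back until the Extractor emits its final structured message.
  extractor_task.add_sub_task(rag_task)
  extractor_task.run()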
