dosco/llm-client: LLMClient - A simple library to build RAG + Reasoning + Function calling Agents + LLM Proxy + Tracing + Logging

🚀 Dive into the world of large language models with LLMClient! 🤖✨ Build intelligent agents, reasoning workflows, and more using OpenAI, Azure, Google AI, and Cohere models. Simplify complex tasks and unlock a new realm of AI possibilities! #AI #LLMClient #Innovation

  • LLMClient is a library for building agents and workflows using large language models (LLMs).
  • It integrates OpenAI GPT-4, Azure OpenAI, Google AI, Cohere, and other models for reasoning, error correction, and structured data extraction.
  • It supports LLM function (API) calling, such as querying external data sources or invoking programs.
  • You can build diverse applications, from meeting-notes apps to food-finding apps, leveraging the capabilities of LLMs.
  • LLMClient includes built-in functions such as a Code Interpreter for executing code and an Embeddings Adapter for passing embeddings to functions.
  • ExtractInfoPrompt and SPrompt help extract information from text and structure model responses.
  • The library hides much of the complexity of working with LLMs behind a well-maintained, easy-to-use interface.
  • Configuration options are available for changing models, adjusting the maximum token length, and enabling debug logs.
  • LLM Proxy and Web UI offer debugging, tracing, and logging for LLM interactions.
  • A Long-Term Memory feature maintains context across conversations.
  • Vector DB support enables retrieval-augmented generation (RAG) with Weaviate and Pinecone databases.
  • LLMClient aims to support various use cases, including RAG, function calling, reasoning, and intelligent conversations.
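To make the function-calling bullet concrete, here is a minimal, self-contained sketch of the pattern: the model emits the name of a function plus JSON arguments, and the client dispatches the call and feeds the result back into the next prompt. All names here (`FunctionDef`, `dispatch`, `getWeather`) are illustrative assumptions, not LLMClient's actual API.

```typescript
// A function the LLM is allowed to call, described so the model can pick it.
type FunctionDef = {
  name: string;
  description: string;
  // Executes the call with model-supplied, already-parsed arguments.
  func: (args: Record<string, unknown>) => string;
};

const functions: FunctionDef[] = [
  {
    name: "getWeather",
    description: "Look up the current weather for a city",
    func: (args) => `Sunny in ${args.city}`, // stub for a real API call
  },
];

// Simulates the model emitting a function call as structured JSON.
const modelOutput = { name: "getWeather", args: { city: "Oslo" } };

// Looks up the requested function and runs it with the model's arguments.
function dispatch(call: { name: string; args: Record<string, unknown> }): string {
  const fn = functions.find((f) => f.name === call.name);
  if (!fn) throw new Error(`Unknown function: ${call.name}`);
  return fn.func(call.args);
}

const observation = dispatch(modelOutput);
// The observation would then be appended to the prompt for the model's next turn.
```

In a real agent loop this dispatch step repeats until the model produces a final answer instead of another function call.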
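The RAG bullet can be sketched in the same spirit: embed documents, embed the query, rank by cosine similarity, and prepend the best match to the prompt. The tiny hand-made "embeddings" below stand in for a real embedding model and for a vector database such as Weaviate or Pinecone; nothing here is LLMClient's real interface.

```typescript
type Doc = { text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const docs: Doc[] = [
  { text: "Invoices are due within 30 days.", embedding: [1, 0, 0] },
  { text: "Support is available on weekdays.", embedding: [0, 1, 0] },
];

// Pretend embedding of the query "When are invoices due?".
const queryEmbedding = [0.9, 0.1, 0];

// Retrieve the most similar document.
const best = docs.reduce((a, b) =>
  cosine(a.embedding, queryEmbedding) >= cosine(b.embedding, queryEmbedding) ? a : b
);

// The retrieved passage is injected into the prompt as grounding context.
const prompt = `Context: ${best.text}\nQuestion: When are invoices due?`;
```

A production vector DB does the same ranking over millions of vectors with an approximate nearest-neighbor index rather than a linear scan.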
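For the structured-extraction bullet, a common pattern is to ask the model for JSON matching a schema, then validate and parse the reply, re-prompting on a mismatch. The schema and the raw reply below are invented for illustration; LLMClient's ExtractInfoPrompt and SPrompt handle this formatting and parsing internally in their own way.

```typescript
// Target schema for a meeting-notes extraction.
type Meeting = { title: string; date: string; attendees: string[] };

// Pretend this string came back from the model.
const rawReply = `{"title": "Q3 planning", "date": "2023-09-14", "attendees": ["Ana", "Bo"]}`;

// Parses and shallowly validates the model's reply against the schema.
function parseMeeting(raw: string): Meeting {
  const obj = JSON.parse(raw);
  if (
    typeof obj.title !== "string" ||
    typeof obj.date !== "string" ||
    !Array.isArray(obj.attendees)
  ) {
    // A real client would feed this error back to the model and retry.
    throw new Error("Reply does not match the Meeting schema");
  }
  return obj as Meeting;
}

const meeting = parseMeeting(rawReply);
```

This validate-and-retry loop is what turns free-form model output into structured data an application can trust.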
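Finally, the long-term-memory bullet boils down to storing conversational turns and replaying recent history into the next prompt. A windowed in-memory array, as sketched below, stands in for whatever store LLMClient actually uses; the class and method names are assumptions.

```typescript
type Turn = { role: "user" | "assistant"; text: string };

// Keeps the most recent turns and renders them as prompt context.
class Memory {
  private turns: Turn[] = [];
  constructor(private maxTurns: number) {}

  add(turn: Turn): void {
    this.turns.push(turn);
    if (this.turns.length > this.maxTurns) this.turns.shift(); // drop oldest
  }

  // Renders remembered turns as context for the next request.
  render(): string {
    return this.turns.map((t) => `${t.role}: ${t.text}`).join("\n");
  }
}

const mem = new Memory(2);
mem.add({ role: "user", text: "My name is Priya." });
mem.add({ role: "assistant", text: "Nice to meet you, Priya." });
mem.add({ role: "user", text: "What is my name?" });

// Only the 2 most recent turns survive the window.
const context = mem.render();
```

Real implementations often add summarization or embedding-based recall so older context can be compressed instead of dropped.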