
Trubrics is a user analytics platform for LLMs. It enables AI teams and product managers to better understand how their users interact with AI models, and to identify ways to improve their product. Specifically, analysing user feedback and user prompts enhances:
- Observability – AI bugs & user preferences can be identified.
- Evaluation – LLMs can be validated directly by users rather than with offline metrics.
- Personalisation – LLMs can be fine-tuned, or prompt engineering can be improved.
LangChain is used by developers to build complex LLM-based applications. With the `TrubricsCallbackHandler`, developers can now log their user prompts and model generations from LangChain directly to Trubrics.
## Setup
First, install the trubrics-sdk Python package:

```shell
pip install trubrics
```
If you don’t already have a Trubrics account, create one for free here. Once you have created your account, set the `TRUBRICS_EMAIL` and `TRUBRICS_PASSWORD` environment variables. This example also uses OpenAI, so set your `OPENAI_API_KEY` too:

```shell
export TRUBRICS_EMAIL='***@***'
export TRUBRICS_PASSWORD='***'
export OPENAI_API_KEY='sk-***'
```
You are now all set to start logging data to Trubrics!
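Before making any calls, it can help to confirm the environment variables are actually visible to your Python process. The helper below is purely illustrative (it is not part of the Trubrics SDK): it checks for the three variables used in this guide.

```python
import os

# Names of the environment variables this guide relies on.
REQUIRED_VARS = ["TRUBRICS_EMAIL", "TRUBRICS_PASSWORD", "OPENAI_API_KEY"]

def missing_env_vars(names):
    """Return the names of any environment variables that are unset or empty."""
    return [name for name in names if not os.environ.get(name)]

# Report anything that still needs to be exported before logging to Trubrics.
missing = missing_env_vars(REQUIRED_VARS)
print("Missing environment variables:", missing or "none")
```

If anything is reported as missing, export it in your shell and restart your Python session.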
## Log data from LangChain chat models
Chat models in LangChain allow users to converse with an LLM. Let’s see how we can log all our inputs and outputs to our chat model to Trubrics.
Having set up your environment, we can import the `TrubricsCallbackHandler` from the LangChain library:

```python
from langchain.callbacks import TrubricsCallbackHandler

trubrics_callback = TrubricsCallbackHandler()
```
Note: upon creating an account with Trubrics, a `default` project is created. If not specified in the `TrubricsCallbackHandler`, all data will be saved to this `default` project.
For our example, we will feed some more parameters into the callback, such as tags, a `user_id`, and some metadata:
```python
from langchain.callbacks import TrubricsCallbackHandler

trubrics_callback = TrubricsCallbackHandler(
    project="default",
    tags=["chat model"],
    user_id="user-id-1234",
    some_metadata={"hello": [1, 2]},
)
```
Note: all kwargs are parsed by the callback and set as Trubrics prompt variables. Any extra kwargs will be added to the `metadata` variable. See the possible variables for a Trubrics prompt here.
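To make the behaviour described in the note concrete, here is a minimal sketch of how kwargs could be split into recognised prompt fields versus extra metadata. This is an illustration only, not the actual `TrubricsCallbackHandler` source, and the set of recognised fields is an assumption for the example.

```python
# Assumed subset of recognised Trubrics prompt fields (illustrative only).
KNOWN_PROMPT_FIELDS = {"project", "tags", "user_id"}

def split_kwargs(**kwargs):
    """Separate recognised prompt fields from extra kwargs bound for `metadata`."""
    fields = {k: v for k, v in kwargs.items() if k in KNOWN_PROMPT_FIELDS}
    metadata = {k: v for k, v in kwargs.items() if k not in KNOWN_PROMPT_FIELDS}
    return fields, metadata

fields, metadata = split_kwargs(
    project="default",
    tags=["chat model"],
    user_id="user-id-1234",
    some_metadata={"hello": [1, 2]},
)
# `some_metadata` is not a recognised field, so it ends up under `metadata`.
print(fields)
print(metadata)
```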
Now let’s write a basic LangChain call to OpenAI, saving the user prompt and model generation to Trubrics via the callback:
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat_llm = ChatOpenAI(callbacks=[trubrics_callback])
chat_res = chat_llm(
    [
        SystemMessage(content="Every answer of yours must be about OpenAI."),
        HumanMessage(content="Tell me a joke"),
    ]
)
print(chat_res.content)
```
That’s it! You can now head over to the prompts page in Trubrics to analyse your user prompts. The callback can also be used for regular LLM calls in LangChain. Check out the official integration docs.
For more information about Trubrics, such as saving user feedback from your LLM applications, head over to our docs.
If you would like to learn more about user analytics for LLMs, or need help implementing Trubrics within your organisation, please feel free to get in touch via LinkedIn or email.