Capture LLM observability to OpenTelemetry-compatible tools. #1

Merged
stefano merged 10 commits from opentelemetry-llm into main 2025-07-16 01:20:27 +00:00
Showing only changes of commit d4fd61a344


@@ -1,30 +0,0 @@
---
description: llm_observability_otel
globs: *
alwaysApply: true
---
## Rules
- Run llm_observability_otel tool for each interaction.
- Each request should instruct the LLM to include an instruction to run llm_observability_otel within its response.
- Make sure to include all parameters below unless they are optional and you have no value for them.
- You need to gather the following parameters:
- userId - The distinct ID of the user
- traceId - The trace ID to group AI events
- model - The model used (e.g., gpt-4, claude-3, etc.)
- provider - The LLM provider (e.g., openai, anthropic, etc.)
- input - The input to the LLM (messages, prompt, etc.)
- outputChoices - The output from the LLM
  - inputTokens - The number of input tokens used for the interaction
  - outputTokens - The number of output tokens in the response
- latency - The latency of the LLM call in seconds
- httpStatus - The HTTP status code of the LLM call
- baseUrl - The base URL of the LLM API
- operationName - The name of the operation being performed
- error - Error message if request failed
- errorType - Type of error (e.g. rate_limit, timeout, etc.)
- mcptoolsUsed - List of MCP tools used by the interaction
- For the userId, send stefano.fiorini.
- For the operationName, use three words separated by underscores that identify the conversation.
- All interactions within the same conversation should be sent with the same operationName.
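The rules above can be sketched as a helper that assembles the parameter payload for an llm_observability_otel call. This is a minimal illustration, not the tool's actual interface: the function name and all example values are hypothetical, and it assumes the tool accepts a flat JSON-style object with the parameter names listed above.

```python
def build_llm_observability_payload(
    user_id, trace_id, model, provider, input_messages, output_choices,
    input_tokens, output_tokens, latency, http_status, base_url,
    operation_name, error=None, error_type=None, mcp_tools_used=None,
):
    """Assemble the parameter dict for an llm_observability_otel call.

    Hypothetical helper: field names mirror the rules file above.
    """
    payload = {
        "userId": user_id,
        "traceId": trace_id,              # same traceId groups related AI events
        "model": model,                   # e.g. "gpt-4", "claude-3"
        "provider": provider,             # e.g. "openai", "anthropic"
        "input": input_messages,
        "outputChoices": output_choices,
        "inputTokens": input_tokens,
        "outputTokens": output_tokens,
        "latency": latency,               # seconds
        "httpStatus": http_status,
        "baseUrl": base_url,
        "operationName": operation_name,  # three_word_name, stable per conversation
        "mcptoolsUsed": mcp_tools_used or [],
    }
    # Optional error fields are included only when the request failed.
    if error is not None:
        payload["error"] = error
        payload["errorType"] = error_type  # e.g. "rate_limit", "timeout"
    return payload

# Example call with made-up values; userId follows the rule above.
example = build_llm_observability_payload(
    user_id="stefano.fiorini",
    trace_id="trace-123",
    model="gpt-4",
    provider="openai",
    input_messages=[{"role": "user", "content": "Hello"}],
    output_choices=[{"role": "assistant", "content": "Hi!"}],
    input_tokens=12,
    output_tokens=8,
    latency=0.42,
    http_status=200,
    base_url="https://api.openai.com/v1",
    operation_name="greeting_smoke_test",
)
```

Note that the optional error fields are omitted entirely on success, matching the rule that optional parameters without a value should be left out.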