# OpenTelemetry LLM Observability Examples

This document provides practical examples of using the OpenTelemetry LLM observability tool with various backends.
## Example 1: Basic Jaeger Setup

### 1. Start Jaeger

```bash
# Start Jaeger with OTLP support
docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```
### 2. Configure Environment

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=llm-observability-mcp
export OTEL_ENVIRONMENT=development
```
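A misconfigured endpoint fails silently (spans are simply dropped), so it can be worth failing fast before launching the server. The helper below is a hypothetical convenience, not part of the MCP server:

```shell
# Hypothetical helper: fail fast if the OTLP endpoint is unset or
# does not look like an http(s) URL.
check_otel_env() {
  if [ -z "$OTEL_EXPORTER_OTLP_ENDPOINT" ]; then
    echo "OTEL_EXPORTER_OTLP_ENDPOINT is not set" >&2
    return 1
  fi
  case "$OTEL_EXPORTER_OTLP_ENDPOINT" in
    http://*|https://*)
      echo "OTLP endpoint: $OTEL_EXPORTER_OTLP_ENDPOINT"
      ;;
    *)
      echo "OTLP endpoint is not an http(s) URL" >&2
      return 1
      ;;
  esac
}
```

Call `check_otel_env` in your startup script before launching the server.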
### 3. Start MCP Server

```bash
npm run mcp:stdio
```
### 4. Test with Claude Desktop

Add to your Claude Desktop configuration:

```json
{
  "mcpServers": {
    "llm-observability": {
      "command": "node",
      "args": ["/path/to/llm-observability-mcp/dist/index.js"],
      "env": {
        "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4318",
        "OTEL_SERVICE_NAME": "llm-observability-mcp",
        "OTEL_ENVIRONMENT": "development"
      }
    }
  }
}
```
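A stray comma is the most common reason a server never appears in Claude Desktop, so it can help to validate the file before restarting. The file name below is illustrative (use your platform's actual config path); `python3` is used purely as a convenient JSON parser:

```shell
# Check that the Claude Desktop config parses as JSON.
python3 -m json.tool claude_desktop_config.json > /dev/null \
  && echo "config OK" \
  || echo "config has a JSON syntax error"
```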
### 5. View Traces

Open <http://localhost:16686> to see your traces.
## Example 2: New Relic Integration

### 1. Get Your License Key

From New Relic: Account Settings > API Keys > License Key

### 2. Configure Environment

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4318
export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_LICENSE_KEY"
export OTEL_SERVICE_NAME=llm-observability-mcp
export OTEL_ENVIRONMENT=production
```
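`OTEL_EXPORTER_OTLP_HEADERS` takes a comma-separated list of `key=value` pairs, so several headers can be packed into the one variable. A quick way to preview what the exporter will send (the `x-example` header is a made-up second entry for illustration):

```shell
# Split the comma-separated header list into one header per line.
export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_LICENSE_KEY,x-example=demo"
printf '%s' "$OTEL_EXPORTER_OTLP_HEADERS" | tr ',' '\n'
```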
### 3. Usage Example

```json
{
  "tool": "capture_llm_observability_opentelemetry",
  "arguments": {
    "userId": "user-12345",
    "model": "gpt-4",
    "provider": "openai",
    "inputTokens": 150,
    "outputTokens": 75,
    "latency": 2.3,
    "httpStatus": 200,
    "operationName": "chat-completion",
    "traceId": "trace-abc123",
    "input": "What is the weather like today?",
    "outputChoices": ["The weather is sunny and 75°F today."]
  }
}
```
## Example 3: Grafana Cloud

### 1. Get Your Credentials

From Grafana Cloud: Connections > Data Sources > OpenTelemetry

### 2. Configure Environment

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-prod-us-central-0.grafana.net/otlp
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic $(echo -n YOUR_INSTANCE_ID:YOUR_API_KEY | base64)"
export OTEL_SERVICE_NAME=llm-observability-mcp
```
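The Basic auth value is simply the Base64 encoding of `INSTANCE_ID:API_KEY`. With made-up, non-real credentials (`12345` / `token`), the command substitution above expands like this:

```shell
# Build the Basic auth header from illustrative credentials.
creds=$(printf '%s' "12345:token" | base64)
echo "Authorization=Basic $creds"
# prints: Authorization=Basic MTIzNDU6dG9rZW4=
```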
### 3. Docker Compose Setup

```yaml
# docker-compose.yml
version: '3.8'
services:
  llm-observability:
    build: .
    environment:
      - OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-prod-us-central-0.grafana.net/otlp
      - OTEL_EXPORTER_OTLP_HEADERS=Authorization=Basic YOUR_BASE64_ENCODED_CREDENTIALS
      - OTEL_SERVICE_NAME=llm-observability-mcp
    ports:
      - "3000:3000"
```
## Example 4: Honeycomb

### 1. Get Your API Key

From Honeycomb: Account Settings > API Keys

### 2. Configure Environment

```bash
# Use the base URL; OTLP/HTTP exporters append the per-signal path
# (e.g. /v1/traces) automatically.
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
export OTEL_SERVICE_NAME=llm-observability-mcp
export OTEL_ENVIRONMENT=production
```
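Because OTLP/HTTP exporters derive per-signal URLs from the base endpoint (`/v1/traces`, `/v1/metrics`, `/v1/logs`), setting the base to a URL that already ends in `/v1/traces` would double the path. A quick sketch of the URL the trace exporter will actually hit:

```shell
# Per-signal URL derivation for OTLP/HTTP (trailing slash stripped).
base="https://api.honeycomb.io"
echo "${base%/}/v1/traces"
# prints: https://api.honeycomb.io/v1/traces
```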
## Example 5: Datadog

### 1. Get Your API Key

From Datadog: Organization Settings > API Keys

### 2. Configure Environment

Datadog does not accept OTLP at its public metrics API (`/api/v2/series` is the metrics submission endpoint, not an OTLP receiver). Instead, route telemetry through a Datadog Agent with OTLP ingest enabled (`otlp_config.receiver.protocols.http` in the Agent config), or through an OpenTelemetry Collector using the Datadog exporter. Your API key is configured on the Agent via `DD_API_KEY`, not on the exporter. With an Agent listening locally:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=llm-observability-mcp
```
## Example 6: Production Configuration

### Environment Variables

```bash
# Service Configuration
export OTEL_SERVICE_NAME=llm-observability-mcp
export OTEL_SERVICE_VERSION=1.2.3
export OTEL_ENVIRONMENT=production

# Sampling (10% of traces). The ARG has no effect unless a
# ratio-based sampler is also selected.
export OTEL_TRACES_SAMPLER=parentbased_traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.1

# Export Configuration (values in milliseconds)
export OTEL_METRIC_EXPORT_INTERVAL=30000
export OTEL_METRIC_EXPORT_TIMEOUT=10000

# Backend Configuration
export OTEL_EXPORTER_OTLP_ENDPOINT=https://your-backend.com:4318
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your-token,Custom-Header=value"
```
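With `parentbased_traceidratio` at `0.1`, roughly one in ten root traces is recorded (children follow their parent's decision). A back-of-envelope estimate of export volume, using an illustrative request count:

```shell
# Estimate exported traces/day at a 10% sampling ratio.
requests_per_day=1000000
sampled=$(awk -v n="$requests_per_day" 'BEGIN { printf "%d", n * 0.1 }')
echo "$sampled traces/day exported"
# prints: 100000 traces/day exported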
### Kubernetes Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-observability-mcp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: llm-observability-mcp
  template:
    metadata:
      labels:
        app: llm-observability-mcp
    spec:
      containers:
        - name: llm-observability-mcp
          image: llm-observability-mcp:latest
          ports:
            - containerPort: 3000
          env:
            - name: OTEL_SERVICE_NAME
              value: "llm-observability-mcp"
            - name: OTEL_SERVICE_VERSION
              value: "1.2.3"
            - name: OTEL_ENVIRONMENT
              value: "production"
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "https://your-backend.com:4318"
            - name: OTEL_EXPORTER_OTLP_HEADERS
              valueFrom:
                secretKeyRef:
                  name: otel-credentials
                  key: headers
```
## Example 7: Error Handling and Monitoring

### Error Tracking

```json
{
  "tool": "capture_llm_observability_opentelemetry",
  "arguments": {
    "userId": "user-12345",
    "model": "gpt-4",
    "provider": "openai",
    "httpStatus": 429,
    "error": "Rate limit exceeded",
    "errorType": "rate_limit",
    "latency": 0.1,
    "operationName": "chat-completion"
  }
}
```
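Error events like the one above can be aggregated into rates on the backend. For example, with 12 rate-limit errors out of 480 calls (numbers illustrative), the arithmetic is:

```shell
# Compute an error-rate percentage from illustrative counts.
awk 'BEGIN { printf "rate-limit error rate: %.1f%%\n", 100 * 12 / 480 }'
# prints: rate-limit error rate: 2.5%
```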
### Multi-Tool Usage Tracking

```json
{
  "tool": "capture_llm_observability_opentelemetry",
  "arguments": {
    "userId": "user-12345",
    "model": "gpt-4",
    "provider": "openai",
    "inputTokens": 500,
    "outputTokens": 200,
    "latency": 5.2,
    "httpStatus": 200,
    "operationName": "complex-workflow",
    "mcpToolsUsed": ["file_read", "web_search", "code_execution"],
    "traceId": "complex-workflow-123"
  }
}
```
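Because token counts and latency travel together on the event, derived throughput metrics are straightforward to compute at query time. For the call above (200 output tokens in 5.2 s):

```shell
# Output-token throughput for the example event above.
awk 'BEGIN { printf "%.1f tokens/s\n", 200 / 5.2 }'
# prints: 38.5 tokens/s
```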
## Example 8: Testing Script

### Test Script

```bash
#!/bin/bash
# test-opentelemetry.sh

# Start Jaeger
echo "Starting Jaeger..."
docker run -d --name jaeger-test \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest

# Wait for Jaeger to start
sleep 5

# Configure environment
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=llm-observability-test
export OTEL_ENVIRONMENT=test

# Start MCP server in background
echo "Starting MCP server..."
npm run mcp:stdio &
MCP_PID=$!

# Wait for server to start
sleep 3

# Test the tool
echo "Testing OpenTelemetry tool..."
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "capture_llm_observability_opentelemetry",
    "arguments": {
      "userId": "test-user",
      "model": "gpt-4",
      "provider": "openai",
      "inputTokens": 100,
      "outputTokens": 50,
      "latency": 1.5,
      "httpStatus": 200,
      "operationName": "test-completion"
    }
  }'

echo "Test complete. View traces at http://localhost:16686"

# Stop the background server; remove the Jaeger container later with:
#   docker rm -f jaeger-test
kill "$MCP_PID" 2>/dev/null
```
## Example 9: Integration with Existing Tools

### Gradual Migration from PostHog

You can run both tools side by side during migration.

PostHog (existing):

```json
{
  "tool": "capture_llm_observability",
  "arguments": {
    "userId": "user-123",
    "model": "gpt-4",
    "provider": "openai"
  }
}
```

OpenTelemetry (new):

```json
{
  "tool": "capture_llm_observability_opentelemetry",
  "arguments": {
    "userId": "user-123",
    "model": "gpt-4",
    "provider": "openai"
  }
}
```
## Troubleshooting Examples

### Debug Mode

```bash
export DEBUG=true
npm run mcp:stdio
```

### Check Configuration

```bash
# Test connectivity to the OTLP endpoint
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[]}'
```

### Verify Environment

```bash
# Check environment variables
env | grep OTEL
```
## Performance Tuning

### High-Volume Configuration

```bash
# Reduce sampling to 1% for high-volume services
export OTEL_TRACES_SAMPLER=parentbased_traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.01

# Increase export intervals
export OTEL_METRIC_EXPORT_INTERVAL=60000
export OTEL_METRIC_EXPORT_TIMEOUT=30000
```

### Resource Optimization

In SDKs that follow the OpenTelemetry environment-variable spec, unsetting a signal-specific endpoint only makes the exporter fall back to the general `OTEL_EXPORTER_OTLP_ENDPOINT`; to actually turn a signal off, select no exporter for it:

```bash
# Disable metrics if only traces are needed
export OTEL_METRICS_EXPORTER=none

# Disable logs if not needed
export OTEL_LOGS_EXPORTER=none
```
These examples should help you get started with OpenTelemetry LLM observability across different backends and use cases.