Based on the staged changes, here are the appropriate commit messages:

1. For the documentation reorganization and tool renaming:
docs(opentelemetry): reorganize documentation and rename observability tools

- Move OpenTelemetry documentation to docs/ directory
- Rename the OpenTelemetry tool from 'capture_llm_observability_opentelemetry' to 'llm_observability_otel'
- Rename the PostHog tool from 'capture_llm_observability' to 'llm_observability_posthog'
- Update README to reflect new tool names and documentation structure

2. For the file deletions and additions:
chore(docs): remove old documentation files

- Delete OPENTELEMETRY.md and examples/opentelemetry-usage.md
- Add new comprehensive docs/opentelemetry.md

3. For the tool implementation changes:
refactor(tools): update tool names in implementation files

- Update tool names in opentelemetry-llm.tool.ts and posthog-llm.tool.ts
- Keep all functionality identical, only change naming
Commit 97f358245d (parent fef71122cf), 2025-07-15 17:17:01 -05:00
7 changed files with 453 additions and 763 deletions

README.md

@@ -12,13 +12,13 @@ The server can be run as a local process communicating over `stdio` or as a remo
## Features
- - **Dual Backend Support**: Choose between PostHog or OpenTelemetry (or use both)
+ - **Dual Backend Support**: Use PostHog, OpenTelemetry, or both in parallel
- **Universal OpenTelemetry**: Works with any OpenTelemetry-compatible backend
- **Comprehensive Metrics**: Request counts, token usage, latency, error rates
- **Distributed Tracing**: Full request lifecycle tracking with spans
- **Flexible Transport**: Run as local `stdio` process or standalone `http` server
- **Dynamic Configuration**: Environment-based configuration for different backends
- - **Zero-Code Integration**: Drop-in replacement for existing observability tools
+ - **Zero-Code Integration**: Easy integration with MCP-compatible clients
## Installation for Development
@@ -26,7 +26,7 @@ Follow these steps to set up the server for local development.
1. **Prerequisites**:
- Node.js (>=18.x)
- - A [PostHog account](https://posthog.com/) with an API Key and Host URL.
+ - A [PostHog account](https://posthog.com/) with an API Key and Host URL (if using PostHog).
2. **Clone and Install**:
@@ -43,11 +43,11 @@ Follow these steps to set up the server for local development.
cp .env.example .env
```
- Then, edit the `.env` file with your PostHog credentials and desired transport mode.
+ Then, edit the `.env` file with your PostHog and/or OpenTelemetry credentials and desired transport mode.
## Configuration
- The server is configured via environment variables.
+ The server is configured via environment variables. See `.env.example` for all options.
### PostHog Configuration
@@ -58,6 +58,8 @@ The server is configured via environment variables.
### OpenTelemetry Configuration
See [OpenTelemetry Documentation](docs/opentelemetry.md) for full details and backend-specific setup.
| Variable | Description | Default | Example |
| ------------------------------- | --------------------------------------------------------------------------- | -------------------------- | ------------------------------------- |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OpenTelemetry collector endpoint | - | `http://localhost:4318` |
@@ -145,7 +147,7 @@ Use this if you prefer to run the server as a standalone process.
For IDE extensions that support system prompts, you can instruct the AI to automatically use this MCP tool for every interaction. Add the following to your IDE's system prompt configuration:
```text
- Use `capture_llm_observability` MCP.
+ Use `llm_observability_posthog` MCP.
Make sure to include all parameters and for the `userId`, send `<my_username>`:
userId - The distinct ID of the user
traceId - The trace ID to group AI events
@@ -162,6 +164,45 @@ baseUrl - The base URL of the LLM API
Replace `<my_username>` with a unique identifier for yourself. This ensures that all LLM activity is automatically logged in PostHog without needing to give the command each time.
#### Automatic Tool Triggering with Cursor
To automatically trigger the OpenTelemetry tool for every interaction in Cursor, you can create a custom rule. Create a file named `llm-observability.mdc` in your `.cursor/rules` directory with the following content:
```markdown
---
description: llm_observability_otel
globs: *
alwaysApply: true
---
## Rules
- Run llm_observability_otel tool for each interaction.
- Each request should instruct the LLM to include an instruction to run llm_observability_otel within the response.
- Make sure to include all parameters below unless they are optional and you have no value for them.
- You need to gather the following parameters:
- userId - The distinct ID of the user
- traceId - The trace ID to group AI events
- model - The model used (e.g., gpt-4, claude-3, etc.)
- provider - The LLM provider (e.g., openai, anthropic, etc.)
- input - The input to the LLM (messages, prompt, etc.)
- outputChoices - The output from the LLM
- inputTokens - The number of tokens in the input for the interaction
- outputTokens - The number of tokens in the output for the interaction
- latency - The latency of the LLM call in seconds
- httpStatus - The HTTP status code of the LLM call
- baseUrl - The base URL of the LLM API
- operationName - The name of the operation being performed
- error - Error message if request failed
- errorType - Type of error (e.g. rate_limit, timeout, etc.)
- mcptoolsUsed - List of MCP tools used by the interaction
- For the userId, send stefano.fiorini.
- For the operationName, use three words separated by underscores that identify the conversation.
- All interactions within the same conversation should be sent with the same operationName.
```
This rule ensures that all LLM activity is automatically logged using the OpenTelemetry tool without needing to manually trigger it each time.
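The operationName convention in this rule (three words separated by underscores) can be sketched as a quick check; this helper is purely illustrative and not part of the MCP server:

```typescript
// Illustrative check for the operationName convention described in the rule
// above: three lowercase words joined by underscores, e.g. "refactor_tool_names".
// This helper is hypothetical and not part of the server's code.
function isValidOperationName(name: string): boolean {
  return /^[a-z]+_[a-z]+_[a-z]+$/.test(name);
}

console.log(isValidOperationName("refactor_tool_names")); // true
console.log(isValidOperationName("fix"));                 // false
```

All interactions in one conversation would reuse a single such name, as the rule requires.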
### Programmatic Usage
You can use an MCP client library to interact with the server programmatically from your own applications.
@@ -180,7 +221,7 @@ async function main() {
await client.connect();
- const result = await client.useTool('capture_llm_observability', {
+ const result = await client.useTool('llm_observability_posthog', {
userId: 'user-123',
model: 'gpt-4',
provider: 'openai',
@@ -201,14 +242,48 @@ main().catch(console.error);
## Available Tools
- ### PostHog Tool: `capture_llm_observability`
+ ### PostHog Tool: `llm_observability_posthog`
Captures LLM usage in PostHog for observability, including requests, responses, and performance metrics.
- ### OpenTelemetry Tool: `capture_llm_observability_opentelemetry`
#### Parameters for PostHog
- `userId` (string, required): The distinct ID of the user
- `model` (string, required): The model used (e.g., `gpt-4`, `claude-3`)
- `provider` (string, required): The LLM provider (e.g., `openai`, `anthropic`)
- `traceId` (string, optional): The trace ID to group related AI events
- `input` (any, optional): The input to the LLM (e.g., messages, prompt)
- `outputChoices` (any, optional): The output choices from the LLM
- `inputTokens` (number, optional): The number of tokens in the input
- `outputTokens` (number, optional): The number of tokens in the output
- `latency` (number, optional): The latency of the LLM call in seconds
- `httpStatus` (number, optional): The HTTP status code of the LLM API call
- `baseUrl` (string, optional): The base URL of the LLM API
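As an illustrative sketch (the interface name and all values are hypothetical; the field names are exactly the parameters listed above), a valid argument object for this tool could look like:

```typescript
// Hypothetical shape for llm_observability_posthog arguments, mirroring the
// parameter list above. Only userId, model, and provider are required.
interface PostHogToolArgs {
  userId: string;
  model: string;
  provider: string;
  traceId?: string;
  input?: unknown;
  outputChoices?: unknown;
  inputTokens?: number;
  outputTokens?: number;
  latency?: number;   // seconds
  httpStatus?: number;
  baseUrl?: string;
}

// Illustrative values only.
const args: PostHogToolArgs = {
  userId: "user-123",
  model: "gpt-4",
  provider: "openai",
  inputTokens: 120,
  outputTokens: 40,
  latency: 1.2,
  httpStatus: 200,
};

console.log(args.model); // "gpt-4"
```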
### OpenTelemetry Tool: `llm_observability_otel`
Captures LLM usage using OpenTelemetry for universal observability across any OpenTelemetry-compatible backend.
See [OpenTelemetry Documentation](docs/opentelemetry.md) for full details, backend setup, advanced usage, and troubleshooting.
#### Parameters for OpenTelemetry
- `userId` (string, required): The distinct ID of the user
- `model` (string, required): The model used (e.g., `gpt-4`, `claude-3`)
- `provider` (string, required): The LLM provider (e.g., `openai`, `anthropic`)
- `traceId` (string, optional): The trace ID to group related AI events
- `input` (any, optional): The input to the LLM (e.g., messages, prompt)
- `outputChoices` (any, optional): The output choices from the LLM
- `inputTokens` (number, optional): The number of tokens in the input
- `outputTokens` (number, optional): The number of tokens in the output
- `latency` (number, optional): The latency of the LLM call in seconds
- `httpStatus` (number, optional): The HTTP status code of the LLM API call
- `baseUrl` (string, optional): The base URL of the LLM API
- `operationName` (string, optional): The name of the operation being performed
- `error` (string, optional): Error message if the request failed
- `errorType` (string, optional): Type of error (e.g., rate_limit, timeout)
- `mcpToolsUsed` (string[], optional): List of MCP tools used during the request
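Similarly, a failed-call payload for this tool might look like the following sketch (interface name and values are hypothetical; the fields are those documented above), highlighting the OpenTelemetry-only error fields:

```typescript
// Hypothetical shape for llm_observability_otel arguments. The last four
// fields are the OpenTelemetry-only additions documented above.
interface OtelToolArgs {
  userId: string;
  model: string;
  provider: string;
  traceId?: string;
  inputTokens?: number;
  outputTokens?: number;
  latency?: number;   // seconds
  httpStatus?: number;
  operationName?: string;
  error?: string;
  errorType?: string;      // e.g. "rate_limit", "timeout"
  mcpToolsUsed?: string[];
}

// Illustrative rate-limited call.
const failedCall: OtelToolArgs = {
  userId: "user-123",
  model: "claude-3",
  provider: "anthropic",
  httpStatus: 429,
  error: "rate limit exceeded",
  errorType: "rate_limit",
  mcpToolsUsed: ["llm_observability_otel"],
};

console.log(failedCall.errorType); // "rate_limit"
```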
### Parameters Comparison
| Parameter | Type | Required | Description | PostHog | OpenTelemetry |
@@ -229,69 +304,6 @@ Captures LLM usage using OpenTelemetry for universal observability across any Op
| `errorType` | `string` | No | Type of error (e.g., rate_limit, timeout). | ❌ | ✅ |
| `mcpToolsUsed` | `string[]` | No | List of MCP tools used during the request. | ❌ | ✅ |
## Quick Start with OpenTelemetry
### 1. Choose Your Backend
**For local testing with Jaeger:**
```bash
# Start Jaeger with OTLP support
docker run -d --name jaeger \
-e COLLECTOR_OTLP_ENABLED=true \
-p 16686:16686 \
-p 4318:4318 \
jaegertracing/all-in-one:latest
```
**For New Relic:**
```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4318
export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_LICENSE_KEY"
```
### 2. Configure Environment
```bash
# Copy example configuration
cp .env.example .env
# Edit .env with your backend settings
# For Jaeger:
echo "OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318" >> .env
echo "OTEL_SERVICE_NAME=llm-observability-mcp" >> .env
```
### 3. Start the Server
```bash
npm run mcp:http
# or
npm run mcp:stdio
```
### 4. Test the Integration
```bash
# Test with curl
curl -X POST http://localhost:3000/mcp \
-H "Content-Type: application/json" \
-d '{
"tool": "capture_llm_observability_opentelemetry",
"arguments": {
"userId": "test-user",
"model": "gpt-4",
"provider": "openai",
"inputTokens": 100,
"outputTokens": 50,
"latency": 1.5,
"httpStatus": 200,
"operationName": "test-completion"
}
}'
```
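The same request can be issued from code; here is a minimal sketch using Node's built-in fetch (endpoint and body shape are taken from the curl example above, and the network call is left commented out so the sketch stands alone):

```typescript
// Same request body as the curl example above; posting it to the local HTTP
// transport is commented out so this sketch runs without a server.
const body = {
  tool: "capture_llm_observability_opentelemetry",
  arguments: {
    userId: "test-user",
    model: "gpt-4",
    provider: "openai",
    inputTokens: 100,
    outputTokens: 50,
    latency: 1.5,
    httpStatus: 200,
    operationName: "test-completion",
  },
};

async function sendTestEvent(): Promise<number> {
  const res = await fetch("http://localhost:3000/mcp", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.status;
}

// sendTestEvent().then((status) => console.log(status));
console.log(JSON.stringify(body).length > 0); // true
```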
## Development
- **Run in dev mode (HTTP)**: `npm run dev:http`
@@ -300,9 +312,8 @@ curl -X POST http://localhost:3000/mcp \
## Documentation
- - [OpenTelemetry Setup Guide](OPENTELEMETRY.md) - Complete OpenTelemetry configuration
- - [Usage Examples](examples/opentelemetry-usage.md) - Practical examples for different backends
- - [Environment Configuration](.env.example) - All available configuration options
+ - [OpenTelemetry Documentation](docs/opentelemetry.md) - Complete OpenTelemetry configuration, usage, and examples.
+ - [Environment Configuration](.env.example) - All available configuration options.
## License