# LLM Observability MCP Server
A Model Context Protocol (MCP) server that provides comprehensive LLM observability tools supporting both PostHog and OpenTelemetry backends.
## Overview
This project is an MCP server designed to track and observe Large Language Model (LLM) interactions using both PostHog's LLM Observability and OpenTelemetry for universal observability across any backend that supports OpenTelemetry (Jaeger, New Relic, Grafana, Datadog, Honeycomb, etc.).
The server can be run as a local process communicating over `stdio` or as a remote HTTP server, making it compatible with any MCP client, such as AI-powered IDEs (e.g., VS Code with an MCP extension, Cursor) or custom applications.
## Features
- Dual Backend Support: Use PostHog, OpenTelemetry, or both in parallel
- Universal OpenTelemetry: Works with any OpenTelemetry-compatible backend
- Comprehensive Metrics: Request counts, token usage, latency, error rates
- Distributed Tracing: Full request lifecycle tracking with spans
- Flexible Transport: Run as a local `stdio` process or as a standalone `http` server
- Dynamic Configuration: Environment-based configuration for different backends
- Zero-Code Integration: Easy integration with MCP-compatible clients
## Installation for Development
Follow these steps to set up the server for local development.
- Prerequisites:
  - Node.js (>=18.x)
  - A PostHog account with an API Key and Host URL (if using PostHog).

- Clone and Install:

  ```bash
  git clone https://github.com/sfiorini/llm-observability-mcp.git
  cd llm-observability-mcp
  npm install
  ```

- Configuration: Create a `.env` file in the root of the project by copying the example file:

  ```bash
  cp .env.example .env
  ```

  Then, edit the `.env` file with your PostHog and/or OpenTelemetry credentials and desired transport mode.
## Configuration
The server is configured via environment variables. See `.env.example` for all options.
### PostHog Configuration

| Variable | Description | Default | Example |
|---|---|---|---|
| `POSTHOG_API_KEY` | Your PostHog Project API Key (required for the PostHog tool) | - | `phc_...` |
| `POSTHOG_HOST` | The URL of your PostHog instance | - | `https://us.i.posthog.com` |
### OpenTelemetry Configuration
See OpenTelemetry Documentation for full details and backend-specific setup.
| Variable | Description | Default | Example |
|---|---|---|---|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OpenTelemetry collector endpoint | - | `http://localhost:4318` |
| `OTEL_EXPORTER_OTLP_HEADERS` | Headers for authentication (comma-separated key=value pairs) | - | `api-key=YOUR_KEY` |
| `OTEL_SERVICE_NAME` | Service name for traces and metrics | `llm-observability-mcp` | `my-llm-app` |
| `OTEL_SERVICE_VERSION` | Service version | `1.0.0` | `2.1.0` |
| `OTEL_ENVIRONMENT` | Environment name | `development` | `production` |
| `OTEL_TRACES_SAMPLER_ARG` | Sampling ratio (0.0-1.0) | `1.0` | `0.1` |
| `OTEL_METRIC_EXPORT_INTERVAL` | Metrics export interval in milliseconds | `10000` | `30000` |
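As a quick illustration of authenticated export, a hosted backend is usually configured through the endpoint and headers variables above; for example, for Honeycomb (check your backend's documentation for its exact endpoint and header name):

```env
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=YOUR_API_KEY
```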
### General Configuration

| Variable | Description | Default | Example |
|---|---|---|---|
| `TRANSPORT_MODE` | The transport protocol to use. Can be `http` or `stdio`. | `http` | `stdio` |
| `DEBUG` | Set to `true` to enable detailed debug logging. | `false` | `true` |
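Putting it together, a minimal `.env` that enables both backends might look like this (values are placeholders; include only the variables for the backend(s) you use):

```env
# Transport and logging
TRANSPORT_MODE=http
DEBUG=false

# PostHog backend
POSTHOG_API_KEY=phc_your_project_api_key
POSTHOG_HOST=https://us.i.posthog.com

# OpenTelemetry backend
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_SERVICE_NAME=llm-observability-mcp
OTEL_ENVIRONMENT=development
OTEL_TRACES_SAMPLER_ARG=1.0
```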
## Running the Server
You can run the server in two modes:
- HTTP Mode: Runs a web server, typically for remote clients or IDEs like Cursor.

  ```bash
  npm run mcp:http
  ```

  The server will start on `http://localhost:3000`.

- STDIO Mode: Runs as a command-line process, ideal for local IDE integration where the IDE manages the process lifecycle.

  ```bash
  npm run mcp:stdio
  ```
## Usage

### Connecting to an IDE (VS Code, Cursor, etc.)
You can integrate this tool with any MCP-compatible IDE. Add one of the following configurations to your IDE's MCP settings (e.g., in `.vscode/settings.json` for VS Code or `.kilocode/mcp.json` for a global setup).
#### Option 1: Local Stdio Process (Recommended)
This method lets the IDE manage the server as a local background process. It's efficient and doesn't require a separate terminal.
```json
{
  "mcpServers": {
    "llm-observability-mcp-stdio": {
      "command": "node",
      "args": [
        "/path/to/your/projects/llm-log-mcp-server/dist/index.js"
      ],
      "env": {
        "TRANSPORT_MODE": "stdio",
        "POSTHOG_API_KEY": "phc_...",
        "POSTHOG_HOST": "https://us.i.posthog.com"
      }
    }
  }
}
```
Note: Replace `/path/to/your/projects/llm-log-mcp-server` with the absolute path to this project directory.
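The same shape works when you want the stdio server to export to an OpenTelemetry backend instead of (or alongside) PostHog; a sketch that points at a local OTLP collector, using the variables from the configuration tables above (path and endpoint are placeholders):

```json
{
  "mcpServers": {
    "llm-observability-mcp-stdio": {
      "command": "node",
      "args": [
        "/path/to/your/projects/llm-log-mcp-server/dist/index.js"
      ],
      "env": {
        "TRANSPORT_MODE": "stdio",
        "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4318",
        "OTEL_SERVICE_NAME": "llm-observability-mcp",
        "OTEL_ENVIRONMENT": "development"
      }
    }
  }
}
```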
#### Option 2: Remote HTTP Server
Use this if you prefer to run the server as a standalone process.
- Run the server in a terminal:

  ```bash
  npm run mcp:http
  ```

- Add the server URL to your IDE's configuration:

  ```json
  {
    "mcpServers": {
      "llm-observability-mcp-sse": {
        "url": "http://localhost:3000/sse"
      }
    }
  }
  ```
### Automatic Triggering via System Prompt
For IDE extensions that support system prompts, you can instruct the AI to automatically use this MCP tool for every interaction. Add the following to your IDE's system prompt configuration:
```
Use `llm_observability_posthog` MCP.
Make sure to include all parameters and for the `userId`, send `<my_username>`:

userId - The distinct ID of the user
traceId - The trace ID to group AI events
model - The model used (e.g., gpt-4, claude-3, etc.)
provider - The LLM provider (e.g., openai, anthropic, etc.)
input - The input to the LLM (messages, prompt, etc.)
outputChoices - The output from the LLM
inputTokens - The number of tokens in the input
outputTokens - The number of tokens in the output
latency - The latency of the LLM call in seconds
httpStatus - The HTTP status code of the LLM call
baseUrl - The base URL of the LLM API
```
Replace `<my_username>` with a unique identifier for yourself. This ensures that all LLM activity is automatically logged in PostHog without needing to give the command each time.
### Automatic Tool Triggering with Cursor
To automatically trigger the OpenTelemetry tool for every interaction in Cursor, you can create a custom rule. Create a file named `llm-observability.mdc` in your `.cursor/rules` directory with the following content:
```
---
description: llm_observability_otel
globs: *
alwaysApply: true
---

## Rules
SYSTEM PROMPT:
- When the user asks anything, output a llm_observability_otel tool invocation in valid MCP JSON.
- Repeat the tool invocation until it succeeds.
- Make sure to include all parameters below unless they are optional and you have no value for them.
- You need to gather the following parameters:
  - userId - The distinct ID of the user
  - traceId - The trace ID to group AI events
  - model - The model used (e.g., gpt-4, claude-3, etc.)
  - provider - The LLM provider (e.g., openai, anthropic, etc.)
  - input - The input to the LLM (messages, prompt, etc.)
  - outputChoices - The output from the LLM
  - inputTokens - The number of tokens in the input
  - outputTokens - The number of tokens in the output
  - latency - The latency of the LLM call in seconds
  - httpStatus - The HTTP status code of the LLM call
  - baseUrl - The base URL of the LLM API
  - operationName - The name of the operation being performed
  - error - Error message if the request failed
  - errorType - Type of error (e.g. rate_limit, timeout, etc.)
  - mcpToolsUsed - List of MCP tools used by the interaction
- For the userId, send stefano.fiorini.
- For outputChoices, send the full response text from the LLM.
- For the operationName, send three words separated by underscores identifying the conversation.
- All interactions within the same conversation should be sent with the same operationName.
```
This rule ensures that all LLM activity is automatically logged using the OpenTelemetry tool without needing to manually trigger it each time.
### Programmatic Usage
You can use an MCP client library to interact with the server programmatically from your own applications.
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Assumes the MCP server is running in HTTP mode on http://localhost:3000
  const transport = new StreamableHTTPClientTransport(
    new URL("http://localhost:3000/mcp")
  );
  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  const result = await client.callTool({
    name: "llm_observability_posthog",
    arguments: {
      userId: "user-123",
      model: "gpt-4",
      provider: "openai",
      input: "What is the capital of France?",
      outputChoices: [{ text: "Paris." }],
      inputTokens: 8,
      outputTokens: 2,
      latency: 0.5,
    },
  });

  console.log("Tool result:", result);
  await client.close();
}

main().catch(console.error);
```
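If you prefer stdio mode, the client can spawn the server process itself. A minimal sketch using the SDK's stdio transport (the path and environment values are placeholders, matching the IDE configuration shown earlier):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server as a child process and communicate over stdio.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["/path/to/your/projects/llm-log-mcp-server/dist/index.js"],
    env: {
      TRANSPORT_MODE: "stdio",
      POSTHOG_API_KEY: "phc_...",
      POSTHOG_HOST: "https://us.i.posthog.com",
    },
  });
  const client = new Client({ name: "example-stdio-client", version: "1.0.0" });
  await client.connect(transport);

  // Call tools exactly as in the HTTP example above.
  await client.close();
}

main().catch(console.error);
```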
## Available Tools

### PostHog Tool: `llm_observability_posthog`
Captures LLM usage in PostHog for observability, including requests, responses, and performance metrics.
#### Parameters for PostHog
- `userId` (string, required): The distinct ID of the user
- `model` (string, required): The model used (e.g., `gpt-4`, `claude-3`)
- `provider` (string, required): The LLM provider (e.g., `openai`, `anthropic`)
- `traceId` (string, optional): The trace ID to group related AI events
- `input` (any, optional): The input to the LLM (e.g., messages, prompt)
- `outputChoices` (any, optional): The output choices from the LLM
- `inputTokens` (number, optional): The number of tokens in the input
- `outputTokens` (number, optional): The number of tokens in the output
- `latency` (number, optional): The latency of the LLM call in seconds
- `httpStatus` (number, optional): The HTTP status code of the LLM API call
- `baseUrl` (string, optional): The base URL of the LLM API
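For reference, a full set of arguments for a successful call might look like this (all values are illustrative):

```json
{
  "userId": "user-123",
  "traceId": "trace-abc-001",
  "model": "gpt-4",
  "provider": "openai",
  "input": "What is the capital of France?",
  "outputChoices": [{ "text": "Paris." }],
  "inputTokens": 8,
  "outputTokens": 2,
  "latency": 0.5,
  "httpStatus": 200,
  "baseUrl": "https://api.openai.com/v1"
}
```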
### OpenTelemetry Tool: `llm_observability_otel`
Captures LLM usage using OpenTelemetry for universal observability across any OpenTelemetry-compatible backend.
See OpenTelemetry Documentation for full details, backend setup, advanced usage, and troubleshooting.
#### Parameters for OpenTelemetry
- `userId` (string, required): The distinct ID of the user
- `model` (string, required): The model used (e.g., `gpt-4`, `claude-3`)
- `provider` (string, required): The LLM provider (e.g., `openai`, `anthropic`)
- `traceId` (string, optional): The trace ID to group related AI events
- `input` (any, optional): The input to the LLM (e.g., messages, prompt)
- `outputChoices` (any, optional): The output choices from the LLM
- `inputTokens` (number, optional): The number of tokens in the input
- `outputTokens` (number, optional): The number of tokens in the output
- `latency` (number, optional): The latency of the LLM call in seconds
- `httpStatus` (number, optional): The HTTP status code of the LLM API call
- `baseUrl` (string, optional): The base URL of the LLM API
- `operationName` (string, optional): The name of the operation being performed
- `error` (string, optional): Error message if the request failed
- `errorType` (string, optional): Type of error (e.g., `rate_limit`, `timeout`)
- `mcpToolsUsed` (string[], optional): List of MCP tools used during the request
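As an illustration, a call that records a failed request could pass arguments like the following (all values are illustrative):

```json
{
  "userId": "user-123",
  "traceId": "trace-abc-002",
  "model": "gpt-4",
  "provider": "openai",
  "input": "Summarize this document.",
  "inputTokens": 1500,
  "latency": 30.0,
  "httpStatus": 429,
  "baseUrl": "https://api.openai.com/v1",
  "operationName": "document_summary_request",
  "error": "Rate limit exceeded",
  "errorType": "rate_limit",
  "mcpToolsUsed": ["llm_observability_otel"]
}
```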
### Parameters Comparison

| Parameter | Type | Required | Description | PostHog | OpenTelemetry |
|---|---|---|---|---|---|
| `userId` | `string` | Yes | The distinct ID of the user. | ✅ | ✅ |
| `model` | `string` | Yes | The model used (e.g., `gpt-4`, `claude-3`). | ✅ | ✅ |
| `provider` | `string` | Yes | The LLM provider (e.g., `openai`, `anthropic`). | ✅ | ✅ |
| `traceId` | `string` | No | The trace ID to group related AI events. | ✅ | ✅ |
| `input` | `any` | No | The input to the LLM (e.g., messages, prompt). | ✅ | ✅ |
| `outputChoices` | `any` | No | The output choices from the LLM. | ✅ | ✅ |
| `inputTokens` | `number` | No | The number of tokens in the input. | ✅ | ✅ |
| `outputTokens` | `number` | No | The number of tokens in the output. | ✅ | ✅ |
| `latency` | `number` | No | The latency of the LLM call in seconds. | ✅ | ✅ |
| `httpStatus` | `number` | No | The HTTP status code of the LLM API call. | ✅ | ✅ |
| `baseUrl` | `string` | No | The base URL of the LLM API. | ✅ | ✅ |
| `operationName` | `string` | No | The name of the operation being performed. | ❌ | ✅ |
| `error` | `string` | No | Error message if the request failed. | ❌ | ✅ |
| `errorType` | `string` | No | Type of error (e.g., rate_limit, timeout). | ❌ | ✅ |
| `mcpToolsUsed` | `string[]` | No | List of MCP tools used during the request. | ❌ | ✅ |
## Development

- Run in dev mode (HTTP): `npm run dev:http`
- Run tests: `npm test`
- Lint and format: `npm run lint` and `npm run format`
## Documentation
- OpenTelemetry Documentation - Complete OpenTelemetry configuration, usage, and examples.
- Environment Configuration - All available configuration options.