# LLM Observability MCP for PostHog

[MIT License](https://opensource.org/licenses/MIT)

A Model Context Protocol (MCP) server that provides a tool to capture LLM Observability events and send them to PostHog.
## Overview

This project is an MCP server designed to track and observe Large Language Model (LLM) interactions using [PostHog's LLM Observability](https://posthog.com/docs/llm-observability) features. It captures detailed information about LLM requests, responses, performance, and costs, giving you insight into your AI-powered applications.

The server can run as a local process communicating over `stdio` or as a remote `http` server, making it compatible with any MCP client, such as AI-powered IDEs (e.g., VS Code with an MCP extension, Cursor) or custom applications.
## Features

- **Capture LLM Metrics**: Log key details of LLM interactions, including model, provider, latency, token counts, and more.
- **Flexible Transport**: Run as a local `stdio` process for tight IDE integration or as a standalone `http` server for remote access.
- **Dynamic Configuration**: Configure the server entirely through environment variables.
- **Easy Integration**: Connect from MCP-compatible IDEs, or use an MCP client library in any TypeScript/JavaScript application.
## Installation for Development

Follow these steps to set up the server for local development.

1. **Prerequisites**:

   - Node.js (>= 18.x)
   - A [PostHog account](https://posthog.com/) with a Project API Key and Host URL.

2. **Clone and Install**:

   ```bash
   git clone https://github.com/sfiorini/llm-observability-mcp.git
   cd llm-observability-mcp
   npm install
   ```

3. **Configuration**:

   Create a `.env` file in the root of the project by copying the example file:

   ```bash
   cp .env.example .env
   ```

   Then edit the `.env` file with your PostHog credentials and desired transport mode.
## Configuration

The server is configured via environment variables.

| Variable          | Description                                       | Default | Example                    |
| ----------------- | ------------------------------------------------- | ------- | -------------------------- |
| `POSTHOG_API_KEY` | **Required.** Your PostHog Project API Key.       | -       | `phc_...`                  |
| `POSTHOG_HOST`    | **Required.** The URL of your PostHog instance.   | -       | `https://us.i.posthog.com` |
| `TRANSPORT_MODE`  | The transport protocol to use: `http` or `stdio`. | `http`  | `stdio`                    |
| `DEBUG`           | Set to `true` to enable detailed debug logging.   | `false` | `true`                     |
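Assuming `.env.example` mirrors the table above, a minimal `.env` for local development might look like this (the key and host values are placeholders; use your own project's credentials):

```bash
# PostHog credentials (placeholders; use your own project's values)
POSTHOG_API_KEY=phc_your_project_api_key
POSTHOG_HOST=https://us.i.posthog.com

# Run over stdio for IDE-managed processes, or http for a standalone server
TRANSPORT_MODE=stdio

# Optional: verbose logging
DEBUG=false
```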
## Running the Server

You can run the server in two modes:

- **HTTP Mode**: Runs a web server, typically for remote clients or IDEs like Cursor.

  ```bash
  npm run mcp:http
  ```

  The server will start on `http://localhost:3000`.

- **STDIO Mode**: Runs as a command-line process, ideal for local IDE integration where the IDE manages the process lifecycle.

  ```bash
  npm run mcp:stdio
  ```
## Usage

### Connecting to an IDE (VS Code, Cursor, etc.)

You can integrate this tool with any MCP-compatible IDE. Add one of the following configurations to your IDE's MCP settings (e.g., in `.vscode/settings.json` for VS Code or `.kilocode/mcp.json` for a global setup).
#### Option 1: Local Stdio Process (Recommended)

This method lets the IDE manage the server as a local background process. It's efficient and doesn't require a separate terminal.

```json
{
  "mcpServers": {
    "llm-observability-mcp-stdio": {
      "command": "node",
      "args": [
        "/path/to/your/projects/llm-log-mcp-server/dist/index.js"
      ],
      "env": {
        "TRANSPORT_MODE": "stdio",
        "POSTHOG_API_KEY": "phc_...",
        "POSTHOG_HOST": "https://us.i.posthog.com"
      }
    }
  }
}
```

**Note**: Replace `/path/to/your/projects/llm-log-mcp-server` with the absolute path to this project directory.
#### Option 2: Remote HTTP Server

Use this if you prefer to run the server as a standalone process.

1. Run the server in a terminal: `npm run mcp:http`
2. Add the server URL to your IDE's configuration:

   ```json
   {
     "mcpServers": {
       "llm-observability-mcp-sse": {
         "url": "http://localhost:3000/sse"
       }
     }
   }
   ```
#### Automatic Triggering via System Prompt

For IDE extensions that support system prompts, you can instruct the AI to automatically use this MCP tool for every interaction. Add the following to your IDE's system prompt configuration:

```text
Use `capture_llm_observability` MCP.
Make sure to include all parameters and for the `userId`, send `<my_username>`:
userId - The distinct ID of the user
traceId - The trace ID to group AI events
model - The model used (e.g., gpt-4, claude-3, etc.)
provider - The LLM provider (e.g., openai, anthropic, etc.)
input - The input to the LLM (messages, prompt, etc.)
outputChoices - The output from the LLM
inputTokens - The number of tokens in the input
outputTokens - The number of tokens in the output
latency - The latency of the LLM call in seconds
httpStatus - The HTTP status code of the LLM call
baseUrl - The base URL of the LLM API
```

Replace `<my_username>` with a unique identifier for yourself. This ensures that all LLM activity is automatically logged in PostHog without you having to issue the command each time.
### Programmatic Usage

You can use the MCP client SDK to interact with the server programmatically from your own applications. The import paths and transport class below follow the official `@modelcontextprotocol/sdk` layout; adjust them to match your SDK version.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  // Assumes the MCP server is running in HTTP mode on localhost:3000
  const client = new Client({ name: "example-client", version: "1.0.0" });
  const transport = new StreamableHTTPClientTransport(
    new URL("http://localhost:3000/mcp")
  );
  await client.connect(transport);

  const result = await client.callTool({
    name: "capture_llm_observability",
    arguments: {
      userId: "user-123",
      model: "gpt-4",
      provider: "openai",
      input: "What is the capital of France?",
      outputChoices: [{ text: "Paris." }],
      inputTokens: 8,
      outputTokens: 2,
      latency: 0.5,
    },
  });

  console.log("Tool result:", result);

  await client.close();
}

main().catch(console.error);
```
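In practice, the tool's arguments can usually be derived from the provider's raw response. The following is a minimal sketch that maps an OpenAI-style chat completion object to this tool's parameter names; the `ChatCompletionLike` interface and `toObservabilityArgs` helper are illustrative, not part of this project.

```typescript
// Minimal shape of an OpenAI-style chat completion response (illustrative only)
interface ChatCompletionLike {
  model: string;
  choices: { message: { content: string } }[];
  usage: { prompt_tokens: number; completion_tokens: number };
}

// Build the arguments object for `capture_llm_observability` from a raw
// provider response plus timestamps measured around the API call.
function toObservabilityArgs(
  userId: string,
  input: string,
  response: ChatCompletionLike,
  startedAtMs: number,
  finishedAtMs: number
) {
  return {
    userId,
    model: response.model,
    provider: "openai",
    input,
    outputChoices: response.choices.map((c) => ({ text: c.message.content })),
    inputTokens: response.usage.prompt_tokens,
    outputTokens: response.usage.completion_tokens,
    latency: (finishedAtMs - startedAtMs) / 1000, // tool expects seconds
  };
}

// Example with a canned response:
const args = toObservabilityArgs(
  "user-123",
  "What is the capital of France?",
  {
    model: "gpt-4",
    choices: [{ message: { content: "Paris." } }],
    usage: { prompt_tokens: 8, completion_tokens: 2 },
  },
  1000,
  1500
);
console.log(args.latency); // 0.5
```

The resulting object can be passed directly as the `arguments` of the `callTool` call shown above.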
## Tool Reference: `capture_llm_observability`

This is the core tool provided by the server. It captures LLM usage in PostHog for observability, including requests, responses, and performance metrics.

### Parameters

| Parameter       | Type     | Required | Description                                     |
| --------------- | -------- | -------- | ----------------------------------------------- |
| `userId`        | `string` | Yes      | The distinct ID of the user.                    |
| `model`         | `string` | Yes      | The model used (e.g., `gpt-4`, `claude-3`).     |
| `provider`      | `string` | Yes      | The LLM provider (e.g., `openai`, `anthropic`). |
| `traceId`       | `string` | No       | The trace ID to group related AI events.        |
| `input`         | `any`    | No       | The input to the LLM (e.g., messages, prompt).  |
| `outputChoices` | `any`    | No       | The output choices from the LLM.                |
| `inputTokens`   | `number` | No       | The number of tokens in the input.              |
| `outputTokens`  | `number` | No       | The number of tokens in the output.             |
| `latency`       | `number` | No       | The latency of the LLM call in seconds.         |
| `httpStatus`    | `number` | No       | The HTTP status code of the LLM API call.       |
| `baseUrl`       | `string` | No       | The base URL of the LLM API.                    |
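For reference, a complete arguments object exercising every parameter might look like this (all values are illustrative):

```json
{
  "userId": "user-123",
  "traceId": "trace-abc-001",
  "model": "gpt-4",
  "provider": "openai",
  "input": [{ "role": "user", "content": "What is the capital of France?" }],
  "outputChoices": [{ "text": "Paris." }],
  "inputTokens": 8,
  "outputTokens": 2,
  "latency": 0.5,
  "httpStatus": 200,
  "baseUrl": "https://api.openai.com/v1"
}
```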
## Development

- **Run in dev mode (HTTP)**: `npm run dev:http`
- **Run tests**: `npm test`
- **Lint and format**: `npm run lint` and `npm run format`
## License

[MIT License](https://opensource.org/licenses/MIT)