docs(readme): update documentation with detailed MCP usage instructions

- Remove outdated badge links from README
- Add comprehensive parameter documentation for IDE extensions
- Clarify usage instructions for automatic LLM observability capture
commit bf0f16a578 (parent 05af3880f6)
2025-07-13 21:35:36 -05:00
2 changed files with 16 additions and 5 deletions


```diff
@@ -1,8 +1,5 @@
 # LLM Observability MCP for PostHog
-[![NPM Version](https://img.shields.io/npm/v/@sfiorini/llm-observability-mcp)](https://www.npmjs.com/package/@sfiorini/llm-observability-mcp)
-[![Build Status](https://img.shields.io/github/workflow/status/sfiorini/llm-observability-mcp/CI)](https://github.com/sfiorini/llm-observability-mcp)
-[![TypeScript](https://img.shields.io/badge/TypeScript-5.0%2B-blue)](https://www.typescriptlang.org/)
 [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
 A Model Context Protocol (MCP) server that provides a tool to capture LLM Observability events and send them to PostHog.
```
````diff
@@ -125,7 +122,21 @@ Use this if you prefer to run the server as a standalone process.
 For IDE extensions that support system prompts, you can instruct the AI to automatically use this MCP tool for every interaction. Add the following to your IDE's system prompt configuration:
-"Use the `capture_llm_observability` MCP tool and send all requested data. For the `userId`, send `<my_username>`."
+```text
+Use `capture_llm_observability` MCP.
+Make sure to include all parameters and for the `userId`, send `<my_username>`:
+userId - The distinct ID of the user
+traceId - The trace ID to group AI events
+model - The model used (e.g., gpt-4, claude-3, etc.)
+provider - The LLM provider (e.g., openai, anthropic, etc.)
+input - The input to the LLM (messages, prompt, etc.)
+outputChoices - The output from the LLM
+inputTokens - The number of tokens in the input
+outputTokens - The number of tokens in the output
+latency - The latency of the LLM call in seconds
+httpStatus - The HTTP status code of the LLM call
+baseUrl - The base URL of the LLM API
+```
 Replace `<my_username>` with a unique identifier for yourself. This ensures that all LLM activity is automatically logged in PostHog without needing to give the command each time.
````
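Taken together, the parameters in the new README section describe a single capture call. A minimal sketch of the argument object an IDE-driven model might send to the tool follows; the variable name `trackArgs` and every value are illustrative assumptions, only the field names come from the list above:

```typescript
// Hypothetical argument object for the capture_llm_observability MCP tool.
// Field names mirror the documented parameter list; all values are examples.
const trackArgs = {
  userId: "alice",              // distinct ID of the user
  traceId: "trace-0001",        // groups related AI events
  model: "gpt-4",
  provider: "openai",
  input: [{ role: "user", content: "Summarize this repository." }],
  outputChoices: [{ role: "assistant", content: "An MCP server for PostHog." }],
  inputTokens: 42,
  outputTokens: 117,
  latency: 1.8,                 // seconds
  httpStatus: 200,
  baseUrl: "https://api.openai.com/v1",
};

// The system prompt asks the model to supply all eleven parameters.
console.log(Object.keys(trackArgs).length); // prints 11
```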


```diff
@@ -63,7 +63,7 @@ async function capturePosthogLlmObservability(
   distinctId: trackArgs.userId,
   properties: posthogProperties,
 });
-methodLogger.error(`Got the response from the controller`, result);
+methodLogger.debug(`Got the response from the controller`, result);
 // Format the response for the MCP tool
 return {
```
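The hunk above shows the capture path inside `capturePosthogLlmObservability`: the tool's `userId` becomes the event's `distinctId`, while the remaining arguments are folded into `posthogProperties`. The real mapping lives in the project's controller, which this diff does not show; the sketch below assumes PostHog-style `$ai_*` property names, so treat both the function and the key names as illustrative:

```typescript
// Sketch: translating the tool's arguments into PostHog-style event
// properties. The $ai_* keys follow PostHog's LLM observability naming
// convention, but this exact mapping is an assumption, not taken from
// the project's source.
type TrackArgs = {
  userId: string;
  traceId: string;
  model: string;
  provider: string;
  input: unknown;
  outputChoices: unknown;
  inputTokens: number;
  outputTokens: number;
  latency: number;
  httpStatus: number;
  baseUrl: string;
};

function toPosthogProperties(args: TrackArgs): Record<string, unknown> {
  return {
    $ai_trace_id: args.traceId,
    $ai_model: args.model,
    $ai_provider: args.provider,
    $ai_input: args.input,
    $ai_output_choices: args.outputChoices,
    $ai_input_tokens: args.inputTokens,
    $ai_output_tokens: args.outputTokens,
    $ai_latency: args.latency,
    $ai_http_status: args.httpStatus,
    $ai_base_url: args.baseUrl,
    // userId is deliberately absent: it is sent as the event's
    // distinctId (see `distinctId: trackArgs.userId` in the diff).
  };
}

const props = toPosthogProperties({
  userId: "alice",
  traceId: "t-1",
  model: "gpt-4",
  provider: "openai",
  input: "hello",
  outputChoices: "hi there",
  inputTokens: 3,
  outputTokens: 4,
  latency: 0.4,
  httpStatus: 200,
  baseUrl: "https://api.openai.com/v1",
});
console.log(props["$ai_model"]); // prints "gpt-4"
```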