14 Commits
v0.0.2 ... main

Author SHA1 Message Date
5b9f3696a0 Tag v0.0.6 2025-07-17 22:42:11 -05:00
825788eec3 build(deps): update @google/gemini-cli-core to ^0.1.12
- Update gemini-cli-core dependency to latest version
- Refactor imports in chatwrapper.ts to use direct module imports
- Change auth type to use string literal 'oauth-personal'
2025-07-17 22:41:03 -05:00
f83a1b3957 Tag v0.0.5 2025-06-30 18:31:19 -05:00
af3a52bac6 feat(api): add model field and root endpoint
Add a model field to the gemini request mapping and implement a new
root endpoint that returns a plain text status message.
2025-06-30 18:30:51 -05:00
e7eb40ba4e Added donation section to README 2025-06-30 16:53:44 -05:00
0932a9a3e5 Tag v0.0.4 2025-06-30 16:46:43 -05:00
f286ab3d38 feat(auth): add auto-generation of oauth credentials
Implement functionality to create the oauth_creds.json file from
environment variables (ACCESS_TOKEN, REFRESH_TOKEN, EXPIRY_DATE)
if the file is missing. Also update documentation, docker-compose,
and build scripts to support this new feature.
2025-06-30 16:45:49 -05:00
04d888ae69 Tag v0.0.3 2025-06-30 15:53:29 -05:00
190442a8cf feat(auth): add api key authentication
Implement API key authentication by introducing a new auth module.
Update configuration and .env.example to support API key setup, and add
authorization checks in the server endpoints.
2025-06-30 12:32:44 -05:00
37f0c4b643 Added model and role translation. Rewrite of code's comments. 2025-06-30 12:01:00 -05:00
2370a798d1 Fixed error with writing after response already sent to client 2025-06-28 16:34:13 -05:00
6f3fbe2a6a More linting fixes 2025-06-28 16:08:23 -05:00
10a6502f73 Linting fixes. Fixed crash when hitting rate limit 2025-06-28 15:47:33 -05:00
75dc51bcb1 Implemented getModels endpoint with the models provided by the api. Fixed some typescript errors, created types file. Added consola for logging. 2025-06-28 13:50:05 -05:00
18 changed files with 1355 additions and 386 deletions

31
.dockerignore Normal file

@@ -0,0 +1,31 @@
# Dependencies
node_modules/
npm-debug.log
yarn-debug.log*
yarn-error.log*
# Environment variables
.env
.env.example
# Build output
dist/
build/
coverage/
# Development
profile/
*.test.ts
*.spec.ts
# Version control
.git/
.gitignore
# IDE
.vscode/
.idea/
# Docker
Dockerfile
docker-compose.yml


@@ -1 +1,10 @@
 PORT=11434
+VERBOSE=false
+API_KEY=MY0P3NA1K3Y
+ACCESS_TOKEN=MYACC3SS_T0K3N
+REFRESH_TOKEN=MYR3FR3SH_T0K3N
+EXPIRY_DATE=1234567890
+# Docker
+DOCKER_REGISTRY=
+DOCKER_REGISTRY_USER=
+DOCKER_HUB_USER=
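The `API_KEY` value above is a placeholder. One way to generate a suitably random key for your own `.env` (assuming `openssl` is available; any long random string works):

```shell
# Generate a 64-character hex API key and print the line to paste into .env.
API_KEY=$(openssl rand -hex 32)
echo "API_KEY=$API_KEY"
```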

5
.gitignore vendored

@@ -21,4 +21,7 @@ profile/
 dist/
 # Environment variables
 .env
+# Roo Modes
+.roomodes

1
.prettierignore Normal file

@@ -0,0 +1 @@
*.*

11
.vscode/settings.json vendored Normal file

@@ -0,0 +1,11 @@
{
"editor.formatOnSave": true,
"editor.codeActionsOnSave": [
"source.fixAll.eslint"
],
"eslint.validate": ["javascript", "typescript"],
"prettier.singleQuote": true,
"cSpell.ignorePaths" : [
"src"
]
}

26
Dockerfile Normal file

@@ -0,0 +1,26 @@
# Use an official Node.js runtime as a parent image
FROM node:22.15-slim
# Set the working directory in the container
WORKDIR /usr/src/app
# Create directory for oauth credentials
RUN mkdir -p /root/.gemini
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install any needed packages specified in package.json
RUN npm install
# Bundle app source
COPY . .
# Build the typescript code
RUN npm run build
# Make port 4343 available to the world outside this container
EXPOSE 4343
# Define the command to run the app
CMD [ "npm", "start" ]
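The image above can be built and started roughly as sketched below; the tag name and the use of `--env-file` are illustrative choices, not taken from the repo:

```shell
# Assemble the build/run commands for the image defined by this Dockerfile.
# The tag and --env-file flag are assumptions; adapt them to your setup.
BUILD_CMD='docker build -t gemini-cli-openai-api .'
RUN_CMD='docker run -d -p 4343:4343 --env-file .env gemini-cli-openai-api'
echo "$BUILD_CMD"
echo "$RUN_CMD"
# Execute the two printed commands once Docker is installed and .env is populated.
```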

215
README.md

@@ -1,76 +1,163 @@
-# Gemini ↔︎ OpenAI Proxy
-Serve **Google Gemini 2.5 Pro** (or Flash) through an **OpenAI-compatible API**.
-Plug-and-play with clients that already speak OpenAI—SillyTavern, llama.cpp, LangChain, the VS Code *Cline* extension, etc.
----
-## ✨ Features
-| ✔ | Feature | Notes |
-|---|---------|-------|
-| `/v1/chat/completions` | Non-stream & stream (SSE) | Works with curl, ST, LangChain… |
-| Vision support | `image_url` → Gemini `inlineData` | |
-| Function / Tool calling | OpenAI “functions” → Gemini Tool Registry | |
-| Reasoning / chain-of-thought | Sends `enable_thoughts:true`, streams `<think>` chunks | ST shows grey bubbles |
-| 1 M-token context | Proxy auto-lifts Gemini CLIs default 200 k cap | |
-| CORS | Enabled (`*`) by default | Ready for browser apps |
-| Zero external deps | Node 22 + TypeScript only | No Express |
----
-## 🚀 Quick start (local)
-```bash
-git clone https://huggingface.co/engineofperplexity/gemini-openai-proxy
-cd gemini-openai-proxy
-npm ci # install deps & ts-node
-# launch on port 11434
-npx ts-node src/server.ts
-Optional env vars
-PORT=3000  change listen port
-GEMINI_API_KEY=<key>  use your own key
-Minimal curl test
-bash
-Copy
-Edit
-curl -X POST http://localhost:11434/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -d '{
-    "model": "gemini-2.5-pro-latest",
-    "messages":[{"role":"user","content":"Hello Gemini!"}]
-  }'
-SillyTavern settings
-Field Value
-API Base URL http://127.0.0.1:11434/v1
-Model gemini-2.5-pro-latest
-Streaming On
-Reasoning On → grey <think> lines appear
-🐳 Docker
-bash
-Copy
-Edit
-# build once
-docker build -t gemini-openai-proxy .
-# run
-docker run -p 11434:11434 \
-  -e GEMINI_API_KEY=$GEMINI_API_KEY \
-  gemini-openai-proxy
-🗂 Project layout
-pgsql
-Copy
-Edit
-src/
-  server.ts       minimalist HTTP server
-  mapper.ts       OpenAI ⇄ Gemini transforms
-  chatwrapper.ts  thin wrapper around @google/genai
-  remoteimage.ts  fetch + base64 for vision
-package.json      deps & scripts
-Dockerfile
-README.md
-📜 License
-MIT free for personal & commercial use.
+# Gemini CLI OpenAI API Proxy
+This project provides a lightweight proxy server that translates OpenAI API requests to the Google Gemini API, utilizing the `@google/gemini-cli` for authentication and request handling.
+## Features
+* **OpenAI API Compatibility:** Acts as a drop-in replacement for services that use the OpenAI API format.
+* **Google Gemini Integration:** Leverages the power of Google's Gemini models.
+* **Authentication:** Uses `gemini-cli` for secure OAuth2 authentication with Google.
+* **Docker Support:** Includes `Dockerfile` and `docker-compose.yml` for easy containerized deployment.
+* **Hugging Face Spaces Ready:** Can be easily deployed as a Hugging Face Space.
+## Support the Project
+If you find this project useful, consider supporting its development:
+[![Donate using Liberapay][liberapay-logo]][liberapay-link]
+[liberapay-logo]: https://liberapay.com/assets/widgets/donate.svg "Liberapay Logo"
+[liberapay-link]: https://liberapay.com/sfiorini/donate
+## Prerequisites
+Before you begin, ensure you have the following installed:
+* [Node.js](https://nodejs.org/) (v18 or higher)
+* [npm](https://www.npmjs.com/)
+* [Docker](https://www.docker.com/) (for containerized deployment)
+* [Git](https://git-scm.com/)
+## Local Installation and Setup
+1. **Clone the repository:**
+   ```bash
+   git clone https://github.com/your-username/gemini-cli-openai-api.git
+   cd gemini-cli-openai-api
+   ```
+2. **Install project dependencies:**
+   ```bash
+   npm install
+   ```
+3. **Install the Gemini CLI and Authenticate:**
+   This is a crucial step to authenticate with your Google account and generate the necessary credentials.
+   ```bash
+   npm install -g @google/gemini-cli
+   gemini auth login
+   ```
+   Follow the on-screen instructions to log in with your Google account. This will create a file at `~/.gemini/oauth_creds.json` containing your authentication tokens.
+4. **Configure Environment Variables:**
+   Create a `.env` file by copying the example file:
+   ```bash
+   cp .env.example .env
+   ```
+   Open the `.env` file and set the following variables:
+   * `PORT`: The port the server will run on (default: `11434`).
+   * `API_KEY`: A secret key to protect your API endpoint. You can generate a strong random string for this.
+## Running the Project
+### Development Mode
+To run the server in development mode with hot-reloading:
+```bash
+npm run dev
+```
+The server will be accessible at `http://localhost:11434` (or the port you specified).
+### Production Mode
+To build and run the server in production mode:
+```bash
+npm run build
+npm start
+```
+## Docker Deployment
+### Using Docker Compose
+The easiest way to deploy the project with Docker is by using the provided `docker-compose.yml` file.
+1. **Authentication:**
+   The Docker container needs access to your OAuth credentials. You have two options:
+   * **Option A (Recommended): Mount the credentials file.**
+     Uncomment the `volumes` section in `docker-compose.yml` to mount your local `oauth_creds.json` file into the container.
+     ```yaml
+     volumes:
+       - ~/.gemini/oauth_creds.json:/root/.gemini/oauth_creds.json
+     ```
+   * **Option B: Use environment variables.**
+     If you cannot mount the file, you can set the `ACCESS_TOKEN`, `REFRESH_TOKEN`, and `EXPIRY_DATE` environment variables in the `docker-compose.yml` file. You can get these values from your `~/.gemini/oauth_creds.json` file.
+2. **Configure `docker-compose.yml`:**
+   Open `docker-compose.yml` and set the `API_KEY` and other environment variables as needed.
+3. **Start the container:**
+   ```bash
+   docker-compose up -d
+   ```
+   The server will be running on the port specified in the `ports` section of the `docker-compose.yml` file (e.g., `4343`).
+### Building the Docker Image Manually
+If you need to build the Docker image yourself:
+```bash
+docker build -t gemini-cli-openai-api .
+```
+Then you can run the container with the appropriate environment variables and volume mounts.
+## Hugging Face Spaces Deployment
+You can deploy this project as a Docker Space on Hugging Face.
+1. **Create a new Space:**
+   * Go to [huggingface.co/new-space](https://huggingface.co/new-space).
+   * Choose a name for your space.
+   * Select "Docker" as the Space SDK.
+   * Choose "From scratch".
+   * Create the space.
+2. **Upload the project files:**
+   * Upload all the project files (including the `Dockerfile`) to your new Hugging Face Space repository. You can do this via the web interface or by cloning the space's repository and pushing the files.
+3. **Configure Secrets:**
+   * In your Space's settings, go to the "Secrets" section.
+   * Add the following secrets. You can get the values for the first three from your `~/.gemini/oauth_creds.json` file.
+     * `ACCESS_TOKEN`: Your Google OAuth access token.
+     * `REFRESH_TOKEN`: Your Google OAuth refresh token.
+     * `EXPIRY_DATE`: The expiry date of your access token.
+     * `API_KEY`: The secret API key you want to use to protect your endpoint.
+     * `PORT`: The port the application should run on inside the container (e.g., `7860`, which is a common default for Hugging Face Spaces).
+4. **Update Dockerfile (if necessary):**
+   * The provided `Dockerfile` exposes port `4343`. If Hugging Face requires a different port (like `7860`), you may need to update the `EXPOSE` instruction in the `Dockerfile`.
+5. **Deploy:**
+   * Hugging Face Spaces will automatically build and deploy your Docker container when you push changes to the repository. Check the "Logs" to monitor the build and deployment process.
+Your Gemini-powered OpenAI proxy will now be running on your Hugging Face Space!
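Once the server is running (locally or in a container), a quick smoke test can be done with a standard OpenAI-style chat request. The model name and the Bearer-style `Authorization` header below are assumptions, not taken from the repo; check the server source for the exact auth scheme it expects:

```shell
# Write a sample OpenAI-style chat-completions request body to a file.
# "gemini-2.5-pro" is an assumed model name; adjust to what the proxy lists.
cat > /tmp/chat-request.json <<'EOF'
{
  "model": "gemini-2.5-pro",
  "messages": [{ "role": "user", "content": "Hello Gemini!" }]
}
EOF
echo "wrote /tmp/chat-request.json"
# With the proxy listening on its configured port (e.g. 11434):
# curl -X POST http://localhost:11434/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer $API_KEY" \
#   -d @/tmp/chat-request.json
```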

20
docker-compose.yml Normal file

@@ -0,0 +1,20 @@
version: '3.8'
services:
gemini-cli-openai-api:
container_name: gemini-cli-openai-api
image: sfiorini/gemini-cli-openai-api:latest
ports:
- "4343:4343"
# Enable sharing a pre existing OAuth credentials file
# to avoid the need to set environment variables.
# volumes:
# - ~/.gemini/oauth_creds.json:/root/.gemini/oauth_creds.json
environment:
- TZ=America/Chicago
- PORT=4343
- VERBOSE=false
- API_KEY=MY0P3NA1K3Y
- ACCESS_TOKEN=MYACC3SS_T0K3N
- REFRESH_TOKEN=MYR3FR3SH_T0K3N
- EXPIRY_DATE=1234567890
restart: unless-stopped
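The `ACCESS_TOKEN`/`REFRESH_TOKEN`/`EXPIRY_DATE` values above can be pulled out of `~/.gemini/oauth_creds.json`. The field names and the flat, single-line JSON this `sed` sketch assumes are assumptions; verify them against your actual file (a proper JSON tool like `jq` is more robust):

```shell
# Self-contained demo: extract token fields from a sample credentials file.
# Field names (access_token, refresh_token, expiry_date) are assumed.
CREDS=$(mktemp)
cat > "$CREDS" <<'EOF'
{"access_token":"ya29.sample","refresh_token":"1//sample","expiry_date":1234567890}
EOF
ACCESS_TOKEN=$(sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p' "$CREDS")
REFRESH_TOKEN=$(sed -n 's/.*"refresh_token":"\([^"]*\)".*/\1/p' "$CREDS")
EXPIRY_DATE=$(sed -n 's/.*"expiry_date":\([0-9]*\).*/\1/p' "$CREDS")
echo "ACCESS_TOKEN=$ACCESS_TOKEN REFRESH_TOKEN=$REFRESH_TOKEN EXPIRY_DATE=$EXPIRY_DATE"
rm -f "$CREDS"
```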


@@ -9,7 +9,7 @@ export default tseslint.config(
   ...tseslint.configs.strictTypeChecked,
   ...tseslint.configs.stylisticTypeChecked,
   {
-    ignores: ['**/node_modules/*', '**/*.mjs', '**/*.js', 'src/mapper.ts'],
+    ignores: ['**/node_modules/*', '**/*.mjs', '**/*.js'],
   },
   {
     languageOptions: {

477
package-lock.json generated

@@ -1,15 +1,16 @@
 {
   "name": "gemini-cli-openai-api",
-  "version": "0.0.2",
+  "version": "0.0.6",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "gemini-cli-openai-api",
-      "version": "0.0.2",
+      "version": "0.0.6",
       "license": "MIT",
       "dependencies": {
-        "@google/gemini-cli-core": "^0.1.7",
+        "@google/gemini-cli-core": "^0.1.12",
+        "consola": "^3.4.2",
         "dotenv": "^17.0.0",
         "zod": "^3.25.67"
       },
@@ -18,6 +19,7 @@
         "@stylistic/eslint-plugin": "^5.0.0",
         "@types/node": "^24.0.6",
         "bumpp": "^10.2.0",
+        "dotenv-cli": "^8.0.0",
         "eslint": "^9.30.0",
         "eslint-plugin-n": "^17.20.0",
         "jiti": "^2.4.2",
@@ -767,11 +769,11 @@
       }
     },
     "node_modules/@google/gemini-cli-core": {
-      "version": "0.1.7",
-      "resolved": "https://registry.npmjs.org/@google/gemini-cli-core/-/gemini-cli-core-0.1.7.tgz",
-      "integrity": "sha512-V3KYamCruqhBSoWNvWm5MJn6EwwZVv/129h0f2SFVfgJP759QVAvcnT4nGq18Jf5nNqDkq01Uug3yR/NfGJN+g==",
+      "version": "0.1.12",
+      "resolved": "https://registry.npmjs.org/@google/gemini-cli-core/-/gemini-cli-core-0.1.12.tgz",
+      "integrity": "sha512-oI6DYfzHztROW65b0kzIBP9Lu3jgP9LCE203A60tQY8JRBWtLyStYa7Wn0RQNl8v/Ym1C6xYcuFcJVEs1tFUIQ==",
       "dependencies": {
-        "@google/genai": "^1.4.0",
+        "@google/genai": "1.8.0",
         "@modelcontextprotocol/sdk": "^1.11.0",
         "@opentelemetry/api": "^1.9.0",
         "@opentelemetry/exporter-logs-otlp-grpc": "^0.52.0",
@@ -781,41 +783,52 @@
         "@opentelemetry/sdk-node": "^0.52.0",
         "@types/glob": "^8.1.0",
         "@types/html-to-text": "^9.0.4",
+        "ajv": "^8.17.1",
         "diff": "^7.0.0",
-        "dotenv": "^16.4.7",
-        "gaxios": "^6.1.1",
+        "dotenv": "^17.1.0",
+        "gaxios": "^7.1.1",
         "glob": "^10.4.5",
         "google-auth-library": "^9.11.0",
         "html-to-text": "^9.0.5",
         "ignore": "^7.0.0",
         "micromatch": "^4.0.8",
         "open": "^10.1.2",
-        "shell-quote": "^1.8.2",
+        "shell-quote": "^1.8.3",
         "simple-git": "^3.28.0",
         "strip-ansi": "^7.1.0",
         "undici": "^7.10.0",
         "ws": "^8.18.0"
       },
       "engines": {
-        "node": ">=18"
+        "node": ">=20"
       }
     },
-    "node_modules/@google/gemini-cli-core/node_modules/dotenv": {
-      "version": "16.6.1",
-      "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.6.1.tgz",
-      "integrity": "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow==",
-      "license": "BSD-2-Clause",
-      "engines": {
-        "node": ">=12"
+    "node_modules/@google/gemini-cli-core/node_modules/ajv": {
+      "version": "8.17.1",
+      "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.17.1.tgz",
+      "integrity": "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==",
+      "license": "MIT",
+      "dependencies": {
+        "fast-deep-equal": "^3.1.3",
+        "fast-uri": "^3.0.1",
+        "json-schema-traverse": "^1.0.0",
+        "require-from-string": "^2.0.2"
       },
       "funding": {
-        "url": "https://dotenvx.com"
+        "type": "github",
+        "url": "https://github.com/sponsors/epoberezkin"
       }
     },
+    "node_modules/@google/gemini-cli-core/node_modules/json-schema-traverse": {
+      "version": "1.0.0",
+      "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz",
+      "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==",
+      "license": "MIT"
+    },
     "node_modules/@google/genai": {
-      "version": "1.7.0",
-      "resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.7.0.tgz",
-      "integrity": "sha512-s/OZLkrIfBwc+SFFaZoKdEogkw4in0YRTGc4Q483jnfchNBWzrNe560eZEfGJHQRPn6YfzJgECCx0sqEOMWvYw==",
+      "version": "1.8.0",
+      "resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.8.0.tgz",
+      "integrity": "sha512-n3KiMFesQCy2R9iSdBIuJ0JWYQ1HZBJJkmt4PPZMGZKvlgHhBAGw1kUMyX+vsAIzprN3lK45DI755lm70wPOOg==",
       "license": "Apache-2.0",
       "dependencies": {
         "google-auth-library": "^9.14.2",
@@ -1051,9 +1064,9 @@
       "license": "MIT"
     },
     "node_modules/@modelcontextprotocol/sdk": {
-      "version": "1.13.2",
-      "resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.13.2.tgz",
-      "integrity": "sha512-Vx7qOcmoKkR3qhaQ9qf3GxiVKCEu+zfJddHv6x3dY/9P6+uIwJnmuAur5aB+4FDXf41rRrDnOEGkviX5oYZ67w==",
+      "version": "1.16.0",
+      "resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.16.0.tgz",
+      "integrity": "sha512-8ofX7gkZcLj9H9rSd50mCgm3SSF8C7XoclxJuLoV0Cz3rEQ1tv9MZRYYvJtm9n1BiEQQMzSmE/w2AEkNacLYfg==",
       "license": "MIT",
       "dependencies": {
         "ajv": "^6.12.6",
@@ -1061,6 +1074,7 @@
         "cors": "^2.8.5",
         "cross-spawn": "^7.0.5",
         "eventsource": "^3.0.2",
+        "eventsource-parser": "^3.0.0",
         "express": "^5.0.1",
         "express-rate-limit": "^7.5.0",
         "pkce-challenge": "^5.0.0",
@@ -2369,27 +2383,6 @@
         "node": ">= 0.6"
       }
     },
-    "node_modules/accepts/node_modules/mime-db": {
-      "version": "1.54.0",
-      "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz",
-      "integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==",
-      "license": "MIT",
-      "engines": {
-        "node": ">= 0.6"
-      }
-    },
-    "node_modules/accepts/node_modules/mime-types": {
-      "version": "3.0.1",
-      "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.1.tgz",
-      "integrity": "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA==",
-      "license": "MIT",
-      "dependencies": {
-        "mime-db": "^1.54.0"
-      },
-      "engines": {
-        "node": ">= 0.6"
-      }
-    },
     "node_modules/acorn": {
       "version": "8.15.0",
       "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz",
@@ -2422,9 +2415,9 @@
       }
     },
     "node_modules/agent-base": {
-      "version": "7.1.3",
-      "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.3.tgz",
-      "integrity": "sha512-jRR5wdylq8CkOe6hei19GGZnxM6rBGwFl3Bg0YItGDimvjGtAvdZk4Pu6Cl4u4Igsws4a1fd1Vq3ezrhn4KmFw==",
+      "version": "7.1.4",
+      "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz",
+      "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==",
       "license": "MIT",
       "engines": {
         "node": ">= 14"
@@ -2538,9 +2531,9 @@
       "license": "MIT"
     },
     "node_modules/bignumber.js": {
-      "version": "9.3.0",
-      "resolved": "https://registry.npmjs.org/bignumber.js/-/bignumber.js-9.3.0.tgz",
-      "integrity": "sha512-EM7aMFTXbptt/wZdMlBv2t8IViwQL+h6SLHosp8Yf0dqJMTnY6iL32opnAB6kAdL0SZPuvcAzFr31o0c/R3/RA==",
+      "version": "9.3.1",
+      "resolved": "https://registry.npmjs.org/bignumber.js/-/bignumber.js-9.3.1.tgz",
+      "integrity": "sha512-Ko0uX15oIUS7wJ3Rb30Fs6SkVbLmPBAKdlm7q9+ak9bbIeFf0MwuBsQV6z7+X768/cHsfg+WlysDWJcmthjsjQ==",
       "license": "MIT",
       "engines": {
         "node": "*"
@@ -2941,7 +2934,6 @@
       "version": "3.4.2",
       "resolved": "https://registry.npmjs.org/consola/-/consola-3.4.2.tgz",
       "integrity": "sha512-5IKcdX0nnYavi6G7TtOhwkYzyjfJlatbjMjuLSfE2kYT5pMDOilZ4OvMhi637CcDICTmz3wARPoyhqyX1Y+XvA==",
-      "dev": true,
       "license": "MIT",
       "engines": {
         "node": "^14.18.0 || >=16.10.0"
@@ -3013,6 +3005,15 @@
         "node": ">= 8"
       }
     },
+    "node_modules/data-uri-to-buffer": {
+      "version": "4.0.1",
+      "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.1.tgz",
+      "integrity": "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==",
+      "license": "MIT",
+      "engines": {
+        "node": ">= 12"
+      }
+    },
     "node_modules/debug": {
       "version": "4.4.1",
       "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.1.tgz",
@@ -3174,9 +3175,9 @@
       }
     },
     "node_modules/dotenv": {
-      "version": "17.0.0",
-      "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-17.0.0.tgz",
-      "integrity": "sha512-A0BJ5lrpJVSfnMMXjmeO0xUnoxqsBHWCoqqTnGwGYVdnctqXXUEhJOO7LxmgxJon9tEZFGpe0xPRX0h2v3AANQ==",
+      "version": "17.2.0",
+      "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-17.2.0.tgz",
+      "integrity": "sha512-Q4sgBT60gzd0BB0lSyYD3xM4YxrXA9y4uBDof1JNYGzOXrQdQ6yX+7XIAqoFOGQFOTK1D3Hts5OllpxMDZFONQ==",
       "license": "BSD-2-Clause",
       "engines": {
         "node": ">=12"
@@ -3185,6 +3186,45 @@
       "url": "https://dotenvx.com"
       }
     },
+    "node_modules/dotenv-cli": {
+      "version": "8.0.0",
+      "resolved": "https://registry.npmjs.org/dotenv-cli/-/dotenv-cli-8.0.0.tgz",
+      "integrity": "sha512-aLqYbK7xKOiTMIRf1lDPbI+Y+Ip/wo5k3eyp6ePysVaSqbyxjyK3dK35BTxG+rmd7djf5q2UPs4noPNH+cj0Qw==",
+      "dev": true,
+      "license": "MIT",
+      "dependencies": {
+        "cross-spawn": "^7.0.6",
+        "dotenv": "^16.3.0",
+        "dotenv-expand": "^10.0.0",
+        "minimist": "^1.2.6"
+      },
+      "bin": {
+        "dotenv": "cli.js"
+      }
+    },
+    "node_modules/dotenv-cli/node_modules/dotenv": {
+      "version": "16.6.1",
+      "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.6.1.tgz",
+      "integrity": "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow==",
+      "dev": true,
+      "license": "BSD-2-Clause",
+      "engines": {
+        "node": ">=12"
+      },
+      "funding": {
+        "url": "https://dotenvx.com"
+      }
+    },
+    "node_modules/dotenv-expand": {
+      "version": "10.0.0",
+      "resolved": "https://registry.npmjs.org/dotenv-expand/-/dotenv-expand-10.0.0.tgz",
+      "integrity": "sha512-GopVGCpVS1UKH75VKHGuQFqS1Gusej0z4FyQkPdwjil2gNIv+LNsqBlboOzpJFZKVT95GkCyWJbBSdFEFUWI2A==",
+      "dev": true,
+      "license": "BSD-2-Clause",
+      "engines": {
+        "node": ">=12"
+      }
+    },
     "node_modules/dts-resolver": {
       "version": "2.1.1",
       "resolved": "https://registry.npmjs.org/dts-resolver/-/dts-resolver-2.1.1.tgz",
@@ -3763,27 +3803,6 @@
         "express": ">= 4.11"
       }
     },
-    "node_modules/express/node_modules/mime-db": {
-      "version": "1.54.0",
-      "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz",
-      "integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==",
-      "license": "MIT",
-      "engines": {
-        "node": ">= 0.6"
-      }
-    },
-    "node_modules/express/node_modules/mime-types": {
-      "version": "3.0.1",
-      "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.1.tgz",
-      "integrity": "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA==",
-      "license": "MIT",
-      "dependencies": {
-        "mime-db": "^1.54.0"
-      },
-      "engines": {
-        "node": ">= 0.6"
-      }
-    },
     "node_modules/exsolve": {
       "version": "1.0.7",
       "resolved": "https://registry.npmjs.org/exsolve/-/exsolve-1.0.7.tgz",
@@ -3833,6 +3852,22 @@
       "dev": true,
       "license": "MIT"
     },
+    "node_modules/fast-uri": {
+      "version": "3.0.6",
+      "resolved": "https://registry.npmjs.org/fast-uri/-/fast-uri-3.0.6.tgz",
+      "integrity": "sha512-Atfo14OibSv5wAp4VWNsFYE1AchQRTv9cBGWET4pZWHzYshFSS9NQI6I57rdKn9croWVMbYFbLhJ+yJvmZIIHw==",
+      "funding": [
+        {
+          "type": "github",
+          "url": "https://github.com/sponsors/fastify"
+        },
+        {
+          "type": "opencollective",
+          "url": "https://opencollective.com/fastify"
+        }
+      ],
+      "license": "BSD-3-Clause"
+    },
     "node_modules/fastq": {
       "version": "1.19.1",
       "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.19.1.tgz",
@@ -3853,6 +3888,29 @@
         "walk-up-path": "^4.0.0"
       }
     },
+    "node_modules/fetch-blob": {
+      "version": "3.2.0",
+      "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz",
+      "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==",
+      "funding": [
+        {
+          "type": "github",
+          "url": "https://github.com/sponsors/jimmywarting"
+        },
+        {
+          "type": "paypal",
+          "url": "https://paypal.me/jimmywarting"
+        }
+      ],
+      "license": "MIT",
+      "dependencies": {
+        "node-domexception": "^1.0.0",
+        "web-streams-polyfill": "^3.0.3"
+      },
+      "engines": {
+        "node": "^12.20 || >= 14.13"
+      }
+    },
     "node_modules/file-entry-cache": {
       "version": "8.0.0",
       "resolved": "https://registry.npmjs.org/file-entry-cache/-/file-entry-cache-8.0.0.tgz",
@@ -3965,6 +4023,18 @@
         "node": ">=18.3.0"
       }
     },
+    "node_modules/formdata-polyfill": {
+      "version": "4.0.10",
+      "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz",
+      "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==",
+      "license": "MIT",
+      "dependencies": {
+        "fetch-blob": "^3.1.2"
+      },
+      "engines": {
+        "node": ">=12.20.0"
+      }
+    },
     "node_modules/forwarded": {
       "version": "0.2.0",
       "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz",
@@ -4008,6 +4078,34 @@
       }
     },
     "node_modules/gaxios": {
+      "version": "7.1.1",
+      "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-7.1.1.tgz",
+      "integrity": "sha512-Odju3uBUJyVCkW64nLD4wKLhbh93bh6vIg/ZIXkWiLPBrdgtc65+tls/qml+un3pr6JqYVFDZbbmLDQT68rTOQ==",
+      "license": "Apache-2.0",
+      "dependencies": {
+        "extend": "^3.0.2",
+        "https-proxy-agent": "^7.0.1",
+        "node-fetch": "^3.3.2"
+      },
+      "engines": {
+        "node": ">=18"
+      }
+    },
+    "node_modules/gcp-metadata": {
+      "version": "6.1.1",
+      "resolved": "https://registry.npmjs.org/gcp-metadata/-/gcp-metadata-6.1.1.tgz",
+      "integrity": "sha512-a4tiq7E0/5fTjxPAaH4jpjkSv/uCaU2p5KC6HVGrvl0cDjA8iBZv4vv1gyzlmK0ZUKqwpOyQMKzZQe3lTit77A==",
+      "license": "Apache-2.0",
+      "dependencies": {
+        "gaxios": "^6.1.1",
+        "google-logging-utils": "^0.0.2",
+        "json-bigint": "^1.0.0"
+      },
+      "engines": {
+        "node": ">=14"
+      }
+    },
+    "node_modules/gcp-metadata/node_modules/gaxios": {
       "version": "6.7.1",
       "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-6.7.1.tgz",
       "integrity": "sha512-LDODD4TMYx7XXdpwxAVRAIAuB0bzv0s+ywFonY46k126qzQHT9ygyoa9tncmOiQmmDrik65UYsEkv3lbfqQ3yQ==",
@@ -4023,18 +4121,24 @@
         "node": ">=14"
       }
     },
-    "node_modules/gcp-metadata": {
-      "version": "6.1.1",
-      "resolved": "https://registry.npmjs.org/gcp-metadata/-/gcp-metadata-6.1.1.tgz",
-      "integrity": "sha512-a4tiq7E0/5fTjxPAaH4jpjkSv/uCaU2p5KC6HVGrvl0cDjA8iBZv4vv1gyzlmK0ZUKqwpOyQMKzZQe3lTit77A==",
-      "license": "Apache-2.0",
-      "dependencies": {
-        "gaxios": "^6.1.1",
-        "google-logging-utils": "^0.0.2",
-        "json-bigint": "^1.0.0"
-      },
-      "engines": {
-        "node": ">=14"
+    "node_modules/gcp-metadata/node_modules/node-fetch": {
+      "version": "2.7.0",
+      "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz",
+      "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==",
+      "license": "MIT",
+      "dependencies": {
+        "whatwg-url": "^5.0.0"
+      },
+      "engines": {
+        "node": "4.x || >=6.0.0"
+      },
+      "peerDependencies": {
+        "encoding": "^0.1.0"
+      },
+      "peerDependenciesMeta": {
+        "encoding": {
+          "optional": true
+        }
       }
     },
     "node_modules/get-caller-file": {
@@ -4177,6 +4281,42 @@
         "node": ">=14"
       }
     },
+    "node_modules/google-auth-library/node_modules/gaxios": {
+      "version": "6.7.1",
+      "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-6.7.1.tgz",
+      "integrity": "sha512-LDODD4TMYx7XXdpwxAVRAIAuB0bzv0s+ywFonY46k126qzQHT9ygyoa9tncmOiQmmDrik65UYsEkv3lbfqQ3yQ==",
+      "license": "Apache-2.0",
+      "dependencies": {
+        "extend": "^3.0.2",
+        "https-proxy-agent": "^7.0.1",
+        "is-stream": "^2.0.0",
+        "node-fetch": "^2.6.9",
+        "uuid": "^9.0.1"
+      },
+      "engines": {
+        "node": ">=14"
+      }
+    },
+    "node_modules/google-auth-library/node_modules/node-fetch": {
+      "version": "2.7.0",
+      "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz",
+      "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==",
+      "license": "MIT",
+      "dependencies": {
+        "whatwg-url": "^5.0.0"
+      },
+      "engines": {
+        "node": "4.x || >=6.0.0"
+      },
+      "peerDependencies": {
+        "encoding": "^0.1.0"
+      },
+      "peerDependenciesMeta": {
+        "encoding": {
+          "optional": true
+        }
+      }
+    },
     "node_modules/google-logging-utils": {
       "version": "0.0.2",
       "resolved": "https://registry.npmjs.org/google-logging-utils/-/google-logging-utils-0.0.2.tgz",
@@ -4225,6 +4365,42 @@
         "node": ">=14.0.0"
       }
     },
+    "node_modules/gtoken/node_modules/gaxios": {
+      "version": "6.7.1",
+      "resolved": "https://registry.npmjs.org/gaxios/-/gaxios-6.7.1.tgz",
+      "integrity": "sha512-LDODD4TMYx7XXdpwxAVRAIAuB0bzv0s+ywFonY46k126qzQHT9ygyoa9tncmOiQmmDrik65UYsEkv3lbfqQ3yQ==",
+      "license": "Apache-2.0",
+      "dependencies": {
+        "extend": "^3.0.2",
+        "https-proxy-agent": "^7.0.1",
+        "is-stream": "^2.0.0",
+        "node-fetch": "^2.6.9",
+        "uuid": "^9.0.1"
+      },
+      "engines": {
+        "node": ">=14"
+      }
+    },
+    "node_modules/gtoken/node_modules/node-fetch": {
+      "version": "2.7.0",
+      "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz",
+      "integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==",
+      "license": "MIT",
+      "dependencies": {
+        "whatwg-url": "^5.0.0"
+      },
+      "engines": {
+        "node": "4.x || >=6.0.0"
+      },
+      "peerDependencies": {
+        "encoding": "^0.1.0"
+      },
+      "peerDependenciesMeta": {
+        "encoding": {
+          "optional": true
+        }
+      }
+    },
     "node_modules/has-flag": {
       "version": "4.0.0",
       "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
@@ -4840,6 +5016,27 @@
         "node": ">=8.6"
       }
     },
+    "node_modules/mime-db": {
+      "version": "1.54.0",
+      "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz",
+      "integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==",
+      "license": "MIT",
+      "engines": {
+        "node": ">= 0.6"
+      }
+    },
+    "node_modules/mime-types": {
+      "version": "3.0.1",
+      "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.1.tgz",
+      "integrity": "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA==",
+      "license": "MIT",
+      "dependencies": {
+        "mime-db": "^1.54.0"
+      },
+      "engines": {
+        "node": ">= 0.6"
+      }
+    },
     "node_modules/minimatch": {
       "version": "9.0.5",
       "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz",
@@ -4902,24 +5099,42 @@
"node": ">= 0.6" "node": ">= 0.6"
} }
}, },
"node_modules/node-domexception": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
"integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==",
"deprecated": "Use your platform's native DOMException instead",
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/jimmywarting"
},
{
"type": "github",
"url": "https://paypal.me/jimmywarting"
}
],
"license": "MIT",
"engines": {
"node": ">=10.5.0"
}
},
"node_modules/node-fetch": { "node_modules/node-fetch": {
"version": "2.7.0", "version": "3.3.2",
"resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.7.0.tgz", "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.3.2.tgz",
"integrity": "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A==", "integrity": "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"whatwg-url": "^5.0.0" "data-uri-to-buffer": "^4.0.0",
"fetch-blob": "^3.1.4",
"formdata-polyfill": "^4.0.10"
}, },
"engines": { "engines": {
"node": "4.x || >=6.0.0" "node": "^12.20.0 || ^14.13.1 || >=16.0.0"
}, },
"peerDependencies": { "funding": {
"encoding": "^0.1.0" "type": "opencollective",
}, "url": "https://opencollective.com/node-fetch"
"peerDependenciesMeta": {
"encoding": {
"optional": true
}
} }
}, },
"node_modules/node-fetch-native": { "node_modules/node-fetch-native": {
@@ -5426,6 +5641,15 @@
"node": ">=0.10.0" "node": ">=0.10.0"
} }
}, },
"node_modules/require-from-string": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz",
"integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/require-in-the-middle": { "node_modules/require-in-the-middle": {
"version": "7.5.2", "version": "7.5.2",
"resolved": "https://registry.npmjs.org/require-in-the-middle/-/require-in-the-middle-7.5.2.tgz", "resolved": "https://registry.npmjs.org/require-in-the-middle/-/require-in-the-middle-7.5.2.tgz",
@@ -5685,27 +5909,6 @@
"node": ">= 18" "node": ">= 18"
} }
}, },
"node_modules/send/node_modules/mime-db": {
"version": "1.54.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz",
"integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/send/node_modules/mime-types": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.1.tgz",
"integrity": "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA==",
"license": "MIT",
"dependencies": {
"mime-db": "^1.54.0"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/serve-static": { "node_modules/serve-static": {
"version": "2.2.0", "version": "2.2.0",
"resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.0.tgz", "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-2.2.0.tgz",
@@ -6271,27 +6474,6 @@
"node": ">= 0.6" "node": ">= 0.6"
} }
}, },
"node_modules/type-is/node_modules/mime-db": {
"version": "1.54.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.54.0.tgz",
"integrity": "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/type-is/node_modules/mime-types": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-3.0.1.tgz",
"integrity": "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA==",
"license": "MIT",
"dependencies": {
"mime-db": "^1.54.0"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/typescript": { "node_modules/typescript": {
"version": "5.8.3", "version": "5.8.3",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.8.3.tgz", "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.8.3.tgz",
@@ -6410,6 +6592,15 @@
"node": "20 || >=22" "node": "20 || >=22"
} }
}, },
"node_modules/web-streams-polyfill": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.3.tgz",
"integrity": "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw==",
"license": "MIT",
"engines": {
"node": ">= 8"
}
},
"node_modules/webidl-conversions": { "node_modules/webidl-conversions": {
"version": "3.0.1", "version": "3.0.1",
"resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz", "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
@@ -6581,9 +6772,9 @@
"license": "ISC" "license": "ISC"
}, },
"node_modules/ws": { "node_modules/ws": {
"version": "8.18.2", "version": "8.18.3",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.18.2.tgz", "resolved": "https://registry.npmjs.org/ws/-/ws-8.18.3.tgz",
"integrity": "sha512-DMricUmwGZUVr++AEAe2uiVM7UoO9MAVZMDu05UQOaUII0lp+zOzLLU4Xqh/JvTqklB1T4uELaaPBKyjE1r4fQ==", "integrity": "sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==",
"license": "MIT", "license": "MIT",
"engines": { "engines": {
"node": ">=10.0.0" "node": ">=10.0.0"

View File

@@ -1,11 +1,22 @@
{
"name": "gemini-cli-openai-api",
-  "version": "0.0.2",
+  "version": "0.0.6",
"main": "server.ts",
"scripts": {
"build": "tsdown",
"bump-release": "bumpp",
"dev": "tsx watch ./src/server.ts",
"docker": "npm run docker:build && npm run docker:push",
"docker:build": "npm run docker:build:version && npm run docker:tag:latest && npm run docker:build:du:version && npm run docker:tag:du:latest",
"docker:build:version": "dotenv -- bash -c 'docker build -t $DOCKER_REGISTRY/$DOCKER_REGISTRY_USER/$npm_package_name:v$npm_package_version .'",
"docker:build:du:version": "dotenv -- bash -c 'docker build -t $DOCKER_HUB_USER/$npm_package_name:v$npm_package_version .'",
"docker:push": "npm run docker:push:version && npm run docker:push:latest && npm run docker:push:du:version && npm run docker:push:du:latest",
"docker:push:latest": "dotenv -- bash -c 'docker push $DOCKER_REGISTRY/$DOCKER_REGISTRY_USER/$npm_package_name:latest'",
"docker:push:du:latest": "dotenv -- bash -c 'docker push $DOCKER_HUB_USER/$npm_package_name:latest'",
"docker:push:version": "dotenv -- bash -c 'docker push $DOCKER_REGISTRY/$DOCKER_REGISTRY_USER/$npm_package_name:v$npm_package_version'",
"docker:push:du:version": "dotenv -- bash -c 'docker push $DOCKER_HUB_USER/$npm_package_name:v$npm_package_version'",
"docker:tag:latest": "dotenv -- bash -c 'docker tag $DOCKER_REGISTRY/$DOCKER_REGISTRY_USER/$npm_package_name:v$npm_package_version $DOCKER_REGISTRY/$DOCKER_REGISTRY_USER/$npm_package_name:latest'",
"docker:tag:du:latest": "dotenv -- bash -c 'docker tag $DOCKER_HUB_USER/$npm_package_name:v$npm_package_version $DOCKER_HUB_USER/$npm_package_name:latest'",
"start": "node ./dist/server.js", "start": "node ./dist/server.js",
"knip": "knip", "knip": "knip",
"lint": "eslint --fix ." "lint": "eslint --fix ."
@@ -15,7 +26,8 @@
"license": "MIT", "license": "MIT",
"description": "", "description": "",
"dependencies": { "dependencies": {
"@google/gemini-cli-core": "^0.1.7", "@google/gemini-cli-core": "^0.1.12",
"consola": "^3.4.2",
"dotenv": "^17.0.0", "dotenv": "^17.0.0",
"zod": "^3.25.67" "zod": "^3.25.67"
}, },
@@ -24,6 +36,7 @@
"@stylistic/eslint-plugin": "^5.0.0", "@stylistic/eslint-plugin": "^5.0.0",
"@types/node": "^24.0.6", "@types/node": "^24.0.6",
"bumpp": "^10.2.0", "bumpp": "^10.2.0",
"dotenv-cli": "^8.0.0",
"eslint": "^9.30.0", "eslint": "^9.30.0",
"eslint-plugin-n": "^17.20.0", "eslint-plugin-n": "^17.20.0",
"jiti": "^2.4.2", "jiti": "^2.4.2",

src/auth.ts Normal file
View File

@@ -0,0 +1,77 @@
/**
* @fileoverview This file contains the authentication logic for the server.
*/
import http from 'http';
import { config } from './config';
import fs from 'fs/promises';
import path from 'path';
import os from 'os';
import consola from 'consola';
/**
* Ensures that the OAuth credentials file exists if the required environment
* variables are present.
*/
export async function ensureOAuthCredentials(): Promise<void> {
const geminiDir = path.join(os.homedir(), '.gemini');
const credsPath = path.join(geminiDir, 'oauth_creds.json');
try {
await fs.access(credsPath);
consola.info(`OAuth credentials file already exists at ${credsPath}`);
} catch {
consola.info(`OAuth credentials file not found at ${credsPath}.`);
if (config.ACCESS_TOKEN && config.REFRESH_TOKEN && config.EXPIRY_DATE) {
consola.info('Creating OAuth credentials file' +
' from environment variables.');
await fs.mkdir(geminiDir, { recursive: true });
const creds = {
access_token: config.ACCESS_TOKEN,
refresh_token: config.REFRESH_TOKEN,
token_type: 'Bearer',
expiry_date: config.EXPIRY_DATE,
};
await fs.writeFile(credsPath, JSON.stringify(creds, null, 2));
consola.info(`Successfully created ${credsPath}`);
} else {
consola.error(
'OAuth credentials file is missing and one or more required ' +
'environment variables are not set: ACCESS_TOKEN, REFRESH_TOKEN, EXPIRY_DATE.',
);
throw new Error('Missing OAuth credentials or environment variables.');
}
}
}
/**
* Checks for API key authentication.
* @param req - The HTTP incoming message object.
* @param res - The HTTP server response object.
* @returns True if the request is authorized, false otherwise.
*/
export function isAuthorized(
req: http.IncomingMessage,
res: http.ServerResponse,
): boolean {
if (!config.API_KEY) {
return true; // No key configured, public access.
}
const authHeader = req.headers.authorization;
if (!authHeader) {
res.writeHead(401, { 'Content-Type': 'application/json' });
res.end(
JSON.stringify({ error: { message: 'Missing Authorization header' } }),
);
return false;
}
const token = authHeader.split(' ')[1];
if (token !== config.API_KEY) {
res.writeHead(401, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: { message: 'Invalid API key' } }));
return false;
}
return true;
}
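The API-key gate in `isAuthorized` reduces to a small pure check; a minimal sketch for illustration (`checkAuth` is a hypothetical helper — the real function also writes the 401 response):

```typescript
// Minimal, dependency-free sketch of the Bearer-token check in src/auth.ts.
function checkAuth(
  configuredKey: string | undefined,
  authHeader: string | undefined,
): { ok: boolean, error?: string } {
  if (!configuredKey) {
    return { ok: true }; // No key configured, public access.
  }
  if (!authHeader) {
    return { ok: false, error: 'Missing Authorization header' };
  }
  // "Bearer <token>" -> take the part after the first space.
  const token = authHeader.split(' ')[1];
  if (token !== configuredKey) {
    return { ok: false, error: 'Invalid API key' };
  }
  return { ok: true };
}
```

Note that the scheme prefix itself is never validated; only the second space-separated field is compared against the configured key.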

View File

@@ -1,69 +1,208 @@
-// src/chatwrapper.ts
+/**
+ * @fileoverview This file provides a wrapper around the Gemini API, handling
+ * content generation, model management, and retry logic.
+ */
import {
  AuthType,
  createContentGeneratorConfig,
  createContentGenerator,
-} from '@google/gemini-cli-core/dist/src/core/contentGenerator.js';
+  ContentGenerator,
+  Config,
+  DEFAULT_GEMINI_MODEL,
+  DEFAULT_GEMINI_FLASH_MODEL } from '@google/gemini-cli-core';
+import { Content, GeminiResponse, Model } from './types.js';
+import consola from 'consola';
-/* ------------------------------------------------------------------ */
-/* 1. Build the ContentGenerator exactly like the CLI does            */
-/* ------------------------------------------------------------------ */
-let modelName: string; // we'll fill this once
-const generatorPromise = (async () => {
-  // Pass undefined for model so the helper falls back to DEFAULT_GEMINI_MODEL
-  const cfg = await createContentGeneratorConfig(
-    undefined, // let helper pick default (Gemini-2.5-Pro)
-    AuthType.LOGIN_WITH_GOOGLE_PERSONAL, // same mode the CLI defaults to
-  );
-  modelName = cfg.model; // remember the actual model string
-  return await createContentGenerator(cfg);
-})();
-/* ------------------------------------------------------------------ */
-/* 2. Helpers consumed by server.ts                                   */
-/* ------------------------------------------------------------------ */
+// ==================================================================
+// 1. ContentGenerator Management
+// ==================================================================
+/**
+ * A cache for ContentGenerator instances to avoid re-creating them.
+ * The key is the model name, or 'default' for the default model.
+ */
+const generatorCache = new Map<
+  string,
+  Promise<{
+    generator: ContentGenerator,
+    model: string,
+  }>
+>();
+/**
+ * Retrieves a ContentGenerator, creating and caching it if necessary.
+ * If an unsupported model is requested, it falls back to the default model.
+ *
+ * @param model - The name of the model to use.
+ * @returns A promise that resolves to an object containing
+ * the generator and the effective model name.
+ */
+function getGenerator(
+  model?: string,
+): Promise<{
+  generator: ContentGenerator,
+  model: string,
+}> {
+  // Fallback to default if the specified model is not supported.
+  const modelToUse =
+    model === DEFAULT_GEMINI_MODEL || model === DEFAULT_GEMINI_FLASH_MODEL
+      ? model
+      : undefined;
+  // Use the effective model name for the cache key.
+  const key = modelToUse ?? 'default';
+  if (generatorCache.has(key)) {
+    return generatorCache.get(key)!;
+  }
+  // Create and cache a new generator.
+  const generatorPromise = (async () => {
+    const cfg = await createContentGeneratorConfig(
+      modelToUse,
+      'oauth-personal' as AuthType, // Use OAuth for personal access
+    );
+    // Using core's createContentGenerator with minimal valid arguments
+    const generator =
+      await createContentGenerator(cfg, {} as unknown as Config);
+    return { generator, model: cfg.model };
+  })();
+  generatorCache.set(key, generatorPromise);
+  return generatorPromise;
+}
+// ==================================================================
+// 2. API Helpers
+// ==================================================================
type GenConfig = Record<string, unknown>;
+const MAX_RETRIES = 3;
+const INITIAL_RETRY_DELAY = 1000; // 1 second
+/**
+ * A higher-order function that adds retry logic with exponential backoff
+ * to an operation that may fail due to rate limiting.
+ *
+ * @param operation - The async operation to perform.
+ * @returns The result of the operation.
+ * @throws Throws an error if the operation fails after all retries.
+ */
+async function withRetry<T>(operation: () => Promise<T>): Promise<T> {
+  let retries = 0;
+  while (true) {
+    try {
+      return await operation();
+    } catch (error) {
+      // Only retry on 'RESOURCE_EXHAUSTED' errors.
+      if (!(error instanceof Error) ||
+        !error.message.includes('RESOURCE_EXHAUSTED') ||
+        retries >= MAX_RETRIES) {
+        throw error;
+      }
+      retries++;
+      const delay = INITIAL_RETRY_DELAY * Math.pow(2, retries - 1);
+      consola.error(
+        `Rate limit hit, retrying in ${delay}ms ` +
+        `(attempt ${retries}/${MAX_RETRIES})`,
+      );
+      await new Promise(resolve => setTimeout(resolve, delay));
+    }
+  }
+}
+/**
+ * Sends a chat request to the Gemini API.
+ *
+ * @param params - The request parameters.
+ * @param params.model - The model to use.
+ * @param params.contents - The chat history.
+ * @param params.generationConfig - Configuration for the generation.
+ * @returns The Gemini API response.
+ */
export async function sendChat({
+  model,
  contents,
  generationConfig = {},
}: {
-  contents: any[],
+  model?: string,
+  contents: Content[],
  generationConfig?: GenConfig,
  tools?: unknown, // accepted but ignored for now
-}) {
-  const generator: any = await generatorPromise;
-  return await generator.generateContent({
+}): Promise<GeminiResponse> {
+  const { generator, model: modelName } = await getGenerator(model);
+  const gResp = await withRetry(() => generator.generateContent({
    model: modelName,
    contents,
    config: generationConfig,
-  });
+  }));
+  return {
+    text: gResp.text ?? '',
+    usageMetadata: {
+      promptTokens: gResp.usageMetadata?.promptTokenCount ?? 0,
+      candidatesTokens: gResp.usageMetadata?.candidatesTokenCount ?? 0,
+      totalTokens: gResp.usageMetadata?.totalTokenCount ?? 0,
+    },
+  };
}
+/**
+ * Sends a streaming chat request to the Gemini API.
+ *
+ * @param params - The request parameters.
+ * @param params.model - The model to use.
+ * @param params.contents - The chat history.
+ * @param params.generationConfig - Configuration for the generation.
+ * @yields Chunks of the Gemini API response.
+ */
export async function* sendChatStream({
+  model,
  contents,
  generationConfig = {},
}: {
-  contents: any[],
+  model?: string,
+  contents: Content[],
  generationConfig?: GenConfig,
  tools?: unknown,
}) {
-  const generator: any = await generatorPromise;
-  const stream = await generator.generateContentStream({
+  const { generator, model: modelName } = await getGenerator(model);
+  const stream = await withRetry(() => generator.generateContentStream({
    model: modelName,
    contents,
    config: generationConfig,
-  });
+  }));
  for await (const chunk of stream) yield chunk;
}
-/* ------------------------------------------------------------------ */
-/* 3. Additional stubs to implement later                             */
-/* ------------------------------------------------------------------ */
-// export function listModels() {
-//   return [{ id: modelName }];
-// }
+/**
+ * Lists the available models.
+ *
+ * @returns An array of available models.
+ */
+export function listModels(): Model[] {
+  return [
+    {
+      id: DEFAULT_GEMINI_MODEL,
+      object: 'model',
+      owned_by: 'google',
+    },
+    {
+      id: DEFAULT_GEMINI_FLASH_MODEL,
+      object: 'model',
+      owned_by: 'google',
+    },
+  ];
+}
+// ==================================================================
+// 3. Future Implementations
+// ==================================================================
+// The embeddings endpoint is not yet implemented.
// export async function embed(_input: unknown) {
//   throw new Error('Embeddings endpoint not implemented yet.');
// }
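As a quick check of the backoff arithmetic in `withRetry`: the delay for retry n is `INITIAL_RETRY_DELAY * 2^(n-1)`. A small sketch with the constants copied from the diff (`backoffDelays` is a hypothetical helper, not part of the commit):

```typescript
// Delay schedule produced by withRetry's exponential backoff.
const MAX_RETRIES = 3;
const INITIAL_RETRY_DELAY = 1000; // 1 second

function backoffDelays(): number[] {
  const delays: number[] = [];
  for (let retries = 1; retries <= MAX_RETRIES; retries++) {
    // Same formula as in withRetry: INITIAL_RETRY_DELAY * 2^(retries - 1).
    delays.push(INITIAL_RETRY_DELAY * Math.pow(2, retries - 1));
  }
  return delays;
}
```

A request that keeps hitting RESOURCE_EXHAUSTED therefore waits 1 s, 2 s, and 4 s before the error is finally rethrown.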

View File

@@ -1,8 +1,49 @@
/**
* @fileoverview This file manages the application's configuration,
* loading environment variables and providing them in a structured object.
*/
/* eslint-disable n/no-process-env */
import dotenv from 'dotenv';
dotenv.config();
/**
* Application configuration object.
*/
export const config = {
-  // eslint-disable-next-line n/no-process-env
/**
* The port number for the server to listen on.
* Defaults to 11434 if not specified in the environment.
* @type {number}
*/
PORT: Number(process.env.PORT ?? 11434),
/**
* A flag to enable or disable verbose logging.
* Defaults to true if not specified in the environment.
* @type {boolean}
*/
VERBOSE: Boolean(process.env.VERBOSE ?? true),
/**
* The API key for securing the server.
* If not set, the server will be public.
* @type {string | undefined}
*/
API_KEY: process.env.API_KEY,
/**
* The access token for OAuth.
* @type {string | undefined}
*/
ACCESS_TOKEN: process.env.ACCESS_TOKEN,
/**
* The refresh token for OAuth.
* @type {string | undefined}
*/
REFRESH_TOKEN: process.env.REFRESH_TOKEN,
/**
* The expiry date for the access token.
* @type {number | undefined}
*/
EXPIRY_DATE: process.env.EXPIRY_DATE
? Number(process.env.EXPIRY_DATE)
: undefined,
};
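A note on the coercion above: `Number` and `Boolean` behave differently on env strings. A sketch of how the two defaults resolve (`parseConfig` is a hypothetical stand-in for the exported object literal):

```typescript
// Mirrors the coercion used in src/config.ts for PORT and VERBOSE.
function parseConfig(env: { PORT?: string, VERBOSE?: string }) {
  return {
    PORT: Number(env.PORT ?? 11434),
    // Caveat: Boolean('false') is true, so any non-empty VERBOSE string
    // (including the literal string 'false') enables verbose logging.
    VERBOSE: Boolean(env.VERBOSE ?? true),
  };
}
```

The VERBOSE case is worth knowing: an unset variable defaults to `true`, `VERBOSE=false` also yields `true`, and only an empty string (`VERBOSE=`) yields `false`.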

View File

@@ -1,95 +1,153 @@
-/* ------------------------------------------------------------------ */
-/*  mapper.ts   OpenAI ⇆ Gemini   (with reasoning/1 M context)        */
-/* ------------------------------------------------------------------ */
+/**
+ * @fileoverview This file contains the logic for mapping requests and
+ * responses between the OpenAI and Gemini API formats. It handles message
+ * conversion, vision support, and tool mapping.
+ */
import { fetchAndEncode } from './remoteimage';
-import { z } from 'zod';
-import { ToolRegistry } from '@google/gemini-cli-core/dist/src/tools/tool-registry.js';
+import { z, ZodRawShape } from 'zod';
+import { ToolRegistry }
+  from '@google/gemini-cli-core/dist/src/tools/tool-registry.js';
+import { Config } from '@google/gemini-cli-core/dist/src/config/config.js';
+import { Tool } from '@google/gemini-cli-core/dist/src/tools/tools.js';
+import {
+  Part,
+  RequestBody,
+  GeminiResponse,
+  GeminiStreamChunk,
+  GeminiRequestBody,
+  Content,
+} from './types';
-/* ------------------------------------------------------------------ */
-interface Part { text?: string; inlineData?: { mimeType: string, data: string } }
-/* ------------------------------------------------------------------ */
-function callLocalFunction(_name: string, _args: unknown) {
-  return { ok: true };
-}
+/**
+ * A placeholder for a local function call.
+ *
+ * @returns A promise that resolves to a successful execution result.
+ */
+async function callLocalFunction(/*_name: string, _args: unknown*/) {
+  return Promise.resolve({
+    ok: true,
+    llmContent: [],
+    returnDisplay: 'Function executed successfully',
+  });
+}
-/* ================================================================== */
-/*  Request mapper: OpenAI -> Gemini                                  */
-/* ================================================================== */
-export async function mapRequest(body: any) {
-  const parts: Part[] = [];
+// ==================================================================
+// Request Mapper: OpenAI -> Gemini
+// ==================================================================
+/**
+ * Maps an OpenAI-compatible request body to a Gemini-compatible format.
+ *
+ * @param body - The incoming OpenAI request body.
+ * @returns An object containing the mapped Gemini request and tools.
+ */
+export async function mapRequest(body: RequestBody) {
+  const contents: Content[] = [];
+  const systemParts: Part[] = [];
-  /* ---- convert messages & vision --------------------------------- */
+  // Convert messages and handle vision content.
  for (const m of body.messages) {
+    const parts: Part[] = [];
    if (Array.isArray(m.content)) {
      for (const item of m.content) {
-        if (item.type === 'image_url') {
+        if (item.type === 'image_url' && item.image_url) {
          parts.push({ inlineData: await fetchAndEncode(item.image_url.url) });
        } else if (item.type === 'text') {
          parts.push({ text: item.text });
        }
      }
-    } else {
+    } else if (m.content) {
      parts.push({ text: m.content });
    }
+    if (m.role === 'system') {
+      systemParts.push(...parts);
+      continue;
+    }
+    if (m.role === 'user') {
+      contents.push({ role: 'user', parts: [...systemParts, ...parts] });
+      systemParts.length = 0;
+    } else if (m.role === 'assistant') {
+      contents.push({ role: 'model', parts });
+    }
  }
-  /* ---- base generationConfig ------------------------------------- */
+  // Map generation configuration parameters.
  const generationConfig: Record<string, unknown> = {
    temperature: body.temperature,
    maxOutputTokens: body.max_tokens,
    topP: body.top_p,
-    ...(body.generationConfig ?? {}), // copy anything ST already merged
+    ...(body.generationConfig ?? {}), // Preserve existing ST-merged config.
  };
  if (body.include_reasoning === true) {
-    generationConfig.enable_thoughts = true;   // current flag
-    generationConfig.thinking_budget ??= 2048; // optional limit
+    // The current flag for enabling thoughts.
+    generationConfig.enable_thoughts = true;
+    // Optional limit for thinking budget.
+    generationConfig.thinking_budget ??= 2048;
  }
-  /* ---- auto-enable reasoning & 1 M context ----------------------- */
+  // Auto-enable reasoning and a 1 million token context window.
  if (body.include_reasoning === true && generationConfig.thinking !== true) {
    generationConfig.thinking = true;
    generationConfig.thinking_budget ??= 2048;
  }
-  generationConfig.maxInputTokens ??= 1_000_000; // lift context cap
-  const geminiReq = {
-    contents: [{ role: 'user', parts }],
-    generationConfig,
-    stream: body.stream,
-  };
-  /* ---- Tool / function mapping ----------------------------------- */
-  const tools = new ToolRegistry({} as any);
+  generationConfig.maxInputTokens ??= 1_000_000; // Increase the context cap.
+  // Map tools and functions.
+  // Note: ToolRegistry expects a complex Config object that is not available
+  // here. Casting to `Config` is a necessary workaround.
+  const tools = new ToolRegistry({} as Config);
  if (body.functions?.length) {
-    const reg = tools as any;
-    body.functions.forEach((fn: any) =>
-      reg.registerTool(
-        fn.name,
-        {
-          title: fn.name,
-          description: fn.description ?? '',
-          inputSchema: z.object(fn.parameters?.properties ?? {}),
-        },
-        (args: unknown) => callLocalFunction(fn.name, args),
-      ),
-    );
+    for (const fn of body.functions) {
+      tools.registerTool({
+        name: fn.name,
+        displayName: fn.name,
+        description: fn.description ?? '',
+        schema: z.object((fn.parameters?.properties as ZodRawShape) ?? {}),
+        isOutputMarkdown: false,
+        canUpdateOutput: false,
+        validateToolParams: () => null,
+        getDescription: (params: unknown) =>
+          `Executing ${fn.name} with parameters: ` + JSON.stringify(params),
+        shouldConfirmExecute: () => Promise.resolve(false),
+        execute: () => callLocalFunction(),
+      } as Tool);
+    }
  }
-  return { geminiReq, tools };
+  return {
+    geminiReq: {
+      model: body.model,
+      contents,
+      generationConfig,
+      stream: body.stream,
+    } as GeminiRequestBody,
+    tools,
+  };
}
-/* ================================================================== */
-/*  Non-stream response: Gemini -> OpenAI                             */
-/* ================================================================== */
-export function mapResponse(gResp: any) {
-  const usage = gResp.usageMetadata ?? {};
+// ==================================================================
+// Response Mapper: Gemini -> OpenAI (Non-Streaming)
+// ==================================================================
+/**
+ * Maps a Gemini API response to the OpenAI format for non-streaming responses.
+ *
+ * @param gResp - The response from the Gemini API.
+ * @param body - The original OpenAI request body.
+ * @returns An OpenAI-compatible chat completion object.
+ */
+export function mapResponse(gResp: GeminiResponse, body: RequestBody) {
+  const usage = gResp.usageMetadata ?? {
+    promptTokens: 0,
+    candidatesTokens: 0,
+    totalTokens: 0,
+  };
  return {
    id: `chatcmpl-${Date.now()}`,
    object: 'chat.completion',
    created: Math.floor(Date.now() / 1000),
-    model: 'gemini-2.5-pro-latest',
+    model: body.model,
    choices: [
      {
        index: 0,
@@ -98,27 +156,32 @@ export function mapResponse(gResp: any) {
      },
    ],
    usage: {
-      prompt_tokens: usage.promptTokens ?? 0,
-      completion_tokens: usage.candidatesTokens ?? 0,
-      total_tokens: usage.totalTokens ?? 0,
+      prompt_tokens: usage.promptTokens,
+      completion_tokens: usage.candidatesTokens,
+      total_tokens: usage.totalTokens,
    },
  };
}
-/* ================================================================== */
-/*  Stream chunk mapper: Gemini -> OpenAI                             */
-/* ================================================================== */
-export function mapStreamChunk(chunk: any) {
+// ==================================================================
+// Stream Chunk Mapper: Gemini -> OpenAI
+// ==================================================================
+/**
+ * Maps a Gemini stream chunk to the OpenAI format.
+ *
+ * @param chunk - A chunk from the Gemini API stream.
+ * @returns An OpenAI-compatible stream chunk.
+ */
+export function mapStreamChunk(chunk: GeminiStreamChunk) {
  const part = chunk?.candidates?.[0]?.content?.parts?.[0] ?? {};
-  const delta: any = { role: 'assistant' };
+  const delta: { role: 'assistant', content?: string } = { role: 'assistant' };
  if (part.thought === true) {
-    delta.content = `<think>${part.text ?? ''}`; // ST renders grey bubble
+    // Wrap thought content in <think> tags for rendering.
+    delta.content = `<think>${part.text ?? ''}`;
  } else if (typeof part.text === 'string') {
    delta.content = part.text;
  }
-  return { choices: [ { delta, index: 0 } ] };
+  return { choices: [{ delta, index: 0 }] };
}
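The stream mapper's `<think>` handling is easy to verify in isolation; a self-contained sketch with minimal inline types standing in for the `GeminiStreamChunk` imported from `./types`:

```typescript
// Minimal stand-ins for the chunk shape consumed by mapStreamChunk.
interface StreamPart { text?: string, thought?: boolean }
interface StreamChunk {
  candidates?: { content?: { parts?: StreamPart[] } }[],
}

function mapStreamChunk(chunk: StreamChunk) {
  const part = chunk?.candidates?.[0]?.content?.parts?.[0] ?? {};
  const delta: { role: 'assistant', content?: string } = { role: 'assistant' };
  if (part.thought === true) {
    // Thought content is wrapped in <think> for the client to render.
    delta.content = `<think>${part.text ?? ''}`;
  } else if (typeof part.text === 'string') {
    delta.content = part.text;
  }
  return { choices: [{ delta, index: 0 }] };
}
```

Chunks with no parts produce a delta with no `content` at all, which OpenAI-style clients treat as an empty update.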

View File

@@ -1,3 +1,17 @@
/**
* @fileoverview This file provides a utility function for fetching a remote
* image and encoding it in base64.
*/
/**
* Fetches an image from a URL and returns
* its MIME type and base64-encoded data.
*
* @param url - The URL of the image to fetch.
* @returns A promise that resolves to an object containing the MIME type and
* base64-encoded image data.
* @throws Throws an error if the image fetch fails.
*/
export async function fetchAndEncode(url: string) {
const res = await fetch(url);
if (!res.ok) throw new Error(`Failed to fetch image: ${url}`);
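The rest of the function body is cut off in this view; presumably it converts the response bytes to base64 for the mapper's `inlineData` shape. A hedged sketch of that conversion step, independent of the network (`encodeInline` is hypothetical — the real `fetchAndEncode` derives both fields from a `fetch()` Response):

```typescript
// Encodes raw image bytes plus a MIME type into the { mimeType, data }
// shape the mapper pushes as inlineData.
function encodeInline(mimeType: string, bytes: Uint8Array) {
  return {
    mimeType,
    data: Buffer.from(bytes).toString('base64'),
  };
}
```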

View File

@@ -1,23 +1,56 @@
+/**
+ * @fileoverview This file sets up and runs the HTTP server that acts as a
+ * proxy between an OpenAI-compatible client and the Gemini API.
+ */
+import consola from 'consola';
import http from 'http';
-import { sendChat, sendChatStream } from './chatwrapper';
-import { mapRequest, mapResponse, mapStreamChunk } from './mapper';
+import { listModels, sendChat, sendChatStream } from './chatwrapper';
+import { mapRequest, mapResponse, mapStreamChunk } from './mapper.js';
+import { RequestBody, GeminiResponse, GeminiStreamChunk, Part } from './types';
import { config } from './config';
+import { isAuthorized, ensureOAuthCredentials } from './auth';
-/* ── basic config ─────────────────────────────────────────────────── */
+// ==================================================================
+// Server Configuration
+// ==================================================================
const PORT = config.PORT;
+const VERBOSE = config.VERBOSE;
-/* ── CORS helper ──────────────────────────────────────────────────── */
+// ==================================================================
+// Logger Setup
+// ==================================================================
+if (VERBOSE) {
+  consola.level = 5;
+  consola.info('Verbose logging enabled');
+}
+consola.info('Google CLI OpenAI API');
+// ==================================================================
+// HTTP Server Helpers
+// ==================================================================
+/**
+ * Sets CORS headers to allow cross-origin requests.
+ * @param res - The HTTP server response object.
+ */
function allowCors(res: http.ServerResponse) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Headers', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET,POST,OPTIONS');
}
-/* ── JSON body helper ─────────────────────────────────────────────── */
+/**
+ * Reads and parses a JSON request body.
+ * @param req - The HTTP incoming message object.
+ * @param res - The HTTP server response object.
+ * @returns A promise that resolves to the parsed request body
+ * or null if invalid.
+ */
function readJSON(
  req: http.IncomingMessage,
  res: http.ServerResponse,
-): Promise<any | null> {
+): Promise<RequestBody | null> {
  return new Promise((resolve) => {
    let data = '';
    req.on('data', (c) => (data += c));
@@ -30,87 +63,157 @@ function readJSON(
          error: { message: 'Request body is missing for POST request' },
        }),
      );
+      resolve(null);
+      return;
    }
-    return resolve(null);
+    resolve(null);
+    return;
  }
  try {
-    resolve(JSON.parse(data));
+    resolve(JSON.parse(data) as RequestBody);
  } catch {
-    res.writeHead(400, { 'Content-Type': 'application/json' }); // malformed JSON
+    // Handle malformed JSON.
+    res.writeHead(400, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: { message: 'Malformed JSON' } }));
    resolve(null);
+    return;
  }
  });
  });
}
-/* ── server ───────────────────────────────────────────────────────── */
-http
-  .createServer(async (req, res) => {
-    allowCors(res);
-    const url = new URL(req.url ?? '/', `http://${req.headers.host}`);
-    const pathname = url.pathname.replace(/\/$/, '') || '/';
-    console.log(`[proxy] ${req.method} ${url.pathname}`);
-    /* -------- pre-flight ---------- */
-    if (req.method === 'OPTIONS') {
-      res.writeHead(204).end();
-      return;
-    }
-    /* -------- /v1/models ---------- */
-    if (pathname === '/v1/models' || pathname === '/models') {
+// ==================================================================
+// Main Server Logic
+// ==================================================================
+ensureOAuthCredentials()
+  .then(() => {
+    http
+      .createServer(async (req, res) => {
+        allowCors(res);
+        const url = new URL(req.url ?? '/', `http://${req.headers.host}`);
+        const pathname = url.pathname.replace(/\/$/, '') || '/';
+        consola.info(`${req.method} ${url.pathname}`);
+        // Handle pre-flight CORS requests.
+        if (req.method === 'OPTIONS') {
res.writeHead(200, { 'Content-Type': 'application/json' }); res.writeHead(204).end();
res.end( return;
JSON.stringify({
data: [
{
id: 'gemini-2.5-pro',
object: 'model',
owned_by: 'google',
},
],
}),
);
return;
}
/* ---- /v1/chat/completions ---- */
if (
(pathname === '/chat/completions' ||
(pathname === '/v1/chat/completions' ) && req.method === 'POST')
) {
const body = await readJSON(req, res);
if (!body) return;
try {
const { geminiReq, tools } = await mapRequest(body);
if (body.stream) {
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
});
for await (const chunk of sendChatStream({ ...geminiReq, tools })) {
res.write(`data: ${JSON.stringify(mapStreamChunk(chunk))}\n\n`);
}
res.end('data: [DONE]\n\n');
} else {
const gResp = await sendChat({ ...geminiReq, tools });
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(mapResponse(gResp)));
} }
} catch (err: any) {
console.error('Proxy error ➜', err);
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: { message: err.message } }));
}
return;
}
/* ---- anything else ---------- */ if (pathname === '/') {
res.writeHead(404).end(); res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('Google CLI OpenAI API server is running......');
return;
}
if (!isAuthorized(req, res)) {
return;
}
// Route for listing available models.
if (pathname === '/v1/models' || pathname === '/models') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(
JSON.stringify({
data: listModels(),
}),
);
return;
}
// Route for chat completions.
if (
(pathname === '/chat/completions' ||
pathname === '/v1/chat/completions') &&
req.method === 'POST'
) {
const body = await readJSON(req, res);
if (!body) return;
try {
const { geminiReq, tools } = await mapRequest(body);
if (body.stream) {
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
});
for await (
const chunk of sendChatStream({ ...geminiReq, tools })) {
// Transform the chunk to match the expected stream format.
const transformedParts =
chunk.candidates?.[0]?.content?.parts?.map((part) => {
const transformedPart: Part = {
text: part.text,
thought: part.text?.startsWith?.('<think>') ?? false,
};
if (part.inlineData?.data) {
transformedPart.inlineData = {
mimeType: part.inlineData.mimeType ?? 'text/plain',
data: part.inlineData.data,
};
}
return transformedPart;
}) ?? [];
const streamChunk: GeminiStreamChunk = {
candidates: [
{
content: {
parts: transformedParts,
},
},
],
};
res.write(
`data: ${JSON.stringify(mapStreamChunk(streamChunk))}\n\n`,
);
}
res.end('data: [DONE]\n\n');
} else {
const gResp: GeminiResponse =
await sendChat({ ...geminiReq, tools });
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify(mapResponse(gResp, body)));
}
} catch (err) {
const error = err as Error;
consola.error('Proxy error ➜', error);
// Handle errors, sending them in the appropriate
// format for streaming or non-streaming responses.
if (body.stream && res.headersSent) {
res.write(
`data: ${JSON.stringify({
error: {
message: error.message,
type: 'error',
},
})}\n\n`,
);
res.end('data: [DONE]\n\n');
return;
} else {
res.writeHead(500, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: { message: error.message } }));
}
}
}
})
.listen(PORT, () => {
consola.info(`Listening on port :${PORT}`);
});
}) })
.listen(PORT, () => console.log(`OpenAI proxy on :${PORT}`)); .catch((err: unknown) => {
if (err instanceof Error) {
consola.error(err.message);
} else {
consola.error('An unknown error occurred during startup.');
}
});
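The streaming branch above does two things per chunk: it rewrites each Gemini part (flagging `<think>`-prefixed text as thought content and defaulting the inline-data MIME type), then frames the mapped result as a Server-Sent-Events message terminated by `data: [DONE]`. A self-contained sketch of both steps, assuming the helper names `toStreamPart` and `sseFrame` are illustrative rather than from the source:

```typescript
// A pared-down Part, matching the shape in src/types.ts.
interface Part {
  text?: string;
  thought?: boolean;
  inlineData?: { mimeType: string; data: string };
}

// Mirrors the per-part transformation in the streaming loop: text that
// begins with '<think>' is flagged as model "thought" content, and any
// inline data is carried over with a 'text/plain' MIME-type fallback.
function toStreamPart(part: {
  text?: string;
  inlineData?: { mimeType?: string; data?: string };
}): Part {
  const out: Part = {
    text: part.text,
    thought: part.text?.startsWith('<think>') ?? false,
  };
  if (part.inlineData?.data) {
    out.inlineData = {
      mimeType: part.inlineData.mimeType ?? 'text/plain',
      data: part.inlineData.data,
    };
  }
  return out;
}

// Each transformed chunk is then written as a Server-Sent-Events
// message, as the proxy's `res.write(...)` calls do.
function sseFrame(chunk: unknown): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

console.log(sseFrame({ parts: [toStreamPart({ text: '<think>plan' })] }));
```

Note that the `data: [DONE]\n\n` terminator is what lets OpenAI-compatible clients detect the end of the stream.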

src/types.ts (new file, 140 lines)

@@ -0,0 +1,140 @@
/**
* @fileoverview This file contains type definitions for the data structures
* used throughout the application, including request and response bodies for
* both the OpenAI and Gemini APIs.
*/
/**
* Represents a model available in the API.
*/
export interface Model {
/** The unique identifier for the model. */
id: string;
/** The type of object, always 'model'. */
object: 'model';
/** The owner of the model, always 'google'. */
owned_by: 'google';
}
/**
* Represents inline data, such as an image.
*/
interface InlineData {
/** The MIME type of the data (e.g., 'image/png'). */
mimeType: string;
/** The base64-encoded data. */
data: string;
}
/**
* Represents a part of a multi-part message.
*/
export interface Part {
/** The text content of the part. */
text?: string;
/** The inline data content of the part. */
inlineData?: InlineData;
/** A flag indicating if this part represents a thought process. */
thought?: boolean;
}
/**
* Represents a piece of content in a conversation.
*/
export interface Content {
/**
* The producer of the content. Must be either 'user' or 'model'.
*
   * Useful to set for multi-turn conversations; otherwise it can be
   * omitted. If the role is not specified, the SDK will determine it.
*/
role?: 'user' | 'model';
/** An array of parts that make up the content. */
parts: Part[];
}
/**
* Represents a function definition for tool use.
*/
interface FunctionDef {
/** The name of the function. */
name: string;
/** A description of the function. */
description?: string;
/** The parameters of the function, described as a JSON schema. */
parameters?: {
properties?: Record<string, unknown>,
};
}
/**
* Represents the body of an incoming OpenAI-compatible request.
*/
export interface RequestBody {
/** The model to use for the request. */
model: string;
/** A list of messages in the conversation history. */
messages: {
role: string,
content:
| string
| { type: string, image_url?: { url: string }, text?: string }[],
}[];
/** The sampling temperature. */
temperature?: number;
/** The maximum number of tokens to generate. */
max_tokens?: number;
/** The nucleus sampling probability. */
top_p?: number;
/** Additional generation configuration for the Gemini API. */
generationConfig?: Record<string, unknown>;
/** A flag to include reasoning/thoughts in the response. */
include_reasoning?: boolean;
/** A flag to indicate if the response should be streamed. */
stream?: boolean;
/** A list of functions the model can call. */
functions?: FunctionDef[];
}
/**
* Represents the request body for the Gemini API.
*/
export interface GeminiRequestBody {
/** The model to use. */
model?: string;
/** The content of the conversation. */
contents: Content[];
/** Configuration for the generation process. */
generationConfig: Record<string, unknown>;
/** Whether to stream the response. */
stream?: boolean;
}
/**
* Represents a non-streaming response from the Gemini API.
*/
export interface GeminiResponse {
/** The generated text content. */
text: string;
/** Metadata about token usage. */
usageMetadata?: {
/** The number of tokens in the prompt. */
promptTokens: number,
/** The number of tokens in the generated candidates. */
candidatesTokens: number,
/** The total number of tokens used. */
totalTokens: number,
};
}
/**
* Represents a chunk of a streaming response from the Gemini API.
*/
export interface GeminiStreamChunk {
/** A list of candidate responses. */
candidates?: {
content?: {
parts?: Part[],
},
}[];
}
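To make the two shapes concrete: an OpenAI-style `RequestBody` carries `messages` with roles like `user` and `assistant`, while Gemini `Content` only admits `user` and `model`, so the mapper must translate roles (per the "model and role translation" commit). A minimal sketch of that translation, assuming `assistant` maps to `model` and other roles collapse to `user` (the mapper's exact handling of e.g. `system` messages is not shown here):

```typescript
// Shapes reduced from src/types.ts (only the fields used here).
interface Part {
  text?: string;
}
interface Content {
  role?: 'user' | 'model';
  parts: Part[];
}

// An OpenAI-style chat request as accepted by the proxy.
const body = {
  model: 'gemini-2.5-pro',
  messages: [
    { role: 'user', content: 'Hello' },
    { role: 'assistant', content: 'Hi there!' },
  ],
  stream: false,
};

// Sketch of the role translation: OpenAI's 'assistant' becomes
// Gemini's 'model'; anything else is treated as 'user' here.
function toContent(msg: { role: string; content: string }): Content {
  return {
    role: msg.role === 'assistant' ? 'model' : 'user',
    parts: [{ text: msg.content }],
  };
}

const contents: Content[] = body.messages.map(toContent);
console.log(contents.map((c) => c.role).join(',')); // "user,model"
```

The resulting `contents` array is what would populate `GeminiRequestBody.contents` for a multi-turn conversation.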