feat(auth): add auto-generation of oauth credentials
Implement functionality to create the oauth_creds.json file from environment variables (ACCESS_TOKEN, REFRESH_TOKEN, EXPIRY_DATE) if the file is missing. Also update documentation, docker-compose, and build scripts to support this new feature.
206 README.md

@@ -1,76 +1,154 @@
# Gemini CLI OpenAI API Proxy

Serve **Google Gemini 2.5 Pro** (or Flash) through an **OpenAI-compatible API**.
Plug-and-play with clients that already speak OpenAI: SillyTavern, llama.cpp, LangChain, the VS Code *Cline* extension, etc.

This project provides a lightweight proxy server that translates OpenAI API requests to the Google Gemini API, using `@google/gemini-cli` for authentication and request handling.

---

## ✨ Features
* **OpenAI API Compatibility:** Acts as a drop-in replacement for services that use the OpenAI API format.
* **Google Gemini Integration:** Leverages the power of Google's Gemini models.
* **Authentication:** Uses `gemini-cli` for secure OAuth2 authentication with Google.
* **Docker Support:** Includes a `Dockerfile` and `docker-compose.yml` for easy containerized deployment.
* **Hugging Face Spaces Ready:** Can be deployed as a Hugging Face Space.

| Feature | Details | Notes |
|---------|---------|-------|
| `/v1/chat/completions` | Non-stream & stream (SSE) | Works with curl, SillyTavern, LangChain, … |
| Vision support | `image_url` → Gemini `inlineData` | |
| Function / tool calling | OpenAI "functions" → Gemini tool registry | |
| Reasoning / chain-of-thought | Sends `enable_thoughts: true`, streams `<think>` chunks | SillyTavern shows grey bubbles |
| 1M-token context | Proxy auto-lifts Gemini CLI's default 200k cap | |
| CORS | Enabled (`*`) by default | Ready for browser apps |
| Zero external deps | Node 22 + TypeScript only | No Express |
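As a sketch of the translation the proxy performs, here is a minimal OpenAI → Gemini message transform. The function name and the system-prompt folding convention are illustrative assumptions, not the project's actual code; the `contents`/`parts` shape is the standard Gemini API request format.

```typescript
// Illustrative OpenAI -> Gemini message mapping (not the real mapper.ts).

interface OpenAIMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface GeminiContent {
  role: "user" | "model";
  parts: { text: string }[];
}

export function toGeminiContents(messages: OpenAIMessage[]): GeminiContent[] {
  // Gemini's `contents` has no "system" role; one common convention is to
  // fold system prompts into the first user turn (the proxy may differ).
  const system = messages.filter((m) => m.role === "system").map((m) => m.content);
  const turns = messages.filter((m) => m.role !== "system");
  return turns.map((m, i) => ({
    role: m.role === "assistant" ? "model" : "user",
    parts: [
      {
        text:
          i === 0 && system.length > 0 && m.role === "user"
            ? `${system.join("\n")}\n\n${m.content}`
            : m.content,
      },
    ],
  }));
}
```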
---

## Prerequisites

Before you begin, ensure you have the following installed:

* [Node.js](https://nodejs.org/) (v18 or higher)
* [npm](https://www.npmjs.com/)
* [Docker](https://www.docker.com/) (for containerized deployment)
* [Git](https://git-scm.com/)

## Local Installation and Setup

1. **Clone the repository:**

   ```bash
   git clone https://github.com/your-username/gemini-cli-openai-api.git
   cd gemini-cli-openai-api
   ```
2. **Install project dependencies:**

   ```bash
   npm install
   ```

3. **Install the Gemini CLI and authenticate:**

   This step authenticates with your Google account and generates the necessary credentials.

   ```bash
   npm install -g @google/gemini-cli
   gemini auth login
   ```

   Follow the on-screen instructions to log in with your Google account. This creates a file at `~/.gemini/oauth_creds.json` containing your authentication tokens.

4. **Configure environment variables:**

   Create a `.env` file by copying the example file:

   ```bash
   cp .env.example .env
   ```

   Open the `.env` file and set the following variables:

   * `PORT`: The port the server will run on (default: `11434`).
   * `API_KEY`: A secret key to protect your API endpoint. Generate a strong random string for this.
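A minimal `.env` for local use might look like this (both values are placeholders):

```
PORT=11434
API_KEY=replace-with-a-long-random-string
```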
## Running the Project

### Development Mode

To run the server in development mode with hot-reloading:

```bash
git clone https://huggingface.co/engineofperplexity/gemini-openai-proxy
cd gemini-openai-proxy
npm ci       # install deps & ts-node
npm run dev
```

Or launch directly on port 11434:

```bash
npx ts-node src/server.ts
```

Optional env vars:

* `PORT=3000` - change listen port
* `GEMINI_API_KEY=<key>` - use your own key

The server will be accessible at `http://localhost:11434` (or the port you specified).

**Minimal curl test:**

```bash
curl -X POST http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-pro-latest",
    "messages": [{"role": "user", "content": "Hello Gemini!"}]
  }'
```

**SillyTavern settings:**

| Field | Value |
|-------|-------|
| API Base URL | `http://127.0.0.1:11434/v1` |
| Model | `gemini-2.5-pro-latest` |
| Streaming | On |
| Reasoning | On → grey `<think>` lines appear |
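With streaming on, the proxy emits OpenAI-style SSE chunks (`data: {...}` lines, terminated by `data: [DONE]`). A client-side sketch of pulling the text deltas, including the `<think>` chunks, out of a captured stream body; the helper name is illustrative:

```typescript
// Extract delta text from an OpenAI-style SSE response body.
export function extractDeltas(sseBody: string): string[] {
  const deltas: string[] = [];
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (typeof delta === "string") deltas.push(delta);
  }
  return deltas;
}
```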
### Production Mode

To build and run the server in production mode:

```bash
npm run build
npm start
```

## 🐳 Docker

```bash
# build once
docker build -t gemini-openai-proxy .

# run
docker run -p 11434:11434 \
  -e GEMINI_API_KEY=$GEMINI_API_KEY \
  gemini-openai-proxy
```

## 🗂 Project layout

```
src/
  server.ts      – minimalist HTTP server
  mapper.ts      – OpenAI ⇄ Gemini transforms
  chatwrapper.ts – thin wrapper around @google/genai
  remoteimage.ts – fetch + base64 for vision
package.json     – deps & scripts
Dockerfile
README.md
```

## 📜 License

MIT – free for personal & commercial use.
## Docker Deployment

### Using Docker Compose

The easiest way to deploy the project with Docker is to use the provided `docker-compose.yml` file.

1. **Authentication:**

   The Docker container needs access to your OAuth credentials. You have two options:

   * **Option A (recommended): Mount the credentials file.**
     Uncomment the `volumes` section in `docker-compose.yml` to mount your local `oauth_creds.json` file into the container.

     ```yaml
     volumes:
       - ~/.gemini/oauth_creds.json:/root/.gemini/oauth_creds.json
     ```

   * **Option B: Use environment variables.**
     If you cannot mount the file, set the `ACCESS_TOKEN`, `REFRESH_TOKEN`, and `EXPIRY_DATE` environment variables in `docker-compose.yml`. You can copy these values from your `~/.gemini/oauth_creds.json` file.

2. **Configure `docker-compose.yml`:**

   Open `docker-compose.yml` and set the `API_KEY` and other environment variables as needed.

3. **Start the container:**

   ```bash
   docker-compose up -d
   ```

The server will run on the port specified in the `ports` section of `docker-compose.yml` (e.g., `4343`).
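For Option B, the `environment` block of `docker-compose.yml` might look like this; all values are placeholders to be replaced with the corresponding fields from your `~/.gemini/oauth_creds.json` and your own API key:

```yaml
environment:
  - ACCESS_TOKEN=ya29.your-access-token
  - REFRESH_TOKEN=1//your-refresh-token
  - EXPIRY_DATE=1750000000000
  - API_KEY=your-secret-api-key
```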
### Building the Docker Image Manually

If you need to build the Docker image yourself:

```bash
docker build -t gemini-cli-openai-api .
```

Then run the container with the appropriate environment variables and volume mounts.
## Hugging Face Spaces Deployment

You can deploy this project as a Docker Space on Hugging Face.

1. **Create a new Space:**
   * Go to [huggingface.co/new-space](https://huggingface.co/new-space).
   * Choose a name for your Space.
   * Select "Docker" as the Space SDK.
   * Choose "From scratch".
   * Create the Space.

2. **Upload the project files:**
   * Upload all the project files (including the `Dockerfile`) to your new Hugging Face Space repository, either via the web interface or by cloning the Space's repository and pushing the files.

3. **Configure secrets:**
   * In your Space's settings, go to the "Secrets" section.
   * Add the following secrets. The values for the first three come from your `~/.gemini/oauth_creds.json` file.
     * `ACCESS_TOKEN`: Your Google OAuth access token.
     * `REFRESH_TOKEN`: Your Google OAuth refresh token.
     * `EXPIRY_DATE`: The expiry date of your access token.
     * `API_KEY`: The secret API key you want to use to protect your endpoint.
     * `PORT`: The port the application should run on inside the container (e.g., `7860`, the common default for Hugging Face Spaces).

4. **Update the Dockerfile (if necessary):**
   * The provided `Dockerfile` exposes port `4343`. If Hugging Face requires a different port (such as `7860`), update the `EXPOSE` instruction in the `Dockerfile`.

5. **Deploy:**
   * Hugging Face Spaces automatically builds and deploys your Docker container when you push changes to the repository. Check the "Logs" to monitor the build and deployment process.

Your Gemini-powered OpenAI proxy will now be running on your Hugging Face Space!