
Gemini CLI OpenAI API Proxy

This project provides a lightweight proxy server that translates OpenAI API requests into Google Gemini API calls, using @google/gemini-cli for authentication and request handling.

Features

  • OpenAI API Compatibility: Acts as a drop-in replacement for services that use the OpenAI API format.
  • Google Gemini Integration: Leverages the power of Google's Gemini models.
  • Authentication: Uses gemini-cli for secure OAuth2 authentication with Google.
  • Docker Support: Includes Dockerfile and docker-compose.yml for easy containerized deployment.
  • Hugging Face Spaces Ready: Can be easily deployed as a Hugging Face Space.

Support the Project

If you find this project useful, consider supporting its development:

Donate using Liberapay

Prerequisites

Before you begin, ensure you have the following installed:

  • Node.js and npm
  • Docker and Docker Compose (optional, for containerized deployment)

Local Installation and Setup

  1. Clone the repository:

    git clone https://github.com/your-username/gemini-cli-openai-api.git
    cd gemini-cli-openai-api
    
  2. Install project dependencies:

    npm install
    
  3. Install the Gemini CLI and Authenticate:

    This is a crucial step to authenticate with your Google account and generate the necessary credentials.

    npm install -g @google/gemini-cli
    gemini auth login
    

    Follow the on-screen instructions to log in with your Google account. This will create a file at ~/.gemini/oauth_creds.json containing your authentication tokens.
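    For reference, the generated credentials file typically contains fields along these lines (the exact schema is determined by gemini-cli, and the values below are illustrative placeholders — these are the same values used for the ACCESS_TOKEN, REFRESH_TOKEN, and EXPIRY_DATE settings in the Docker and Hugging Face deployment options):

    ```json
    {
      "access_token": "<your-access-token>",
      "refresh_token": "<your-refresh-token>",
      "expiry_date": 1750000000000
    }
    ```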

  4. Configure Environment Variables:

    Create a .env file by copying the example file:

    cp .env.example .env
    

    Open the .env file and set the following variables:

    • PORT: The port the server will run on (default: 11434).
    • API_KEY: A secret key to protect your API endpoint. You can generate a strong random string for this.
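The configuration above can be sketched as a single command — this writes a minimal .env with the default port and a randomly generated key (a sketch that assumes openssl is available; adjust PORT as needed):

```shell
# Write a minimal .env: default port plus a randomly generated 64-character API key.
cat > .env <<EOF
PORT=11434
API_KEY=$(openssl rand -hex 32)
EOF

# Show the result.
cat .env
```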

Running the Project

Development Mode

To run the server in development mode with hot-reloading:

npm run dev

The server will be accessible at http://localhost:11434 (or the port you specified).
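Once the server is running, you can point any OpenAI-compatible client at it. A quick smoke test with curl might look like this (the /v1/chat/completions path and Bearer-token auth follow the OpenAI API convention; the model name is illustrative and should be one exposed by your gemini-cli setup):

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```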

Production Mode

To build and run the server in production mode:

npm run build
npm start

Docker Deployment

Using Docker Compose

The easiest way to deploy the project with Docker is by using the provided docker-compose.yml file.

  1. Authentication:

    The Docker container needs access to your OAuth credentials. You have two options:

    • Option A (Recommended): Mount the credentials file. Uncomment the volumes section in docker-compose.yml to mount your local oauth_creds.json file into the container.

      volumes:
        - ~/.gemini/oauth_creds.json:/root/.gemini/oauth_creds.json
      
    • Option B: Use environment variables. If you cannot mount the file, you can set the ACCESS_TOKEN, REFRESH_TOKEN, and EXPIRY_DATE environment variables in the docker-compose.yml file. You can get these values from your ~/.gemini/oauth_creds.json file.
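      For Option B, the corresponding environment section might look like this (placeholder values — copy the real ones from your ~/.gemini/oauth_creds.json):

      ```yaml
      environment:
        - API_KEY=your-secret-api-key
        - ACCESS_TOKEN=your-access-token
        - REFRESH_TOKEN=your-refresh-token
        - EXPIRY_DATE=your-expiry-date
      ```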

  2. Configure docker-compose.yml:

    Open docker-compose.yml and set the API_KEY and other environment variables as needed.

  3. Start the container:

    docker-compose up -d
    

    The server will be running on the port specified in the ports section of the docker-compose.yml file (e.g., 4343).

Building the Docker Image Manually

If you need to build the Docker image yourself:

docker build -t gemini-cli-openai-api .

Then you can run the container with the appropriate environment variables and volume mounts.
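For example, a manual run mirroring the Docker Compose setup might look like this (port 4343 and the credentials path follow the defaults described above; adjust to your environment):

```shell
docker run -d \
  -p 4343:4343 \
  -e API_KEY=your-secret-api-key \
  -v ~/.gemini/oauth_creds.json:/root/.gemini/oauth_creds.json \
  gemini-cli-openai-api
```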

Hugging Face Spaces Deployment

You can deploy this project as a Docker Space on Hugging Face.

  1. Create a new Space:

    • Go to huggingface.co/new-space.
    • Choose a name for your space.
    • Select "Docker" as the Space SDK.
    • Choose "From scratch".
    • Create the space.
  2. Upload the project files:

    • Upload all the project files (including the Dockerfile) to your new Hugging Face Space repository. You can do this via the web interface or by cloning the space's repository and pushing the files.
  3. Configure Secrets:

    • In your Space's settings, go to the "Secrets" section.
    • Add the following secrets. You can get the values for the first three from your ~/.gemini/oauth_creds.json file.
      • ACCESS_TOKEN: Your Google OAuth access token.
      • REFRESH_TOKEN: Your Google OAuth refresh token.
      • EXPIRY_DATE: The expiry date of your access token.
      • API_KEY: The secret API key you want to use to protect your endpoint.
      • PORT: The port the application should run on inside the container (e.g., 7860, which is a common default for Hugging Face Spaces).
  4. Update Dockerfile (if necessary):

    • The provided Dockerfile exposes port 4343. Hugging Face Spaces typically expect the application to listen on port 7860, so set the PORT secret accordingly; you may also need to update the EXPOSE instruction in the Dockerfile to match.
  5. Deploy:

    • Hugging Face Spaces will automatically build and deploy your Docker container when you push changes to the repository. Check the "Logs" to monitor the build and deployment process.

Your Gemini-powered OpenAI proxy will now be running on your Hugging Face Space!
