# Gemini CLI OpenAI API Proxy

This project provides a lightweight proxy server that translates OpenAI API requests to the Google Gemini API, utilizing `@google/gemini-cli` for authentication and request handling.
## Features

- **OpenAI API Compatibility**: Acts as a drop-in replacement for services that use the OpenAI API format.
- **Google Gemini Integration**: Leverages the power of Google's Gemini models.
- **Authentication**: Uses `gemini-cli` for secure OAuth2 authentication with Google.
- **Docker Support**: Includes a `Dockerfile` and `docker-compose.yml` for easy containerized deployment.
- **Hugging Face Spaces Ready**: Can be easily deployed as a Hugging Face Space.
## Prerequisites

Before you begin, ensure you have the following installed:

- Node.js and npm
- Docker and Docker Compose (optional, only needed for containerized deployment)
## Local Installation and Setup

1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/gemini-cli-openai-api.git
   cd gemini-cli-openai-api
   ```

2. Install project dependencies:

   ```bash
   npm install
   ```

3. Install the Gemini CLI and Authenticate:

   This is a crucial step to authenticate with your Google account and generate the necessary credentials.

   ```bash
   npm install -g @google/gemini-cli
   gemini auth login
   ```

   Follow the on-screen instructions to log in with your Google account. This will create a file at `~/.gemini/oauth_creds.json` containing your authentication tokens.

4. Configure Environment Variables:

   Create a `.env` file by copying the example file:

   ```bash
   cp .env.example .env
   ```

   Open the `.env` file and set the following variables:

   - `PORT`: The port the server will run on (default: `11434`).
   - `API_KEY`: A secret key to protect your API endpoint. You can generate a strong random string for this.
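For example, a filled-in `.env` might look like this (both values are placeholders):

```env
PORT=11434
API_KEY=replace-with-a-long-random-string
```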
## Running the Project

### Development Mode

To run the server in development mode with hot-reloading:

```bash
npm run dev
```

The server will be accessible at `http://localhost:11434` (or the port you specified).
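Once the server is up, you can exercise it with an OpenAI-style request. This sketch assumes the proxy serves the standard OpenAI chat-completions path and accepts the `API_KEY` as a Bearer token; the model name is also a placeholder, so adjust all three to this project's actual routes and models:

```bash
# Hypothetical request; path, auth scheme, and model name are assumptions.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "gemini-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```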
### Production Mode

To build and run the server in production mode:

```bash
npm run build
npm start
```
## Docker Deployment

### Using Docker Compose

The easiest way to deploy the project with Docker is by using the provided `docker-compose.yml` file.
1. Authentication:

   The Docker container needs access to your OAuth credentials. You have two options:

   - **Option A (Recommended): Mount the credentials file.** Uncomment the `volumes` section in `docker-compose.yml` to mount your local `oauth_creds.json` file into the container:

     ```yaml
     volumes:
       - ~/.gemini/oauth_creds.json:/root/.gemini/oauth_creds.json
     ```

   - **Option B: Use environment variables.** If you cannot mount the file, you can set the `ACCESS_TOKEN`, `REFRESH_TOKEN`, and `EXPIRY_DATE` environment variables in the `docker-compose.yml` file. You can get these values from your `~/.gemini/oauth_creds.json` file.
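For Option B, the corresponding `environment` section might look like this (all values are placeholders; the exact format of `EXPIRY_DATE` should match the value in your own `oauth_creds.json`):

```yaml
environment:
  - API_KEY=replace-with-a-long-random-string
  - ACCESS_TOKEN=your-access-token    # placeholder
  - REFRESH_TOKEN=your-refresh-token  # placeholder
  - EXPIRY_DATE=1700000000000         # placeholder
```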
2. Configure `docker-compose.yml`:

   Open `docker-compose.yml` and set the `API_KEY` and other environment variables as needed.

3. Start the container:

   ```bash
   docker-compose up -d
   ```

   The server will be running on the port specified in the `ports` section of the `docker-compose.yml` file (e.g., `4343`).
### Building the Docker Image Manually

If you need to build the Docker image yourself:

```bash
docker build -t gemini-cli-openai-api .
```

Then you can run the container with the appropriate environment variables and volume mounts.
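As a sketch, a manual run combining Option A's credential mount with the `API_KEY` variable could look like this (the host port mapping assumes the image's default port `4343`; adjust both values to your configuration):

```bash
docker run -d \
  -p 4343:4343 \
  -e API_KEY=replace-with-a-long-random-string \
  -v ~/.gemini/oauth_creds.json:/root/.gemini/oauth_creds.json \
  gemini-cli-openai-api
```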
## Hugging Face Spaces Deployment

You can deploy this project as a Docker Space on Hugging Face.

1. Create a new Space:

   - Go to huggingface.co/new-space.
   - Choose a name for your space.
   - Select "Docker" as the Space SDK.
   - Choose "From scratch".
   - Create the space.
2. Upload the project files:

   - Upload all the project files (including the `Dockerfile`) to your new Hugging Face Space repository. You can do this via the web interface or by cloning the space's repository and pushing the files.
3. Configure Secrets:

   - In your Space's settings, go to the "Secrets" section.
   - Add the following secrets. You can get the values for the first three from your `~/.gemini/oauth_creds.json` file.
     - `ACCESS_TOKEN`: Your Google OAuth access token.
     - `REFRESH_TOKEN`: Your Google OAuth refresh token.
     - `EXPIRY_DATE`: The expiry date of your access token.
     - `API_KEY`: The secret API key you want to use to protect your endpoint.
     - `PORT`: The port the application should run on inside the container (e.g., `7860`, which is a common default for Hugging Face Spaces).
4. Update the `Dockerfile` (if necessary):

   - The provided `Dockerfile` exposes port `4343`. If Hugging Face requires a different port (like `7860`), you may need to update the `EXPOSE` instruction in the `Dockerfile`.
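A sketch of that change, assuming the server reads its port from the `PORT` environment variable as described in the setup section:

```dockerfile
# Hugging Face Spaces conventionally expects port 7860
EXPOSE 7860
ENV PORT=7860
```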
5. Deploy:

   - Hugging Face Spaces will automatically build and deploy your Docker container when you push changes to the repository. Check the "Logs" to monitor the build and deployment process.

Your Gemini-powered OpenAI proxy will now be running on your Hugging Face Space!