Added PostgreSQL RDS database. Added protected channels endpoints. Added scripts and Docker config to run the application locally in dev mode.
Some checks failed
AWS Deploy on Push / build (push) Failing after 41s

2025-05-21 14:02:01 -05:00
parent b947ac67f0
commit 489281f3eb
18 changed files with 409 additions and 125 deletions


@@ -1,3 +1,11 @@
# For use with Docker Compose to run application locally
MOCK_AUTH=true/false
DB_USER=MyDBUser
DB_PASSWORD=MyDBPassword
DB_HOST=MyDBHost
DB_NAME=iptv_updater
FREEDNS_User=MyFreeDNSUsername
FREEDNS_Password=MyFreeDNSPassword
DOMAIN_NAME=mydomain.com
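The `.env` values above are plain `KEY=VALUE` lines consumed by the deploy scripts and Docker Compose. As a standalone sketch (the `load_dotenv` helper below is hypothetical, not code from this repository), parsing such a file needs nothing beyond the standard library:

```python
import os

def load_dotenv(path=".env"):
    """Parse KEY=VALUE lines, skipping blanks and comments, and export
    them into os.environ without overriding values already set."""
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
            os.environ.setdefault(key.strip(), value.strip())
    return values

# Demo against a throwaway file shaped like the example above
with open("demo.env", "w") as fh:
    fh.write("# local dev settings\nMOCK_AUTH=true\nDB_NAME=iptv_updater\n")
values = load_dotenv("demo.env")
```

`os.environ.setdefault` keeps real environment variables (e.g. ones set by CI) winning over file values, matching the usual dotenv convention.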

.gitignore

@@ -4,10 +4,16 @@ __pycache__
.pytest_cache
.env
.venv
*.pid
*.log
*.egg-info
.coverage
.roomodes
cdk.out/
node_modules/
data/
.roo/
.ruru/
# CDK asset staging directory
.cdk.staging

.vscode/settings.json

@@ -2,8 +2,10 @@
"cSpell.words": [
"adminpassword",
"altinstall",
"autoflush",
"awscliv",
"boto",
"BURSTABLE",
"cabletv",
"certbot",
"certifi",
@@ -17,9 +19,17 @@
"iptv",
"LETSENCRYPT",
"nohup",
"onupdate",
"passlib",
"psycopg",
"pycache",
"pyjwt",
"pytest",
"reloadcmd",
"roomodes",
"ruru",
"sessionmaker",
"sqlalchemy",
"starlette",
"stefano",
"uvicorn",

README.md

@@ -1,133 +1,44 @@
# IPTV Management System

**Status**: Actively in development ⚠️

A modern IPTV management system that leverages AWS Cognito for secure user authentication and provides tools for EPG generation, stream validation, and cloud deployment via AWS CDK. It automatically provisions infrastructure, including an EC2 instance running the application behind Nginx, and uses `acme.sh` with FreeDNS for automated SSL certificate management.

## Key Features

**Implemented**

- **User Authentication**:
  - AWS Cognito integration for secure user sign-in
  - JWT token generation & validation
  - Role-based access control (RBAC) with the [`require_roles`](app/auth/dependencies.py) decorator
    (_Endpoints include both general and admin-protected routes_)
- **Stream & EPG Management**:
  - EPG generation from M3U8 playlists ([`app/iptv/createEpg.py`](app/iptv/createEpg.py))
  - Playlist creation utility ([`app/iptv/createPlaylist.py`](app/iptv/createPlaylist.py))
  - Stream validation tooling ([`app/utils/check_streams.py`](app/utils/check_streams.py))
- **Deployment & Infrastructure**:
  - Infrastructure provisioning using AWS CDK ([`app.py`](app.py), [`infrastructure/stack.py`](infrastructure/stack.py))
  - Automated SSL certificate provisioning using `acme.sh` and the FreeDNS DNS API
  - Nginx configured as a reverse proxy with SSL termination
  - Deployment scripts to deploy/destroy the stack and update running instances ([`scripts/deploy.sh`](scripts/deploy.sh), [`scripts/destroy.sh`](scripts/destroy.sh))
  - Environment configuration driven by a `.env` file ([`.env`](.env), [`.env.example`](.env.example))
  - Gitea Actions workflow for automated deployment on push ([`.gitea/workflows/aws_deploy_on_push.yml`](.gitea/workflows/aws_deploy_on_push.yml)), compatible with GitHub Actions after minor changes

🛠️ **In Progress**

- User management interface and additional API endpoints
- Automated EPG updates and playlist management endpoints
- Refresh token implementation and enhanced security features
- Comprehensive API documentation

## Installation & Deployment

### Prerequisites

- An AWS account and configured AWS CLI credentials
- Node.js and npm installed (for AWS CDK)
- Python 3.8+ and pip installed
- `uv` installed (`pip install uv`)
- A domain name hosted on FreeDNS
- FreeDNS API credentials (username and password)
- An email address for Let's Encrypt registration
- An SSH public key to access the EC2 instance

### Local Setup

1. **Clone the repository:**

   ```bash
   git clone [repository-url]
   cd iptv-updater-aws
   ```

2. **Set up the virtual environment:**

   ```bash
   uv venv .venv
   source .venv/bin/activate
   uv pip install -r requirements.txt
   ```

3. **Configure environment variables:**

   Copy [.env.example](.env.example) to `.env` and update the credentials and domain information. You will need to provide:

   - `FREEDNS_User`: Your FreeDNS username.
   - `FREEDNS_Password`: Your FreeDNS password.
   - `DOMAIN_NAME`: Your domain name registered with FreeDNS.
   - `SSH_PUBLIC_KEY`: Your SSH public key string.
   - `REPO_URL`: The URL of this git repository.
   - `LETSENCRYPT_EMAIL`: The email address for Let's Encrypt notifications.

### Deploying Infrastructure

The project uses AWS CDK to provision the required AWS resources.

1. **Install dependencies and CDK globally:**

   ```bash
   ./install.sh
   ```

2. **Deploy the stack:**

   ```bash
   ./scripts/deploy.sh
   ```

   This script reads variables from your `.env` file, synthesizes the CDK stack, deploys it to AWS, and then uses AWS SSM to update the application code on the newly created EC2 instance. The EC2 instance's userdata script handles the installation of dependencies, Nginx, `acme.sh`, and the initial certificate provisioning using the FreeDNS API credentials passed via environment variables.

3. **Update the application on running instances:**

   The deployment script ([`scripts/deploy.sh`](scripts/deploy.sh)) automatically updates the application code on running instances after the initial deployment. You can re-run this script to pull the latest code and restart the service without destroying and recreating the infrastructure.

4. **Destroy the stack:**

   ```bash
   ./scripts/destroy.sh
   ```

   This script reads variables from your `.env` file and destroys all resources created by the CDK stack.

### Automated Deployment (Gitea Actions)

The repository includes a Gitea Actions workflow at [`.gitea/workflows/aws_deploy_on_push.yml`](.gitea/workflows/aws_deploy_on_push.yml). It is triggered on pushes to the `main` branch and automates the deployment process using AWS CDK and SSM. The workflow is largely compatible with GitHub Actions with minimal modifications.

To use the automated deployment:

1. Configure the required secrets (`AWS_ACCESS_KEY`, `AWS_SECRET_KEY`, `FREEDNS_USER`, `FREEDNS_PASSWORD`, `DOMAIN_NAME`, `SSH_PUBLIC_KEY`, `REPO_URL`, `LETSENCRYPT_EMAIL`) in your Gitea repository settings.
2. Push changes to the `main` branch.

## Usage

- **API Endpoints**: The application is accessible via HTTPS on your configured domain name.
  - Sign-in: `/signin`
  - Protected endpoints: `/protected` and `/protected_admin`
- **EPG & Playlist Generation**:
  - Create playlists using [`app/iptv/createPlaylist.py`](app/iptv/createPlaylist.py)
  - Generate EPG data using [`app/iptv/createEpg.py`](app/iptv/createEpg.py)
- **Stream Validation**:
  - Validate stream URLs using the utility ([`app/utils/check_streams.py`](app/utils/check_streams.py))

## Notes

- This project is under active development. Expect additional functionality and improvements in upcoming releases.
- For deployment details and troubleshooting, refer to the deployment scripts and AWS CDK documentation.
- Ensure your FreeDNS API credentials and domain name are correctly configured in the `.env` file for `acme.sh` to function correctly.

(Removed: previous Roo Commander Build README)

# Roo Commander Build - v{BUILD_VERSION} ({BUILD_CODENAME})
**Build Date:** {BUILD_DATE}
## Overview
This archive contains the configuration files for Roo Commander, a system designed to enhance AI-assisted software development within VS Code.
## Installation
1. **Ensure you are in your desired VS Code workspace root directory.** This is the top-level folder of the project you want Roo Commander to assist with.
2. **Extract the contents of this zip archive directly into your workspace root.**
This will create/overwrite the following hidden directories and files:
* `.ruru/modes/` (Contains all mode definitions)
* `.ruru/processes/` (Contains standard process definitions)
* `.roo/` (Contains Roo Commander specific rules and configurations)
* `.ruru/templates/` (Contains templates for various artifacts)
* `.ruru/workflows/` (Contains workflow definitions)
* `.ruru/archive/` (Empty placeholder)
* `.ruru/context/` (Empty placeholder)
* `.ruru/decisions/` (Empty placeholder)
* `.ruru/docs/` (Empty placeholder)
* `.ruru/ideas/` (Empty placeholder)
* `.ruru/logs/` (Empty placeholder)
* `.ruru/planning/` (Empty placeholder)
* `.ruru/reports/` (Empty placeholder)
* `.ruru/snippets/` (Empty placeholder)
* `.ruru/tasks/` (Empty placeholder)
* `build_mode_summary.js`
* `build_roomodes.js`
* `LICENSE`
* `.roomodes`
**Important:** Extracting these files may overwrite existing configurations if you have previously set up Roo Commander.
Once extracted, Roo Commander should be active within your VS Code workspace (you might need to reload the window). You can interact with it via the chat interface.
## Changelog
Please refer to `CHANGELOG.md` (included in this archive) for details on what's new in this version.
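The EPG and playlist tooling above consumes M3U8 playlists. As a standalone illustration of the `#EXTINF` attribute format such tooling typically parses (this `parse_extinf` sketch is not code from this repository, and the sample URL is made up):

```python
import re

# Matches quoted attributes such as tvg-id="..." or group-title="..."
EXTINF_ATTRS = re.compile(r'([\w-]+)="([^"]*)"')

def parse_extinf(line):
    """Pull the quoted tvg-*/group-title attributes plus the display name
    (the text after the final comma) out of an #EXTINF line."""
    attrs = dict(EXTINF_ATTRS.findall(line))
    attrs["name"] = line.rsplit(",", 1)[-1].strip()
    return attrs

entry = parse_extinf(
    '#EXTINF:-1 tvg-id="news.hd" tvg-name="News HD" '
    'tvg-logo="https://example.com/logo.png" group-title="News",News HD'
)
```

Splitting on the *last* comma matters because attribute values (logo URLs in particular) may themselves contain commas.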


@@ -1,12 +1,18 @@
from functools import wraps
from typing import Callable
import os
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from app.models.auth import CognitoUser

# Use mock auth for local testing if MOCK_AUTH is set
if os.getenv("MOCK_AUTH", "").lower() == "true":
    from app.auth.mock_auth import mock_get_user_from_token as get_user_from_token
else:
    from app.auth.cognito import get_user_from_token

oauth2_scheme = OAuth2PasswordBearer(
    tokenUrl="signin",
    scheme_name="Bearer"
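The hunk imports `require_roles` but does not show its body. A plausible standalone sketch of such a role-checking decorator (the `AccessDenied` and `User` stand-ins below replace FastAPI's `HTTPException` and `CognitoUser`; the structure is an assumption, not the repository's actual implementation):

```python
from functools import wraps

class AccessDenied(Exception):
    """Stand-in for FastAPI's HTTPException(403) in this sketch."""

class User:
    """Stand-in for CognitoUser: just a username and a role list."""
    def __init__(self, username, roles):
        self.username = username
        self.roles = roles

def require_roles(*required):
    """Reject the call unless the injected user holds every required role."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, user=None, **kwargs):
            if user is None or not set(required).issubset(user.roles):
                raise AccessDenied("Insufficient permissions")
            return func(*args, user=user, **kwargs)
        return wrapper
    return decorator

@require_roles("admin")
def protected_admin_endpoint(user=None):
    return {"message": f"Hello {user.username}, you have admin privileges!"}
```

`functools.wraps` preserves the endpoint's name and docstring, which FastAPI relies on when generating OpenAPI docs.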

app/auth/mock_auth.py

@@ -0,0 +1,32 @@
from fastapi import HTTPException, status
from app.models.auth import CognitoUser

MOCK_USERS = {
    "testuser": {
        "username": "testuser",
        "roles": ["admin"]
    }
}

def mock_get_user_from_token(token: str) -> CognitoUser:
    """
    Mock version of get_user_from_token for local testing.
    Accepts 'testuser' as a valid token and returns an admin user.
    """
    if token == "testuser":
        return CognitoUser(**MOCK_USERS["testuser"])
    raise HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Invalid mock token - use 'testuser'"
    )

def mock_initiate_auth(username: str, password: str) -> dict:
    """
    Mock version of initiate_auth for local testing.
    Accepts any username/password and returns a mock token.
    """
    return {
        "AccessToken": "testuser",
        "ExpiresIn": 3600,
        "TokenType": "Bearer"
    }
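The mock flow is self-consistent: the token `mock_initiate_auth` returns is exactly the string `mock_get_user_from_token` accepts. Stripped of the FastAPI and Cognito types (plain dicts and `ValueError` stand in for `CognitoUser` and `HTTPException` in this sketch), the round trip looks like:

```python
MOCK_USERS = {"testuser": {"username": "testuser", "roles": ["admin"]}}

def mock_initiate_auth(username, password):
    # Any credentials are accepted; the access token is the fixed string "testuser"
    return {"AccessToken": "testuser", "ExpiresIn": 3600, "TokenType": "Bearer"}

def mock_get_user_from_token(token):
    if token in MOCK_USERS:
        return MOCK_USERS[token]
    raise ValueError("Invalid mock token - use 'testuser'")

# Round trip: sign in, then resolve the returned token back to a user
token = mock_initiate_auth("anyone", "anything")["AccessToken"]
user = mock_get_user_from_token(token)
```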


@@ -1,10 +1,15 @@
from fastapi.security import OAuth2PasswordBearer
import uvicorn
from fastapi import FastAPI, Depends, HTTPException, status, Security
from fastapi.responses import RedirectResponse
from sqlalchemy.orm import Session
from typing import List

from app.auth.cognito import initiate_auth
from app.auth.dependencies import get_current_user, require_roles
from app.models.auth import CognitoUser, SigninRequest, TokenResponse
from app.models import ChannelDB, ChannelCreate, ChannelResponse
from app.utils.database import get_db
@@ -90,3 +95,82 @@ def protected_admin_endpoint(user: CognitoUser = Depends(get_current_user)):
    If the user has 'admin' role, returns success message.
    """
    return {"message": f"Hello {user.username}, you have admin privileges!"}

# Channel CRUD Endpoints

@app.post("/channels", response_model=ChannelResponse, status_code=status.HTTP_201_CREATED)
@require_roles("admin")
def create_channel(
    channel: ChannelCreate,
    db: Session = Depends(get_db),
    user: CognitoUser = Depends(get_current_user)
):
    """Create a new channel"""
    db_channel = ChannelDB(**channel.model_dump())
    db.add(db_channel)
    db.commit()
    db.refresh(db_channel)
    return db_channel

@app.get("/channels/{tvg_id}", response_model=ChannelResponse)
def get_channel(
    tvg_id: str,
    db: Session = Depends(get_db)
):
    """Get a channel by tvg_id"""
    channel = db.query(ChannelDB).filter(ChannelDB.tvg_id == tvg_id).first()
    if not channel:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="Channel not found"
        )
    return channel

@app.put("/channels/{tvg_id}", response_model=ChannelResponse)
@require_roles("admin")
def update_channel(
    tvg_id: str,
    channel: ChannelCreate,
    db: Session = Depends(get_db),
    user: CognitoUser = Depends(get_current_user)
):
    """Update a channel"""
    db_channel = db.query(ChannelDB).filter(ChannelDB.tvg_id == tvg_id).first()
    if not db_channel:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="Channel not found"
        )
    for key, value in channel.model_dump().items():
        setattr(db_channel, key, value)
    db.commit()
    db.refresh(db_channel)
    return db_channel

@app.delete("/channels/{tvg_id}", status_code=status.HTTP_204_NO_CONTENT)
@require_roles("admin")
def delete_channel(
    tvg_id: str,
    db: Session = Depends(get_db),
    user: CognitoUser = Depends(get_current_user)
):
    """Delete a channel"""
    channel = db.query(ChannelDB).filter(ChannelDB.tvg_id == tvg_id).first()
    if not channel:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="Channel not found"
        )
    db.delete(channel)
    db.commit()
    return None

@app.get("/channels", response_model=List[ChannelResponse])
def list_channels(
    skip: int = 0,
    limit: int = 100,
    db: Session = Depends(get_db)
):
    """List all channels with pagination"""
    return db.query(ChannelDB).offset(skip).limit(limit).all()
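`list_channels` paginates with SQL `OFFSET`/`LIMIT` via `skip` and `limit`. The same semantics on a plain Python list (a sketch for illustration, not code from this repository):

```python
def paginate(items, skip=0, limit=100):
    """List-slice analogue of the OFFSET/LIMIT query in list_channels."""
    return items[skip:skip + limit]

channels = [f"channel-{i}" for i in range(250)]
first_page = paginate(channels)            # items 0..99
last_page = paginate(channels, skip=200)   # items 200..249 (a short final page)
```

Note that offset pagination rescans skipped rows on each request, so for very large tables keyset pagination (filtering on the last-seen `tvg_id`) tends to scale better.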


@@ -0,0 +1,4 @@
from .db import Base, ChannelDB
from .schemas import ChannelCreate, ChannelResponse

__all__ = ["Base", "ChannelDB", "ChannelCreate", "ChannelResponse"]

app/models/db.py

@@ -0,0 +1,18 @@
from datetime import datetime, timezone
from sqlalchemy import Column, String, JSON, DateTime
from sqlalchemy.orm import declarative_base  # 2.0-style import; sqlalchemy.ext.declarative is deprecated

Base = declarative_base()

class ChannelDB(Base):
    """SQLAlchemy model for IPTV channels"""
    __tablename__ = "channels"

    tvg_id = Column(String, primary_key=True)
    name = Column(String, nullable=False)
    group_title = Column(String)
    tvg_name = Column(String)
    tvg_logo = Column(String)
    urls = Column(JSON)  # Stores list of URLs as JSON
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    updated_at = Column(
        DateTime,
        default=lambda: datetime.now(timezone.utc),
        onupdate=lambda: datetime.now(timezone.utc)
    )
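The timestamp columns pass zero-argument lambdas rather than calling `datetime.now(timezone.utc)` directly: a bare call would be evaluated once at import time and that single frozen value would be reused for every row, whereas a callable is re-evaluated on each INSERT (and, for `onupdate`, each UPDATE). A standalone illustration of the difference:

```python
from datetime import datetime, timezone

# Evaluated once, at import time - every row would share this stale value
FROZEN = datetime.now(timezone.utc)

# A zero-argument callable is evaluated fresh on each INSERT/UPDATE instead
now_utc = lambda: datetime.now(timezone.utc)

a = now_utc()
b = now_utc()  # a later, independent timestamp
```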

app/models/schemas.py

@@ -0,0 +1,17 @@
from datetime import datetime
from typing import List

from pydantic import BaseModel, ConfigDict

class ChannelCreate(BaseModel):
    """Pydantic model for creating channels"""
    urls: List[str]
    name: str
    group_title: str
    tvg_id: str
    tvg_logo: str
    tvg_name: str

class ChannelResponse(ChannelCreate):
    """Pydantic model for channel responses"""
    # Needed so response_model can serialize SQLAlchemy ORM objects
    model_config = ConfigDict(from_attributes=True)

    created_at: datetime
    updated_at: datetime

app/utils/database.py

@@ -0,0 +1,25 @@
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DATABASE_URL = (
    f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASSWORD')}"
    f"@{os.getenv('DB_HOST')}/{os.getenv('DB_NAME')}"
)

engine = create_engine(DATABASE_URL)

# Create all tables on import
from app.models import Base
Base.metadata.create_all(bind=engine)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

def get_db():
    """Dependency for getting database session"""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
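`get_db` is a generator dependency: FastAPI runs the code up to `yield` before handling the request and the `finally` block afterwards, so the session is closed even when the handler raises. The mechanics, driven by hand with a stand-in session object (a sketch, not repository code):

```python
class FakeSession:
    """Stand-in for a SQLAlchemy session, tracking whether close() ran."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

def get_db(session):
    # Same shape as the FastAPI dependency: yield the session, always close it
    try:
        yield session
    finally:
        session.close()

s = FakeSession()
gen = get_db(s)
db = next(gen)  # runs up to yield: the "handler" now holds the session
gen.close()     # request finished (or raised): GeneratorExit fires the finally block
```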

docker/Dockerfile

@@ -0,0 +1,28 @@
# Use official Python image
FROM python:3.9-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set work directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose the port the app runs on
EXPOSE 8000

# Command to run the application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]


@@ -0,0 +1,16 @@
version: '3.8'

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: iptv_updater
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:


@@ -0,0 +1,32 @@
version: '3.8'

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: iptv_updater
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

  app:
    build:
      context: ..
      dockerfile: docker/Dockerfile
    environment:
      DB_USER: postgres
      DB_PASSWORD: postgres
      DB_HOST: postgres
      DB_NAME: iptv_updater
      MOCK_AUTH: "true"
    ports:
      - "8000:8000"
    depends_on:
      - postgres
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000

volumes:
  postgres_data:


@@ -6,6 +6,7 @@ from aws_cdk import (
    aws_ec2 as ec2,
    aws_iam as iam,
    aws_cognito as cognito,
    aws_rds as rds,
    CfnOutput
)
from constructs import Construct
@@ -181,10 +182,56 @@ class IptvUpdaterStack(Stack):
        )
        userdata.add_commands(str(userdata_file, 'utf-8'))

        # Create RDS Security Group
        rds_sg = ec2.SecurityGroup(
            self, "RdsSecurityGroup",
            vpc=vpc,
            description="Security group for RDS PostgreSQL"
        )
        rds_sg.add_ingress_rule(
            security_group,
            ec2.Port.tcp(5432),
            "Allow PostgreSQL access from EC2 instance"
        )

        # Create RDS PostgreSQL instance (free tier compatible - db.t3.micro)
        db = rds.DatabaseInstance(
            self, "IptvUpdaterDB",
            engine=rds.DatabaseInstanceEngine.postgres(
                version=rds.PostgresEngineVersion.VER_13
            ),
            instance_type=ec2.InstanceType.of(
                ec2.InstanceClass.BURSTABLE3,  # t3.micro, matching the free-tier comment above
                ec2.InstanceSize.MICRO
            ),
            vpc=vpc,
            security_groups=[rds_sg],
            allocated_storage=10,
            max_allocated_storage=10,
            database_name="iptv_updater",
            removal_policy=RemovalPolicy.DESTROY,
            deletion_protection=False,
            publicly_accessible=False
        )

        # Add RDS permissions to instance role
        role.add_managed_policy(
            iam.ManagedPolicy.from_aws_managed_policy_name(
                "AmazonRDSFullAccess"
            )
        )

        # Update instance with userdata and DB connection info
        userdata.add_commands(
            f'echo "DB_HOST={db.db_instance_endpoint_address}" >> /etc/environment',
            'echo "DB_NAME=iptv_updater" >> /etc/environment',
            f'echo "DB_USER={db.secret.secret_value_from_json("username").to_string()}" >> /etc/environment',
            f'echo "DB_PASSWORD={db.secret.secret_value_from_json("password").to_string()}" >> /etc/environment'
        )
        instance.add_user_data(userdata.render())

        # Outputs
        CfnOutput(self, "DBEndpoint", value=db.db_instance_endpoint_address)
        CfnOutput(self, "InstancePublicIP", value=eip.attr_public_ip)
        CfnOutput(self, "UserPoolId", value=user_pool.user_pool_id)
        CfnOutput(self, "UserPoolClientId", value=client.user_pool_client_id)


@@ -9,3 +9,5 @@ passlib[bcrypt]==1.7.4
boto3==1.28.0
starlette>=0.27.0
pyjwt==2.7.0
sqlalchemy==2.0.23
psycopg2-binary==2.9.9

scripts/start_local_dev.sh (executable)

@@ -0,0 +1,19 @@
#!/bin/bash

# Start PostgreSQL
docker-compose -f docker/docker-compose-db.yml up -d

# Set mock auth and database environment variables
export MOCK_AUTH=true
export DB_USER=postgres
export DB_PASSWORD=postgres
export DB_HOST=localhost
export DB_NAME=iptv_updater

# Start FastAPI in the background and record its PID
nohup uvicorn app.main:app --host 127.0.0.1 --port 8000 > app.log 2>&1 &
echo $! > iptv-updater.pid

echo "Services started:"
echo "- PostgreSQL running on localhost:5432"
echo "- FastAPI running on http://127.0.0.1:8000"
echo "- Mock auth enabled (use token: testuser)"

scripts/stop_local_dev.sh (executable)

@@ -0,0 +1,19 @@
#!/bin/bash
# Stop FastAPI
if [ -f iptv-updater.pid ]; then
kill $(cat iptv-updater.pid)
rm iptv-updater.pid
echo "Stopped FastAPI"
fi
# Clean up mock auth and database environment variables
unset MOCK_AUTH
unset DB_USER
unset DB_PASSWORD
unset DB_HOST
unset DB_NAME
# Stop PostgreSQL
docker-compose -f docker/docker-compose-db.yml down
echo "Stopped PostgreSQL"