Compare commits

40 Commits

feature/we
...
7187ba9ea3

| SHA1 |
|---|
| 7187ba9ea3 |
| fdfc9f0996 |
| 9f650faf88 |
| 1be0317192 |
| fb39fe76cb |
| 6c02e0b7c6 |
| d3fd874330 |
| 10a9d40f1d |
| 82cf3d9010 |
| decf3132d5 |
| c3c0d85908 |
| 8f797b3a51 |
| faff555757 |
| 0c2e34f2f0 |
| 291b729894 |
| 59dbaf8a6c |
| 8d5dd046a4 |
| fc8e388c0a |
| e6dccb5656 |
| 7d8eb89911 |
| 60f425a4fc |
| 647828aa78 |
| b4c8d3fdb8 |
| 09b1c1e37a |
| 916d8bf95a |
| 6bc21219a7 |
| ca33b2d74a |
| d0c50f5d8a |
| a8a285b356 |
| 045cf6aad2 |
| 78be4fc600 |
| d1b4d58c5d |
| 4a539a33c9 |
| b326153d26 |
| 2612cef1dc |
| 120721bbc6 |
| fe5b4659fe |
| 4078073f0b |
| c9254ed7eb |
| 49d0236a52 |
@@ -18,9 +18,11 @@ This repository contains practical OpenClaw skills and companion integrations. I
 | `elevenlabs-stt` | Transcribe local audio files with ElevenLabs Speech-to-Text, with diarization, language hints, event tags, and JSON output. | `skills/elevenlabs-stt` |
 | `gitea-api` | Interact with Gitea via REST API (repos, issues, PRs, releases, branches, user info). | `skills/gitea-api` |
+| `nordvpn-client` | Install, log in to, connect, disconnect, and verify NordVPN sessions across Linux CLI and macOS NordLynx/WireGuard backends. | `skills/nordvpn-client` |
 | `portainer` | Manage Portainer stacks via API (list, start/stop/restart, update, prune images). | `skills/portainer` |
 | `searxng` | Search through a local or self-hosted SearXNG instance for web, news, images, and more. | `skills/searxng` |
-| `web-automation` | One-shot extraction plus broader browsing/scraping with Playwright + Camoufox (auth flows, extraction, bot-protected sites). | `skills/web-automation` |
+| `us-cpa` | Federal individual 1040 workflow for tax questions, case intake, preparation, review, and draft e-file-ready export. | `skills/us-cpa` |
+| `web-automation` | One-shot extraction plus broader browsing/scraping with Playwright-compatible CloakBrowser (auth flows, extraction, bot-protected sites). | `skills/web-automation` |
 
 ## Integrations
@@ -6,9 +6,11 @@ This folder contains detailed docs for each skill in this repository.
 - [`elevenlabs-stt`](elevenlabs-stt.md) — Local audio transcription through ElevenLabs Speech-to-Text
 - [`gitea-api`](gitea-api.md) — REST-based Gitea automation (no `tea` CLI required)
+- [`nordvpn-client`](nordvpn-client.md) — Cross-platform NordVPN install, login, connect, disconnect, and verification with Linux CLI and macOS NordLynx/WireGuard support
 - [`portainer`](portainer.md) — Portainer stack management (list, lifecycle, updates, image pruning)
 - [`searxng`](searxng.md) — Privacy-respecting metasearch via a local or self-hosted SearXNG instance
-- [`web-automation`](web-automation.md) — One-shot extraction plus Playwright + Camoufox browser automation and scraping
+- [`us-cpa`](us-cpa.md) — Federal individual 1040 workflow for tax questions, case intake, preparation, review, and draft e-file-ready export
+- [`web-automation`](web-automation.md) — One-shot extraction plus Playwright-compatible CloakBrowser browser automation and scraping
 
 ## Integrations
371
docs/nordvpn-client.md
Normal file
@@ -0,0 +1,371 @@
# nordvpn-client

Cross-platform NordVPN lifecycle skill for macOS and Linux.

## Overview

`nordvpn-client` is the operator-facing VPN control skill for OpenClaw. It can:

- detect whether the host is ready for NordVPN automation
- install or bootstrap the required backend
- validate auth
- connect to a target country or city
- verify the public exit location
- disconnect and restore normal local networking state

The skill uses different backends by platform:

- Linux: official `nordvpn` CLI
- macOS: NordLynx/WireGuard with `wireguard-go` and `wireguard-tools`

## Commands

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js status
node skills/nordvpn-client/scripts/nordvpn-client.js install
node skills/nordvpn-client/scripts/nordvpn-client.js login
node skills/nordvpn-client/scripts/nordvpn-client.js verify
node skills/nordvpn-client/scripts/nordvpn-client.js verify --country "Italy"
node skills/nordvpn-client/scripts/nordvpn-client.js verify --country "Italy" --city "Milan"
node skills/nordvpn-client/scripts/nordvpn-client.js connect --country "Italy"
node skills/nordvpn-client/scripts/nordvpn-client.js connect --city "Tokyo"
node skills/nordvpn-client/scripts/nordvpn-client.js connect --country "Japan" --city "Tokyo"
node skills/nordvpn-client/scripts/nordvpn-client.js disconnect
node skills/nordvpn-client/scripts/nordvpn-client.js status --debug
```

## Credentials

Supported inputs:

- `NORDVPN_TOKEN`
- `NORDVPN_TOKEN_FILE`
- `NORDVPN_USERNAME`
- `NORDVPN_PASSWORD`
- `NORDVPN_PASSWORD_FILE`

Default OpenClaw credential paths:

- token: `~/.openclaw/workspace/.clawdbot/credentials/nordvpn/token.txt`
- password: `~/.openclaw/workspace/.clawdbot/credentials/nordvpn/password.txt`

The recommended setup on macOS is a token file with strict permissions:

```bash
mkdir -p ~/.openclaw/workspace/.clawdbot/credentials/nordvpn
chmod 700 ~/.openclaw/workspace/.clawdbot/credentials/nordvpn
printf '%s\n' '<your-nordvpn-token>' > ~/.openclaw/workspace/.clawdbot/credentials/nordvpn/token.txt
chmod 600 ~/.openclaw/workspace/.clawdbot/credentials/nordvpn/token.txt
```

Do not commit secrets into the repo or the skill docs.
## Platform Backends

### macOS

Current macOS backend:

- NordLynx/WireGuard
- `wireguard-go`
- `wireguard-tools`
- NordVPN DNS in the generated WireGuard config:
  - `103.86.96.100`
  - `103.86.99.100`

Important behavior:

- `NordVPN.app` may remain installed, but the automated backend does not reuse the app's login state.
- The skill automatically suspends Tailscale before connecting if Tailscale is active.
- The skill resumes Tailscale after disconnect, or after a failed connect, if it stopped it.
- The Homebrew NordVPN app does not need to be uninstalled.

### Linux

Current Linux backend:

- official `nordvpn` CLI
- official NordVPN installer
- token login through `nordvpn login --token ...`

## Install / Bootstrap

### macOS

Bootstrap the automation backend:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js install
```

Equivalent Homebrew command:

```bash
brew install wireguard-go wireguard-tools
```

What `install` does on macOS:

- checks whether `wireguard-go` is present
- checks whether `wg` and `wg-quick` are present
- installs missing packages through Homebrew

### Linux

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js install
```

What `install` does on Linux:

- downloads NordVPN's official installer script
- runs it
- leaves subsequent login/connect to the official `nordvpn` CLI

## macOS sudoers Setup

Automated macOS connect/disconnect requires passwordless `sudo` for the helper script that invokes `wg-quick`.

Installed OpenClaw helper path:

```text
/Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh
```

Edit sudoers safely:

```bash
sudo visudo
```

Add this exact rule:

```sudoers
stefano ALL=(root) NOPASSWD: /Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh probe, /Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh up, /Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh down
```

If you run the repo copy directly instead of the installed OpenClaw skill, adjust the helper path accordingly.
## Common Flows

### Status

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js status
```

Use this first to answer:

- is the correct backend available?
- is the token visible?
- is `sudoReady` true?
- is the machine currently connected?

### Login

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js login
```

On macOS this validates the token and populates the local auth cache. It does not connect the VPN.

### Connect

Country:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js connect --country "Germany"
```

City:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js connect --country "Japan" --city "Tokyo"
```

Expected macOS behavior:

- stop Tailscale if active
- select a NordVPN server for the target
- bring up the WireGuard tunnel
- verify the public exit location
- return JSON describing the chosen server and the final verified location

### Verify

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js verify --country "Germany"
```

Use this after connect if you want an explicit location check without changing VPN state.
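The comparison a verify step performs can be sketched as a pure function. The field names below (`country`, `city`) are assumptions for illustration, not the skill's actual verification schema:

```javascript
// Hedged sketch: compare a requested target against an observed
// geolocation result, case-insensitively, ignoring unspecified fields.
function matchesTarget(requested, observed) {
  const norm = (s) => (s || "").trim().toLowerCase();
  if (requested.country && norm(observed.country) !== norm(requested.country)) return false;
  if (requested.city && norm(observed.city) !== norm(requested.city)) return false;
  return true;
}

console.log(matchesTarget({ country: "Germany" }, { country: "germany", city: "Berlin" })); // true
console.log(matchesTarget({ country: "Japan", city: "Tokyo" }, { country: "Japan", city: "Osaka" })); // false
```

A target with neither country nor city trivially matches, which mirrors a bare `verify` that only reports the current exit location.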
### Disconnect

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js disconnect
```

Expected macOS behavior:

- attempt `wg-quick down` whenever there is active or residual NordVPN WireGuard state
- remove stale local NordVPN state files after teardown
- resume Tailscale if the skill had suspended it

## Output Model

Normal JSON output is redacted by default.

Fields redacted in normal mode:

- `cliPath`
- `appPath`
- `wireguard.configPath`
- `wireguard.helperPath`
- `wireguard.authCache.tokenSource`

Operational fields preserved in normal mode:

- `connected`
- `wireguard.active`
- `wireguard.endpoint`
- `requestedTarget`
- `verification`
- public IP and location
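The redaction rule above can be sketched generically: walk each dotted path and blank the leaf if it exists. This is an illustrative sketch of the behavior, not the skill's code, and the sample status object is invented:

```javascript
// Sketch: redact path-like fields listed in the doc, keep everything else.
const REDACTED_FIELDS = [
  "cliPath",
  "appPath",
  "wireguard.configPath",
  "wireguard.helperPath",
  "wireguard.authCache.tokenSource",
];

function redact(status) {
  const copy = JSON.parse(JSON.stringify(status)); // deep copy, don't mutate input
  for (const dotted of REDACTED_FIELDS) {
    const keys = dotted.split(".");
    let obj = copy;
    for (const k of keys.slice(0, -1)) obj = obj?.[k]; // descend to the parent
    const leaf = keys[keys.length - 1];
    if (obj && leaf in obj) obj[leaf] = "[redacted]";
  }
  return copy;
}

const out = redact({
  connected: true,
  cliPath: "/usr/bin/nordvpn",
  wireguard: { active: true, configPath: "/tmp/x.conf" },
});
console.log(out.cliPath, out.connected, out.wireguard.configPath); // [redacted] true [redacted]
```

Missing intermediate objects (for example no `authCache` yet) are simply skipped, so redaction never invents structure.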
For deeper troubleshooting, use:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js status --debug
```

`--debug` keeps the internal local paths and other low-level metadata in the JSON output.

## Troubleshooting

### `Invalid authorization header`

Meaning:

- the token file was found
- the token value is not valid for NordVPN's API

Actions:

1. generate a fresh NordVPN access token
2. replace the contents of `~/.openclaw/workspace/.clawdbot/credentials/nordvpn/token.txt`
3. run:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js login
```

### `sudoReady: false`

Meaning:

- the helper script is present
- the agent cannot run `wg-quick` non-interactively

Actions:

1. add the `visudo` rule shown above
2. rerun:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js status
```

Expected:

- `wireguard.sudoReady: true`

### WireGuard tools missing

Meaning:

- the macOS backend is selected
- `wireguard-go`, `wg`, or `wg-quick` is missing

Actions:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js install
```

or:

```bash
brew install wireguard-go wireguard-tools
```

### Tailscale interaction

Expected behavior on macOS:

- Tailscale is suspended before the NordVPN connect
- Tailscale is resumed after disconnect or a failed connect

If a connect succeeds but later traffic is wrong, check:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js status
/opt/homebrew/bin/tailscale status --json
```

Look for:

- `connected: true` and a foreign exit IP while NordVPN is up
- `connected: false` and the Texas/Garland IP after disconnect

### Status says disconnected after a verified connect

This was a previous macOS false-negative path and is now normalized in the connect response.

Current expectation: if `connect` verifies the target location successfully, the returned `state` snapshot should also show:

- `connected: true`
- `wireguard.active: true`

If that regresses, capture:

- the `connect` JSON
- the `verify` JSON
- the `status --debug` JSON

### Disconnect says "no active connection" but traffic is still foreign

The current macOS disconnect path treats residual WireGuard state as sufficient reason to attempt teardown.

Safe operator check:

```bash
node skills/nordvpn-client/scripts/nordvpn-client.js disconnect
node skills/nordvpn-client/scripts/nordvpn-client.js verify
```

Expected after a good disconnect:

- the Texas/Garland public IP again
- `wireguard.configPath: null` in normal status output
- `wireguard.lastConnection: null`

If that regresses, capture:

- the `disconnect` JSON
- the `verify` JSON
- the `status --debug` JSON

## Recommended Agent Workflow

For VPN-routed work:

1. `status`
2. `install` if backend tooling is missing
3. `login` if token validation has not happened yet
4. `connect --country ...` or `connect --country ... --city ...`
5. `verify`
6. run the follow-up skill, such as `web-automation`
7. `disconnect`
8. `verify` again if you need proof the machine returned to the normal exit path
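The decision order in steps 1–4 can be sketched as a pure planner over a status snapshot. The aggregate field names (`backendReady`, `tokenValidated`) are invented for illustration; the real status JSON exposes finer-grained fields such as `wireguard.sudoReady`:

```javascript
// Hedged sketch: given an (assumed) status snapshot, pick the next
// nordvpn-client command in the workflow order described above.
function nextAction(status) {
  if (!status.backendReady) return "install";   // step 2
  if (!status.tokenValidated) return "login";   // step 3
  if (!status.connected) return "connect";      // step 4
  return "verify";                              // step 5
}

console.log(nextAction({ backendReady: false }));                      // install
console.log(nextAction({ backendReady: true, tokenValidated: true })); // connect
```

An agent would re-run `status` after each command and feed the fresh snapshot back into the same decision.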
21
docs/plans/2026-03-10-web-automation-consolidation-design.md
Normal file
@@ -0,0 +1,21 @@
# Web Automation Consolidation Design

## Goal

Consolidate `playwright-safe` into `web-automation` so the repo exposes a single web skill. Keep the proven one-shot extractor behavior, rename it to `extract.js`, and remove the separate `playwright-safe` skill and docs.

## Architecture

`web-automation` remains the only published skill. It will expose two capability bands under one skill: one-shot extraction via `scripts/extract.js`, and broader stateful automation via the existing `auth.ts`, `browse.ts`, `flow.ts`, and `scrape.ts` commands. The one-shot extractor will keep the current safe Playwright behavior: single URL, JSON output, bounded stealth/anti-bot handling, and no sandbox-disabling Chromium flags.

## Migration

- Copy the working extractor into `skills/web-automation/scripts/extract.js`
- Update `skills/web-automation/SKILL.md` and `docs/web-automation.md` to describe both one-shot extraction and full automation
- Remove `skills/playwright-safe/`
- Remove `docs/playwright-safe.md`
- Remove README/doc index references to `playwright-safe`

## Verification

- `node skills/web-automation/scripts/extract.js` -> JSON error for missing URL
- `node skills/web-automation/scripts/extract.js ftp://example.com` -> JSON error for invalid scheme
- `node skills/web-automation/scripts/extract.js https://example.com` -> valid JSON result with title/status
- Repo text scan confirms no remaining published references directing users to `playwright-safe`
- Commit, push, and clean up the worktree
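The first two verification checks exercise URL validation. A minimal sketch of that kind of guard, using Node's built-in WHATWG `URL` class (the real `extract.js` logic may differ; only the expected error shapes are taken from the list above):

```javascript
// Sketch: validate a CLI argument the way the verification steps expect,
// returning JSON-friendly error objects instead of throwing.
function validateUrl(raw) {
  if (!raw) return { error: "missing URL" };
  let url;
  try {
    url = new URL(raw);
  } catch {
    return { error: "invalid URL" };
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    return { error: "only http/https URLs allowed" };
  }
  return { ok: true, url: url.href };
}

console.log(JSON.stringify(validateUrl()));                    // {"error":"missing URL"}
console.log(JSON.stringify(validateUrl("ftp://example.com"))); // scheme rejected
console.log(JSON.stringify(validateUrl("https://example.com")));
```

Keeping the guard JSON-only matches the extractor's contract of emitting machine-readable output even on failure.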
132
docs/plans/2026-03-10-web-automation-consolidation.md
Normal file
@@ -0,0 +1,132 @@
# Web Automation Consolidation Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Consolidate the separate `playwright-safe` skill into `web-automation` and publish a single web skill with both one-shot extraction and broader automation.

**Architecture:** Move the proven safe one-shot extractor into `skills/web-automation/scripts/extract.js`, update `web-automation` docs to expose it as the simple path, and remove the separate `playwright-safe` skill and docs. Keep the extractor behavior unchanged except for its new location/name.

**Tech Stack:** Node.js, Playwright, Camoufox skill docs, git

---

### Task 1: Create isolated worktree

**Files:**

- Modify: repo git metadata only

**Step 1: Create worktree**

Run:

```bash
git -C /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills worktree add /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-consolidation -b feature/web-automation-consolidation
```

**Step 2: Verify baseline**

Run:

```bash
git -C /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-consolidation status --short --branch
```

Expected: clean feature branch

### Task 2: Move the extractor into web-automation

**Files:**

- Create: `skills/web-automation/scripts/extract.js`
- Read: `skills/playwright-safe/scripts/playwright-safe.js`

**Step 1: Copy the extractor**

- Copy the proven script content into `skills/web-automation/scripts/extract.js`
- Adjust only relative paths/messages if needed

**Step 2: Preserve behavior**

- Keep JSON-only output
- Keep URL validation
- Keep stealth/anti-bot behavior
- Keep the sandbox enabled

### Task 3: Update skill and docs

**Files:**

- Modify: `skills/web-automation/SKILL.md`
- Modify: `docs/web-automation.md`
- Modify: `README.md`
- Modify: `docs/README.md`
- Delete: `skills/playwright-safe/SKILL.md`
- Delete: `skills/playwright-safe/package.json`
- Delete: `skills/playwright-safe/package-lock.json`
- Delete: `skills/playwright-safe/.gitignore`
- Delete: `skills/playwright-safe/scripts/playwright-safe.js`
- Delete: `docs/playwright-safe.md`

**Step 1: Update docs**

- Make `web-automation` the only published web skill
- Document `extract.js` as the one-shot extraction path
- Remove published references to `playwright-safe`

**Step 2: Remove redundant skill**

- Delete the separate `playwright-safe` skill files and doc

### Task 4: Verify behavior

**Files:**

- Test: `skills/web-automation/scripts/extract.js`

**Step 1: Missing URL check**

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-consolidation && node skills/web-automation/scripts/extract.js
```

Expected: JSON error about the missing URL

**Step 2: Invalid scheme check**

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-consolidation && node skills/web-automation/scripts/extract.js ftp://example.com
```

Expected: JSON error stating only http/https URLs are allowed

**Step 3: Smoke test**

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-consolidation && node skills/web-automation/scripts/extract.js https://example.com
```

Expected: JSON with title `Example Domain`, status `200`, and no sandbox-disabling flags in code

**Step 4: Reference scan**

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-consolidation && rg -n "playwright-safe" README.md docs skills
```

Expected: no remaining published references, or only intentional historical plan docs

### Task 5: Commit, push, and clean up

**Files:**

- Modify: git history only

**Step 1: Commit**

Run:

```bash
git add skills/web-automation docs README.md
git commit -m "refactor: consolidate web scraping into web-automation"
```

**Step 2: Push**

Run:

```bash
git push -u origin feature/web-automation-consolidation
```

**Step 3: Merge and cleanup**

- Fast-forward or merge to `main`
- Push `main`
- Remove the worktree
- Delete the feature branch
40
docs/plans/2026-03-11-nordvpn-client-design.md
Normal file
@@ -0,0 +1,40 @@
# NordVPN Client Skill Design

## Goal

Create a `nordvpn-client` skill that works on macOS and Linux gateway hosts. It should detect whether NordVPN is already installed, bootstrap it if missing, handle login/auth setup, connect to a requested country or city, verify the VPN state and public IP location, disconnect when requested, and then be usable alongside other skills like `web-automation`.

## Architecture

The skill exposes one logical interface with platform-specific backends. Linux uses the official NordVPN CLI path. macOS probes for a usable CLI first, but falls back to the official app workflow when needed. The skill is responsible only for VPN lifecycle and verification, not for wrapping arbitrary commands inside a VPN session.

## Interface

Single script entrypoint:

- `node scripts/nordvpn-client.js install`
- `node scripts/nordvpn-client.js login`
- `node scripts/nordvpn-client.js connect --country "Italy"`
- `node scripts/nordvpn-client.js connect --city "Milan"`
- `node scripts/nordvpn-client.js disconnect`
- `node scripts/nordvpn-client.js status`

## Platform Model

### Linux

- Probe for `nordvpn`
- If missing, bootstrap the official NordVPN package/CLI
- Prefer token-based login for non-interactive auth
- Connect/disconnect/status through the official CLI

### macOS

- Probe for the `nordvpn` CLI if available
- Otherwise probe/install the official app
- Use the CLI when present, otherwise automate the app/login flow
- Verify the connection using app/CLI state plus external IP/geolocation

## Auth and Safety

- Do not store raw NordVPN secrets in skill docs
- Read token/credentials from env vars or a local credential file path
- Keep the skill focused on install/login/connect/disconnect/status
- After `connect`, verify both local VPN state and external IP/location before the agent proceeds to tasks like `web-automation`

## Verification

- `status` reports platform, install state, auth state, connection state, and a public IP/location check
- `connect` verifies the requested target as closely as available data allows
- Local validation happens first in the OpenClaw workspace; then the proven skill is copied into `stef-openclaw-skills`, documented, committed, and pushed
127
docs/plans/2026-03-11-nordvpn-client.md
Normal file
@@ -0,0 +1,127 @@
# NordVPN Client Skill Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Build a cross-platform `nordvpn-client` skill for macOS and Linux that can install/bootstrap NordVPN, log in, connect to a target country or city, verify the VPN session, disconnect, and report status.

**Architecture:** Implement one skill with one script entrypoint and platform-specific backends. Linux uses the official NordVPN CLI. macOS uses a CLI path when present and otherwise falls back to the NordVPN app workflow. The skill manages VPN state only, leaving follow-up operations like `web-automation` to separate agent steps.

**Tech Stack:** Node.js, shell/OS commands, NordVPN CLI/app integration, OpenClaw skills, git

---

### Task 1: Create isolated worktree

**Files:**

- Modify: repo git metadata only

**Step 1: Create worktree**

Run:

```bash
git -C /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills worktree add /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/nordvpn-client -b feature/nordvpn-client
```

**Step 2: Verify baseline**

Run:

```bash
git -C /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/nordvpn-client status --short --branch
```

Expected: clean feature branch

### Task 2: Create the local skill runtime

**Files:**

- Create: `skills/nordvpn-client/SKILL.md`
- Create: `skills/nordvpn-client/scripts/nordvpn-client.js`
- Optional Create: helper files under `skills/nordvpn-client/scripts/`

**Step 1: Write the failing checks**

- Missing command/action should fail with clear usage output
- Unsupported platform should fail clearly

**Step 2: Implement platform detection and install probe**

- detect `darwin` vs `linux`
- detect whether the NordVPN CLI/app is already present
- expose `status` with install/auth/connect fields

### Task 3: Implement install and auth bootstrap

**Files:**

- Modify: `skills/nordvpn-client/scripts/nordvpn-client.js`

**Step 1: Linux install/login path**

- implement the official CLI probe/install path
- implement the token-based login path

**Step 2: macOS install/login path**

- probe the CLI first
- if absent, probe/install the NordVPN app path
- implement login/bootstrap state verification for the app workflow

**Step 3: Keep secrets external**

- env vars or a local credential path only
- no raw secrets in docs or skill text

### Task 4: Implement connect/disconnect/status/verification

**Files:**

- Modify: `skills/nordvpn-client/scripts/nordvpn-client.js`

**Step 1: Connect**

- support `--country` and `--city`
- normalize target handling per platform

**Step 2: Verify**

- report local connection state
- run public IP / geolocation verification
- fail if the connection target cannot be reasonably verified

**Step 3: Disconnect and status**

- implement clean disconnect
- ensure `status` emits machine-readable output for agent use

### Task 5: Validate locally in OpenClaw workspace

**Files:**

- Test: local workspace copy of `nordvpn-client`

**Step 1: Direct command validation**

- usage errors are correct
- the install probe works on this host
- status output is coherent before login/connect

**Step 2: One real connect flow**

- connect to a test country/city if credentials are available
- verify local state + external IP/location
- disconnect cleanly

### Task 6: Promote to repo docs and publish

**Files:**

- Modify: `README.md`
- Modify: `docs/README.md`
- Create: `docs/nordvpn-client.md`
- Create/Modify: `skills/nordvpn-client/...`

**Step 1: Document the skill**

- install/bootstrap behavior
- auth expectations
- connect/disconnect/status commands
- macOS vs Linux notes

**Step 2: Commit and push**

Run:

```bash
git add skills/nordvpn-client docs README.md
git commit -m "feat: add nordvpn client skill"
git push -u origin feature/nordvpn-client
```

**Step 3: Merge and cleanup**

- fast-forward or merge to `main`
- push `main`
- remove the worktree
- delete the feature branch
34
docs/plans/2026-03-11-nordvpn-wireguard-macos-design.md
Normal file
@@ -0,0 +1,34 @@
# NordVPN macOS WireGuard Backend Design

## Goal

Replace the current macOS app-manual fallback in `nordvpn-client` with a scripted WireGuard/NordLynx backend inspired by `wg-nord` and `wgnord`, while preserving the official Linux `nordvpn` CLI backend.

## Key decisions

- Keep Linux on the official `nordvpn` CLI.
- Prefer a native macOS WireGuard backend over the GUI app.
- Do not vendor third-party scripts directly; reimplement the needed logic in our own JSON-based Node skill.
- Do not require uninstalling the Homebrew `nordvpn` app. The new backend can coexist with it.

## macOS backend model

- Bootstrap via Homebrew:
  - `wireguard-tools`
  - `wireguard-go`
- Read the NordVPN token from existing env/file inputs.
- Discover a WireGuard-capable NordVPN server via the public Nord API.
- Generate a private key locally.
- Exchange the private key for Nord-provided interface credentials using the token.
- Materialize a temporary WireGuard config under a skill-owned state directory.
- Connect and disconnect via `wg-quick`.
- Verify with public IP/geolocation after connect.
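The "materialize a temporary WireGuard config" step above amounts to rendering a standard `wg-quick` INI file from a few fields. A hedged sketch of that templating, with placeholder values (the actual server discovery and key exchange are omitted, and the assumed address is illustrative):

```javascript
// Sketch: render a wg-quick style config from assumed fields.
// Keys here are standard WireGuard config keys; values are placeholders.
function renderWireguardConfig({ privateKey, address, dns, serverPublicKey, endpoint }) {
  return [
    "[Interface]",
    `PrivateKey = ${privateKey}`,
    `Address = ${address}`,
    `DNS = ${dns.join(", ")}`,
    "",
    "[Peer]",
    `PublicKey = ${serverPublicKey}`,
    "AllowedIPs = 0.0.0.0/0", // route all traffic through the tunnel
    `Endpoint = ${endpoint}`,
  ].join("\n");
}

const conf = renderWireguardConfig({
  privateKey: "<private-key>",
  address: "10.5.0.2/32",
  dns: ["103.86.96.100", "103.86.99.100"], // NordVPN DNS from the skill docs
  serverPublicKey: "<server-public-key>",
  endpoint: "203.0.113.1:51820",
});
console.log(conf.split("\n")[0]); // [Interface]
```

The rendered text would be written into the skill-owned state directory and passed to `wg-quick up`.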
## Data/state

- Keep state under a skill-owned directory in the user's home, not `/etc`.
- Persist only what is needed for reconnect/disconnect/status.
- Never store secrets in docs.

## Rollout

1. Implement the macOS WireGuard backend in the skill.
2. Update status output so backend selection is explicit.
3. Update skill docs and repo docs.
4. Verify non-destructive flows on this host.
5. Commit, push, and then decide whether to run a live connect test.
11
docs/plans/2026-03-11-nordvpn-wireguard-macos.md
Normal file
@@ -0,0 +1,11 @@
# NordVPN macOS WireGuard Backend Plan

1. Add a backend selector to `nordvpn-client`.
2. Keep Linux CLI behavior unchanged.
3. Add macOS WireGuard dependency probing and install guidance.
4. Implement token-based NordLynx config generation inspired by `wg-nord`/`wgnord`.
5. Switch the preferred macOS control mode from `app-manual` to WireGuard when dependencies and a token are available.
6. Keep `app-manual` as the last fallback only.
7. Update `status`, `login`, `connect`, `disconnect`, and `verify` JSON to expose the backend in use.
8. Update repo docs and skill docs to reflect the new model and the required token/dependencies.
9. Verify command behavior locally without forcing a live VPN connection unless requested.
|
||||
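A hypothetical shape for the backend-exposing `status` JSON from step 7; the field names are illustrative, not the skill's actual schema:

```json
{
  "ok": true,
  "platform": "darwin",
  "backend": "wireguard",
  "connected": false,
  "fallback": "app-manual"
}
```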
docs/plans/2026-03-11-web-automation-cloakbrowser-design.md (new file, 33 lines)

@@ -0,0 +1,33 @@
# Web Automation CloakBrowser Migration Design

## Goal

Replace all Camoufox and direct Playwright Chromium usage in `web-automation` with CloakBrowser, while preserving the existing feature set: one-shot extraction, persistent browsing sessions, authenticated flows, multi-step automation, and markdown scraping. After local validation, keep the repo copy and docs as the canonical published version, then commit and push.

## Architecture

`web-automation` will become CloakBrowser-only. A single browser-launch layer in `skills/web-automation/scripts/browse.ts` will provide the canonical runtime for the other scripts. Stateful flows will use CloakBrowser persistent contexts; one-shot extraction will also use CloakBrowser instead of raw `playwright.chromium`.

## Scope

- Replace `camoufox-js` in `browse.ts` and any helper/test scripts
- Replace direct `playwright.chromium` launch in `extract.js`
- Update shared types/imports to match the CloakBrowser Playwright-compatible API
- Remove Camoufox/Chromium-specific setup instructions from skill docs
- Update package metadata and lockfile to depend on `cloakbrowser`
- Keep the user-facing command surface stable where possible

## Compatibility Strategy

To minimize user breakage:

- keep the script filenames and CLI interfaces stable
- support old `CAMOUFOX_*` env vars as temporary aliases where practical
- introduce neutral naming in docs and code for the new canonical path
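The alias handling above reduces to a small env resolver; a minimal sketch, assuming only the documented `CAMOUFOX_*`/`CLOAKBROWSER_*` pairing:

```javascript
// Resolve a canonical CLOAKBROWSER_* env var, falling back to the legacy
// CAMOUFOX_* alias so existing setups keep working during the migration.
function resolveEnv(name, env = process.env) {
  const canonical = `CLOAKBROWSER_${name}`;
  const legacy = `CAMOUFOX_${name}`;
  if (env[canonical] !== undefined) return env[canonical];
  return env[legacy]; // may be undefined if neither is set
}

const env = { CAMOUFOX_PROFILE_PATH: "/tmp/profile" };
console.log(resolveEnv("PROFILE_PATH", env)); // → "/tmp/profile" (legacy fallback)
```

When both are set, the canonical name wins, which keeps the migration deterministic.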
## Testing Strategy

- Verify launcher setup directly through `browse.ts`
- Verify `extract.js` still handles: missing URL, invalid scheme, smoke extraction from `https://example.com`
- Verify one persistent-context path and one higher-level consumer (`scrape.ts` or `flow.ts`) still work
- Update docs only after the runtime is validated

## Rollout

1. Implement and verify locally in the repo worktree
2. Update repo docs/indexes to describe the CloakBrowser-based `web-automation`
3. Commit and push the repo changes
4. If needed, sync the installed OpenClaw workspace copy from the validated repo version
docs/plans/2026-03-11-web-automation-cloakbrowser.md (new file, 136 lines)

@@ -0,0 +1,136 @@
# Web Automation CloakBrowser Migration Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Replace Camoufox and direct Chromium launches in `web-automation` with CloakBrowser and publish the updated repo/docs.

**Architecture:** Use a single CloakBrowser-backed launch path in `skills/web-automation/scripts/browse.ts`, migrate `extract.js` to the same backend, update dependent scripts and tests, then update docs and publish the repo changes.

**Tech Stack:** Node.js, TypeScript, JavaScript, Playwright-compatible browser automation, CloakBrowser, git

---

### Task 1: Create isolated worktree

**Files:**

- Modify: repo git metadata only

**Step 1: Create worktree**

Run:

```bash
git -C /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills worktree add /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-cloakbrowser -b feature/web-automation-cloakbrowser
```

**Step 2: Verify baseline**

Run:

```bash
git -C /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-cloakbrowser status --short --branch
```

Expected: clean feature branch
### Task 2: Migrate the browser launcher

**Files:**

- Modify: `skills/web-automation/scripts/browse.ts`
- Modify: `skills/web-automation/scripts/package.json`
- Modify: `skills/web-automation/scripts/pnpm-lock.yaml`
- Optional Modify: test helper scripts under `skills/web-automation/scripts/`

**Step 1: Write the failing verification**

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-cloakbrowser/skills/web-automation/scripts && node -e "require.resolve('cloakbrowser/package.json')"
```

Expected: fail before dependency migration

**Step 2: Replace backend dependency**

- remove `camoufox-js`
- add `cloakbrowser`
- update `browse.ts` to launch CloakBrowser contexts
- preserve persistent profile support

**Step 3: Verify launcher wiring**

Run a direct browse smoke test after install/update.

### Task 3: Migrate dependent scripts

**Files:**

- Modify: `skills/web-automation/scripts/extract.js`
- Modify: `skills/web-automation/scripts/auth.ts`
- Modify: `skills/web-automation/scripts/flow.ts`
- Modify: `skills/web-automation/scripts/scrape.ts`
- Modify: `skills/web-automation/scripts/test-minimal.ts`
- Modify: `skills/web-automation/scripts/test-full.ts`
- Modify: `skills/web-automation/scripts/test-profile.ts`

**Step 1: Keep interfaces stable**

- preserve CLI usage where possible
- use CloakBrowser through shared launcher code
- keep one-shot extraction JSON output unchanged except for backend wording if needed

**Step 2: Add compatibility aliases**

- support old `CAMOUFOX_*` env vars where practical
- document new canonical naming
### Task 4: Verify behavior

**Files:**

- Test: `skills/web-automation/scripts/*`

**Step 1: Extractor error checks**

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-cloakbrowser && node skills/web-automation/scripts/extract.js
```

Expected: JSON error for missing URL

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-cloakbrowser && node skills/web-automation/scripts/extract.js ftp://example.com
```

Expected: JSON error for invalid scheme

**Step 2: Extractor smoke test**

Run:

```bash
cd /Users/stefano/.openclaw/workspace/projects/stef-openclaw-skills/.worktrees/web-automation-cloakbrowser && node skills/web-automation/scripts/extract.js https://example.com
```

Expected: JSON result with title `Example Domain` and status `200`

**Step 3: Stateful path verification**

- run one direct `browse.ts`, `scrape.ts`, or `flow.ts` command using the CloakBrowser backend
- confirm persistent-context code still initializes successfully
### Task 5: Update docs and publish

**Files:**

- Modify: `skills/web-automation/SKILL.md`
- Modify: `docs/web-automation.md`
- Modify: `README.md`
- Modify: `docs/README.md`

**Step 1: Update docs**

- replace Camoufox wording with CloakBrowser wording
- replace old setup/install steps
- document any compatibility env vars and new canonical names

**Step 2: Commit and push**

Run:

```bash
git add skills/web-automation docs README.md
git commit -m "refactor: migrate web-automation to cloakbrowser"
git push -u origin feature/web-automation-cloakbrowser
```

**Step 3: Merge and cleanup**

- fast-forward or merge to `main`
- push `main`
- remove the worktree
- delete the feature branch
docs/plans/2026-03-12-nordvpn-client-docs-refresh.md (new file, 76 lines)

@@ -0,0 +1,76 @@
# NordVPN Client Docs Refresh Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Refresh the `nordvpn-client` documentation so operators and the OpenClaw agent have complete, accurate setup and troubleshooting guidance for the current macOS and Linux backends.

**Architecture:** Expand the canonical repo doc into a full operator guide, tighten the agent-facing `SKILL.md` to match the current behavior, and lightly update summary docs only if their current one-line descriptions are materially incomplete. Sync the updated `SKILL.md` into the installed OpenClaw workspace copy so runtime guidance matches the repo.

**Tech Stack:** Markdown docs, local repo skill docs, OpenClaw workspace skill sync

---

### Task 1: Refresh canonical operator documentation

**Files:**

- Modify: `docs/nordvpn-client.md`

**Step 1: Rewrite the doc structure**

- Add sections for overview, platform backends, prerequisites, credential paths, install/bootstrap, macOS sudoers setup, command flows, output model, and troubleshooting.

**Step 2: Add exact operator setup details**

- Include the exact `visudo` entry for the helper script.
- Document default token/password file locations.
- Document Homebrew install commands for macOS tooling.

**Step 3: Add safe troubleshooting guidance**

- Include only safe operator procedures from the debugging work:
  - invalid token handling
  - `sudoReady: false`
  - Tailscale suspend/resume expectations
  - what normal redacted output includes
  - how to use `--debug` when deeper inspection is needed
### Task 2: Refresh agent-facing skill documentation

**Files:**

- Modify: `skills/nordvpn-client/SKILL.md`
- Sync: `/Users/stefano/.openclaw/workspace/skills/nordvpn-client/SKILL.md`

**Step 1: Tighten the skill instructions**

- Keep the doc shorter than the canonical operator guide.
- Ensure it explicitly covers the default credential paths, macOS sudoers requirement, Tailscale suspend/resume behavior, and `--debug` usage.

**Step 2: Sync installed OpenClaw copy**

- Copy the updated repo `SKILL.md` into the installed workspace skill path.

### Task 3: Update summary docs if needed

**Files:**

- Check: `README.md`
- Check: `docs/README.md`
- Modify only if the current summary text materially misses the current backend model.

**Step 1: Review summary descriptions**

- Confirm whether the one-line descriptions already adequately describe Linux CLI + macOS NordLynx/WireGuard.

**Step 2: Update only if necessary**

- Avoid churn if the existing summaries are already sufficient.

### Task 4: Verify and publish

**Files:**

- Verify: `docs/nordvpn-client.md`
- Verify: `skills/nordvpn-client/SKILL.md`
- Verify: `/Users/stefano/.openclaw/workspace/skills/nordvpn-client/SKILL.md`

**Step 1: Run doc verification checks**

- Run: `rg -n "sudoers|visudo|--debug|Tailscale|token.txt|wireguard-helper" docs/nordvpn-client.md skills/nordvpn-client/SKILL.md`
- Expected: all required topics present

**Step 2: Confirm installed workspace skill matches repo skill**

- Run: `cmp skills/nordvpn-client/SKILL.md /Users/stefano/.openclaw/workspace/skills/nordvpn-client/SKILL.md`
- Expected: no output

**Step 3: Commit and push**

- Commit message: `docs: expand nordvpn client setup and troubleshooting`
docs/plans/2026-03-12-nordvpn-macos-dns-design.md (new file, 40 lines)

@@ -0,0 +1,40 @@
# NordVPN macOS DNS Design

## Goal

Keep NordVPN DNS while connected on macOS, but apply it only to active physical services so the WireGuard backend does not break Tailscale or other virtual interfaces.

## Behavior

- Keep the generated WireGuard config free of `DNS = ...`
- During `connect` on macOS:
  - detect active physical network services
  - snapshot current DNS/search-domain settings
  - set NordVPN DNS only on those physical services
- During `disconnect`:
  - restore the saved DNS/search-domain settings
- During a failed `connect` after DNS changes:
  - restore DNS before returning the error

## DNS Values

- IPv4 primary: `103.86.96.100`
- IPv4 secondary: `103.86.99.100`
- No IPv6 DNS for now
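Applying these values per service goes through `networksetup -setdnsservers`; a minimal argv builder (the helper name is hypothetical, the command and its `Empty` keyword are standard `networksetup` behavior):

```javascript
// Build the networksetup invocation that applies NordVPN DNS to one service.
// Passing the literal word "Empty" instead of addresses clears custom DNS,
// which is how a snapshot of "no custom DNS" gets restored.
function setDnsArgs(service, servers) {
  const values = servers.length > 0 ? servers : ["Empty"];
  return ["networksetup", "-setdnsservers", service, ...values];
}

console.log(setDnsArgs("Wi-Fi", ["103.86.96.100", "103.86.99.100"]));
// → ["networksetup", "-setdnsservers", "Wi-Fi", "103.86.96.100", "103.86.99.100"]
```

The same builder serves both apply (NordVPN servers) and restore (snapshot values or `Empty`).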
## Service Selection

Include only enabled physical services from `networksetup`.

Exclude names matching:

- Tailscale
- Bridge
- Thunderbolt Bridge
- Loopback
- VPN
- utun
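A minimal sketch of the exclusion filter, assuming service names come from `networksetup -listallnetworkservices` (which prefixes disabled services with `*`); the function name and pattern list shape are illustrative:

```javascript
// Filter networksetup service names down to eligible physical services.
// Names matching any exclusion pattern are skipped; services disabled in
// networksetup output are prefixed with "*", so those are skipped too.
const EXCLUDED = [/tailscale/i, /bridge/i, /loopback/i, /vpn/i, /utun/i];

function eligibleServices(serviceNames) {
  return serviceNames
    .filter((name) => !name.startsWith("*")) // "*" marks a disabled service
    .filter((name) => !EXCLUDED.some((pattern) => pattern.test(name)));
}

console.log(eligibleServices(["Wi-Fi", "Thunderbolt Bridge", "Tailscale", "*USB 10/100/1000 LAN"]));
// → ["Wi-Fi"]
```

Note that a single `/bridge/i` pattern already covers both `Bridge` and `Thunderbolt Bridge`.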
## Persistence

- Save DNS snapshot under `~/.nordvpn-client`
- Overwrite on each successful connect
- Clear after successful disconnect restore

## Verification

- Unit tests for service selection and DNS snapshot/restore helpers
- Direct logic/config tests
- Avoid live connect tests from this session unless explicitly requested, because they can drop connectivity
docs/plans/2026-03-12-nordvpn-macos-dns.md (new file, 11 lines)

@@ -0,0 +1,11 @@
# NordVPN macOS DNS Plan

1. Add macOS DNS state file support under `~/.nordvpn-client`.
2. Implement helpers to enumerate eligible physical services and snapshot existing DNS/search-domain settings.
3. Implement helpers to apply NordVPN DNS only to eligible physical services.
4. Implement helpers to restore previous DNS/search-domain settings on disconnect or failed connect.
5. Add unit tests for service filtering and DNS state transitions.
6. Update skill/docs to explain macOS physical-service DNS management.
7. Sync the installed workspace copy.
8. Run tests and non-destructive verification.
9. Commit and push.
@@ -0,0 +1,26 @@
# NordVPN Tailscale Coordination Design

## Goal

Stabilize macOS NordVPN connects by explicitly stopping Tailscale before bringing up the NordVPN WireGuard tunnel, then restarting Tailscale after NordVPN disconnects.

## Behavior

- macOS only
- on `connect`:
  - detect whether Tailscale is active
  - if active, stop it and record that state
  - bring up NordVPN
- on `disconnect`:
  - tear down NordVPN
  - if the skill stopped Tailscale earlier, start it again
  - clear the saved state
- on connect failure after stopping Tailscale:
  - attempt to start Tailscale again before returning the error

## State

- persist `tailscaleWasActive` under `~/.nordvpn-client`
- only restart Tailscale if the skill actually stopped it
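The restart rule above reduces to a small pure decision helper; the function and field names are illustrative, only `tailscaleWasActive` comes from the design:

```javascript
// Decide whether disconnect (or connect-failure rollback) should restart
// Tailscale, based on the persisted state written at connect time.
function tailscaleActions(state) {
  return {
    // Restart only if this skill stopped Tailscale; never touch it if the
    // user already had Tailscale off before the VPN session began.
    restartTailscale: state.tailscaleWasActive === true,
    // The saved state is cleared either way once disconnect completes.
    clearState: true,
  };
}

console.log(tailscaleActions({ tailscaleWasActive: true }).restartTailscale);  // → true
console.log(tailscaleActions({ tailscaleWasActive: false }).restartTailscale); // → false
```

Treating missing state the same as `false` keeps a crashed or interrupted session from restarting a Tailscale the skill never stopped.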
## Rollback target if successful

- remove the temporary macOS physical-service DNS management patch
- restore the simpler NordVPN config path that uses NordVPN DNS directly in the WireGuard config
- keep Tailscale suspend/resume as the macOS coexistence solution
docs/plans/2026-03-12-nordvpn-tailscale-coordination.md (new file, 10 lines)

@@ -0,0 +1,10 @@
# NordVPN Tailscale Coordination Plan

1. Add macOS Tailscale state file support under `~/.nordvpn-client`.
2. Implement helpers to detect, stop, and start Tailscale on macOS.
3. Add unit tests for Tailscale state transitions.
4. Wire Tailscale stop into macOS `connect` before WireGuard up.
5. Wire Tailscale restart into macOS `disconnect` and connect-failure rollback.
6. Sync the installed workspace copy.
7. Run tests and non-destructive verification.
8. Commit and push.
docs/us-cpa.md (new file, 298 lines)

@@ -0,0 +1,298 @@
# us-cpa

`us-cpa` is a Python CLI plus OpenClaw skill wrapper for U.S. federal individual tax work.

## Standalone package usage

From `skills/us-cpa/`:

```bash
pip install -e .[dev]
us-cpa --help
```

Without installing, the repo-local wrapper works directly:

```bash
skills/us-cpa/scripts/us-cpa --help
```

## OpenClaw installation

To install the skill for OpenClaw itself, copy the repo skill into the workspace skill directory and install its Python dependencies there.

1. Sync the repo copy into the workspace:

   ```bash
   rsync -a --delete \
     ~/.openclaw/workspace/projects/stef-openclaw-skills/skills/us-cpa/ \
     ~/.openclaw/workspace/skills/us-cpa/
   ```

2. Create a workspace-local virtualenv and install the package:

   ```bash
   cd ~/.openclaw/workspace/skills/us-cpa
   python3 -m venv .venv
   . .venv/bin/activate
   pip install -e .[dev]
   ```

3. Verify the installed workspace wrapper:

   ```bash
   ~/.openclaw/workspace/skills/us-cpa/scripts/us-cpa --help
   ```

The wrapper prefers `.venv/bin/python` inside the skill directory when present, so OpenClaw can run the workspace copy without relying on global Python packages.
## Current Milestone

The current implementation includes:

- deterministic cache layout under `~/.cache/us-cpa` by default
- `fetch-year` download flow for the bootstrap IRS corpus
- source manifest with URL, hash, authority rank, and local path traceability
- primary-law URL building for IRC and Treasury regulation escalation
- case-folder intake, document registration, and machine-usable fact extraction from JSON, text, and PDF inputs
- question workflow with conversation and memo output
- prepare workflow for the current supported multi-form 1040 package
- review workflow with findings-first output
- fillable-PDF-first rendering with overlay fallback
- e-file-ready draft export payload generation

## CLI Surface

```bash
skills/us-cpa/scripts/us-cpa question --question "What is the standard deduction?" --tax-year 2025
skills/us-cpa/scripts/us-cpa question --question "What is the standard deduction?" --tax-year 2025 --style memo --format markdown
skills/us-cpa/scripts/us-cpa prepare --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe
skills/us-cpa/scripts/us-cpa review --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe
skills/us-cpa/scripts/us-cpa fetch-year --tax-year 2025
skills/us-cpa/scripts/us-cpa extract-docs --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe --create-case --case-label "Jane Doe" --facts-json ./facts.json
skills/us-cpa/scripts/us-cpa render-forms --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe
skills/us-cpa/scripts/us-cpa export-efile-ready --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe
```

## Tax-Year Cache

Default cache root:

```text
~/.cache/us-cpa
```

Override for isolated runs:

```bash
US_CPA_CACHE_DIR=/tmp/us-cpa-cache skills/us-cpa/scripts/us-cpa fetch-year --tax-year 2025
```

The current `fetch-year` bootstrap corpus for tax year `2025` is verified against live IRS `irs-prior` PDFs for:

- Form 1040
- Schedules 1, 2, 3, A, B, C, D, E, SE, and 8812
- Forms 8949, 4562, 4797, 6251, 8606, 8863, 8889, 8959, 8960, 8995, 8995-A, 5329, 5695, and 1116
- General Form 1040 instructions and selected schedule/form instructions

Current bundled tax-year computation data:

- 2024
- 2025

Other years fetch/source correctly, but deterministic return calculations currently stop with an explicit unsupported-year error until rate tables are added.

Adding a new supported year is a deliberate data-table change in `tax_years.py`, not an automatic runtime discovery step. That is intentional for tax-engine correctness.
## Interaction Model

- `question`
  - stateless by default
  - optional case context
- `prepare`
  - requires a case directory
  - if none exists, OpenClaw should ask whether to create one and where
- `review`
  - requires a case directory
  - can operate on an existing or newly created review case

## Planned Case Layout

```text
<case-dir>/
  input/
  extracted/
  return/
  output/
  reports/
  issues/
  sources/
```

The current implementation writes:

- `case-manifest.json`
- `extracted/facts.json`
- `issues/open-issues.json`

## Intake Flow

The current `extract-docs` command supports:

- `--create-case`
- `--case-label`
- `--facts-json <path>`
- repeated `--input-file <path>`

Behavior:

- creates the full case directory layout when `--create-case` is used
- copies input documents into `input/`
- stores normalized facts with source metadata in `extracted/facts.json`
- extracts machine-usable facts from JSON/text/PDF documents where supported
- appends document registry entries to `case-manifest.json`
- stops with a structured issue and non-zero exit if a new fact conflicts with an existing stored fact
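The conflict-stop rule above can be sketched as a pure check; the fact shape (`key`, `value`, `source`) is an illustrative assumption, not the package's actual data model, and the sketch is in JavaScript for illustration even though `us-cpa` is Python:

```javascript
// Illustrative fact-conflict check: a newly extracted fact conflicts when a
// stored fact has the same key but a different value.
function findConflict(storedFacts, newFact) {
  const existing = storedFacts.find((f) => f.key === newFact.key);
  if (existing && existing.value !== newFact.value) {
    return {
      key: newFact.key,
      stored: existing.value,
      incoming: newFact.value,
      source: newFact.source,
    };
  }
  return null; // no conflict: new key, or the same value restated
}

const stored = [{ key: "wages", value: 85000, source: "w2-employer-a.pdf" }];
console.log(findConflict(stored, { key: "wages", value: 91000, source: "facts.json" }));
// → a structured conflict object; the CLI would record it and exit non-zero
```

A restated fact with the same value is not a conflict, so re-registering the same document stays idempotent.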
## Output Contract

- JSON by default
- markdown available with `--format markdown`
- `question` supports:
  - `--style conversation`
  - `--style memo`
- `question` emits answered analysis output
- `prepare` emits a prepared return package summary
- `export-efile-ready` emits a draft e-file-ready payload
- `review` emits a findings-first review result
- `fetch-year` emits the downloaded manifest location and source count

## Question Engine

The current `question` implementation:

- loads the cached tax-year corpus
- searches a small IRS-first topical rule set
- returns one canonical analysis object
- renders that analysis as:
  - conversational output
  - memo output
- marks questions outside the current topical rule set as requiring primary-law escalation

Currently implemented topics:

- standard deduction
- Schedule C / sole proprietorship reporting trigger
- Schedule D / capital gains reporting trigger
- Schedule E / rental income reporting trigger

## Form Rendering

Current rendering path:

- official IRS PDFs from the cached tax-year corpus
- deterministic field-fill when usable AcroForm fields are present
- overlay rendering onto those official PDFs using `reportlab` + `pypdf` as a fallback
- artifact manifest written to `output/artifacts.json`

Current rendered form support:

- field-fill support for known mapped fillable forms
- overlay generation for the current required-form set resolved by the return model

Current review rule:

- field-filled artifacts are not automatically flagged for review
- overlay-rendered artifacts are marked `reviewRequired: true`

Overlay coordinates are currently a fallback heuristic and are not treated as line-perfect authoritative field maps. Overlay output must be visually reviewed before any filing/export handoff.

## Preparation Workflow

The current `prepare` implementation:

- loads case facts from `extracted/facts.json`
- normalizes them into the current supported federal return model
- preserves source provenance for normalized values
- computes the current supported 1040 package
- resolves required forms across the current supported subset
- writes:
  - `return/normalized-return.json`
  - `output/artifacts.json`
  - `reports/prepare-summary.json`

Current supported calculation inputs:

- `filingStatus`
- `spouse.fullName`
- `dependents`
- `wages`
- `taxableInterest`
- `businessIncome`
- `capitalGainLoss`
- `rentalIncome`
- `federalWithholding`
- `itemizedDeductions`
- `hsaContribution`
- `educationCredit`
- `foreignTaxCredit`
- `qualifiedBusinessIncome`
- `traditionalIraBasis`
- `additionalMedicareTax`
- `netInvestmentIncomeTax`
- `alternativeMinimumTax`
- `additionalTaxPenalty`
- `energyCredit`
- `depreciationExpense`
- `section1231GainLoss`

## E-file-ready Export

`export-efile-ready` writes:

- `output/efile-ready.json`

Current export behavior:

- draft-only
- includes required forms
- includes a refund or balance-due summary
- includes an attachment manifest
- includes unresolved issues

## Review Workflow

The current `review` implementation:

- recomputes the return from current case facts
- compares stored normalized return values to recomputed values
- flags source-fact mismatches for key income fields
- flags likely omitted income when document-extracted facts support an amount the stored return omits
- checks whether required rendered artifacts are present
- flags high-complexity forms for specialist follow-up
- flags overlay-rendered artifacts as requiring human review
- sorts findings by severity

Current render modes:

- `--style conversation`
- `--style memo`

## Scope Rules

- U.S. federal individual returns only in v1
- official IRS artifacts are the target output for compiled forms
- conflicting facts must stop the workflow for user resolution

## Authority Ranking

Current authority classes are ranked to preserve the source hierarchy:

- IRS forms
- IRS instructions
- IRS publications
- IRS FAQs
- Internal Revenue Code
- Treasury regulations
- other primary authority

Later research and review flows should consume this ranking rather than inventing their own.
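A consumable form of that ranking is just an ordered map plus a sort; the class identifiers and numeric ranks below are illustrative (and sketched in JavaScript, though `us-cpa` itself is Python):

```javascript
// Illustrative authority ranking: lower rank means higher authority, mirroring
// the documented hierarchy. The identifiers and values are assumptions, not
// the package's actual data model.
const AUTHORITY_RANK = {
  "irs-form": 0,
  "irs-instructions": 1,
  "irs-publication": 2,
  "irs-faq": 3,
  "irc": 4,
  "treasury-regulation": 5,
  "other-primary": 6,
};

function sortByAuthority(sources) {
  return [...sources].sort(
    (a, b) => AUTHORITY_RANK[a.authority] - AUTHORITY_RANK[b.authority]
  );
}

const ordered = sortByAuthority([
  { id: "reg-1.61-1", authority: "treasury-regulation" },
  { id: "f1040", authority: "irs-form" },
  { id: "pub501", authority: "irs-publication" },
]);
console.log(ordered.map((s) => s.id)); // → ["f1040", "pub501", "reg-1.61-1"]
```

Centralizing the map is the point: research and review flows import it instead of re-deriving the order.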
@@ -1,6 +1,6 @@
 # web-automation
 
-Automated web browsing and scraping using Playwright, with one-shot extraction and broader Camoufox-based automation under a single skill.
+Automated web browsing and scraping using Playwright-compatible CloakBrowser, with one-shot extraction and broader persistent automation under a single skill.
 
 ## What this skill is for
 
@@ -20,15 +20,24 @@ Automated web browsing and scraping using Playwright, with one-shot extraction a
 
 - Node.js 20+
 - `pnpm`
-- Network access to download browser binaries
+- Network access to download the CloakBrowser binary on first use or via preinstall
 
 ## First-time setup
 
 ```bash
 cd ~/.openclaw/workspace/skills/web-automation/scripts
 pnpm install
-npx playwright install chromium
-npx camoufox-js fetch
+npx cloakbrowser install
 pnpm approve-builds
 pnpm rebuild better-sqlite3 esbuild
 ```
+
+## Updating CloakBrowser
+
+```bash
+cd ~/.openclaw/workspace/skills/web-automation/scripts
+pnpm up cloakbrowser playwright-core
+npx cloakbrowser install
+pnpm approve-builds
+pnpm rebuild better-sqlite3 esbuild
+```
@@ -48,7 +57,7 @@ pnpm approve-builds
 pnpm rebuild better-sqlite3 esbuild
 ```
 
-Without this, `browse.ts` and `scrape.ts` may fail before launch because the native bindings are missing.
+Without this, helper scripts may fail before launch because the native bindings are missing.
 
 ## Common commands
 
@@ -56,7 +65,7 @@ Without this, `browse.ts` and `scrape.ts` may fail before launch because the nat
 # One-shot JSON extraction
 node skills/web-automation/scripts/extract.js "https://example.com"
 
-# Browse a page
+# Browse a page with persistent profile
 npx tsx browse.ts --url "https://example.com"
 
 # Scrape markdown
@@ -104,6 +113,24 @@ USER_AGENT="Mozilla/5.0 ..." node skills/web-automation/scripts/extract.js "http
 - optional `screenshot`
 - optional `htmlFile`
 
+## Persistent browsing profile
+
+`browse.ts`, `auth.ts`, `flow.ts`, and `scrape.ts` use a persistent CloakBrowser profile so sessions survive across runs.
+
+Canonical env vars:
+
+- `CLOAKBROWSER_PROFILE_PATH`
+- `CLOAKBROWSER_HEADLESS`
+- `CLOAKBROWSER_USERNAME`
+- `CLOAKBROWSER_PASSWORD`
+
+Legacy aliases still supported for compatibility:
+
+- `CAMOUFOX_PROFILE_PATH`
+- `CAMOUFOX_HEADLESS`
+- `CAMOUFOX_USERNAME`
+- `CAMOUFOX_PASSWORD`
+
 ## Natural-language flow runner (`flow.ts`)
 
 Use `flow.ts` when you want a general command style like:
skills/nordvpn-client/SKILL.md (new file, 108 lines)

@@ -0,0 +1,108 @@
---
name: nordvpn-client
description: Use when managing NordVPN on macOS or Linux, including install/bootstrap, login, connect, disconnect, status checks, or verifying a VPN location before running another skill.
---

# NordVPN Client

Cross-platform NordVPN lifecycle management for macOS and Linux hosts.

## Use This Skill For

- probing whether NordVPN automation is ready
- bootstrapping missing backend dependencies
- validating auth
- connecting to a country or city
- verifying the public exit location
- disconnecting and restoring the normal network state

## Command Surface

```bash
node scripts/nordvpn-client.js status
node scripts/nordvpn-client.js install
node scripts/nordvpn-client.js login
node scripts/nordvpn-client.js verify
node scripts/nordvpn-client.js verify --country "Germany"
node scripts/nordvpn-client.js verify --country "Japan" --city "Tokyo"
node scripts/nordvpn-client.js connect --country "Germany"
node scripts/nordvpn-client.js connect --country "Japan" --city "Tokyo"
node scripts/nordvpn-client.js disconnect
node scripts/nordvpn-client.js status --debug
```
## Backend Model
|
||||
|
||||
- Linux:
|
||||
- use the official `nordvpn` CLI
|
||||
- `install` uses the official NordVPN installer
|
||||
- token login is supported
|
||||
- macOS:
|
||||
- use NordLynx/WireGuard through `wireguard-go` and `wireguard-tools`
|
||||
- `install` bootstraps them with Homebrew
|
||||
- `login` validates the token for the WireGuard backend
|
||||
- Tailscale is suspended before connect and resumed after disconnect or failed connect
|
||||
- `NordVPN.app` may remain installed but is only the manual fallback
|
||||
|
||||
## Credentials
|
||||
|
||||
Default OpenClaw credential paths:
|
||||
|
||||
- token: `~/.openclaw/workspace/.clawdbot/credentials/nordvpn/token.txt`
|
||||
- password: `~/.openclaw/workspace/.clawdbot/credentials/nordvpn/password.txt`
|
||||
|
||||
Supported env vars:
|
||||
|
||||
- `NORDVPN_TOKEN`
|
||||
- `NORDVPN_TOKEN_FILE`
|
||||
- `NORDVPN_USERNAME`
|
||||
- `NORDVPN_PASSWORD`
|
||||
- `NORDVPN_PASSWORD_FILE`
|
||||
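One plausible resolution order for these credentials, sketched in shell (an assumption for illustration; the script's actual precedence may differ):

```shell
# Demo only: clear the env so the file-based branches are exercised.
unset NORDVPN_TOKEN NORDVPN_TOKEN_FILE
DEFAULT_TOKEN_FILE="$HOME/.openclaw/workspace/.clawdbot/credentials/nordvpn/token.txt"
if [ -n "${NORDVPN_TOKEN:-}" ]; then
  TOKEN="$NORDVPN_TOKEN"                 # explicit token wins
elif [ -n "${NORDVPN_TOKEN_FILE:-}" ] && [ -f "$NORDVPN_TOKEN_FILE" ]; then
  TOKEN="$(cat "$NORDVPN_TOKEN_FILE")"   # then an explicit token file
elif [ -f "$DEFAULT_TOKEN_FILE" ]; then
  TOKEN="$(cat "$DEFAULT_TOKEN_FILE")"   # then the default OpenClaw path
else
  TOKEN=""
fi
if [ -n "$TOKEN" ]; then echo "token available"; else echo "no token"; fi
```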

## macOS Requirements

Automated macOS connects require all of:

- `wireguard-go`
- `wireguard-tools`
- `NORDVPN_TOKEN` or the default token file
- non-interactive `sudo` for the installed helper script:
  - `~/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh`

Exact `visudo` rule for the installed OpenClaw skill:

```sudoers
stefano ALL=(root) NOPASSWD: /Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh probe, /Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh up, /Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh down
```

## Agent Guidance

- run `status` first when the machine state is unclear
- on macOS, if tooling is missing, run `install`
- if auth is unclear, run `login`
- use `connect` before location-sensitive skills such as `web-automation`
- use `verify` after connect when you need an explicit location check
- use `disconnect` after the follow-up task

## Output Rules

- normal JSON output redacts local path metadata
- use `--debug` only when deeper troubleshooting requires internal local paths and helper/config metadata

## Troubleshooting Cues

- `Invalid authorization header`:
  - the token file exists but the token is invalid; replace the token and rerun `login`
- `sudoReady: false`:
  - the helper is not allowed in sudoers; add the `visudo` rule above
- connect succeeds but the final state looks inconsistent:
  - rely on the verified public IP/location first
  - then inspect `status --debug`
- disconnect should leave:
  - the normal public IP restored
  - no active WireGuard state
  - Tailscale resumed if the skill suspended it

For full operator setup and troubleshooting, see:

- `docs/nordvpn-client.md`
1426
skills/nordvpn-client/scripts/nordvpn-client.js
Normal file
File diff suppressed because it is too large

309
skills/nordvpn-client/scripts/nordvpn-client.test.js
Normal file
@@ -0,0 +1,309 @@
const test = require("node:test");
const assert = require("node:assert/strict");
const fs = require("node:fs");
const path = require("node:path");
const vm = require("node:vm");

function loadInternals() {
  const scriptPath = path.join(__dirname, "nordvpn-client.js");
  const source = fs.readFileSync(scriptPath, "utf8").replace(/\nmain\(\);\s*$/, "\n");
  const wrapped = `${source}
module.exports = {
  buildMacTailscaleState:
    typeof buildMacTailscaleState === "function" ? buildMacTailscaleState : undefined,
  buildWireguardConfig:
    typeof buildWireguardConfig === "function" ? buildWireguardConfig : undefined,
  buildLookupResult:
    typeof buildLookupResult === "function" ? buildLookupResult : undefined,
  cleanupMacWireguardState:
    typeof cleanupMacWireguardState === "function" ? cleanupMacWireguardState : undefined,
  getMacTailscalePath:
    typeof getMacTailscalePath === "function" ? getMacTailscalePath : undefined,
  isMacTailscaleActive:
    typeof isMacTailscaleActive === "function" ? isMacTailscaleActive : undefined,
  normalizeSuccessfulConnectState:
    typeof normalizeSuccessfulConnectState === "function" ? normalizeSuccessfulConnectState : undefined,
  normalizeStatusState:
    typeof normalizeStatusState === "function" ? normalizeStatusState : undefined,
  sanitizeOutputPayload:
    typeof sanitizeOutputPayload === "function" ? sanitizeOutputPayload : undefined,
  shouldAttemptMacWireguardDisconnect:
    typeof shouldAttemptMacWireguardDisconnect === "function" ? shouldAttemptMacWireguardDisconnect : undefined,
  detectMacWireguardActiveFromIfconfig:
    typeof detectMacWireguardActiveFromIfconfig === "function" ? detectMacWireguardActiveFromIfconfig : undefined,
  resolveHostnameWithFallback:
    typeof resolveHostnameWithFallback === "function" ? resolveHostnameWithFallback : undefined,
  verifyConnectionWithRetry:
    typeof verifyConnectionWithRetry === "function" ? verifyConnectionWithRetry : undefined,
};`;

  const sandbox = {
    require,
    module: { exports: {} },
    exports: {},
    __dirname,
    __filename: scriptPath,
    process: { ...process, exit() {} },
    console,
    setTimeout,
    clearTimeout,
    Buffer,
  };

  vm.runInNewContext(wrapped, sandbox, { filename: scriptPath });
  return sandbox.module.exports;
}

test("detectMacWireguardActiveFromIfconfig detects nordvpn utun client address", () => {
  const { detectMacWireguardActiveFromIfconfig } = loadInternals();
  assert.equal(typeof detectMacWireguardActiveFromIfconfig, "function");

  const ifconfig = `
utun8: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
utun9: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1420
\tinet 10.5.0.2 --> 10.5.0.2 netmask 0xff000000
`;

  assert.equal(detectMacWireguardActiveFromIfconfig(ifconfig), true);
  assert.equal(detectMacWireguardActiveFromIfconfig("utun7: flags=8051\n\tinet 100.64.0.4"), false);
});

test("buildLookupResult supports lookup all=true mode", () => {
  const { buildLookupResult } = loadInternals();
  assert.equal(typeof buildLookupResult, "function");
  assert.equal(
    JSON.stringify(buildLookupResult("104.26.9.44", { all: true })),
    JSON.stringify([{ address: "104.26.9.44", family: 4 }])
  );
  assert.equal(JSON.stringify(buildLookupResult("104.26.9.44", { all: false })), JSON.stringify(["104.26.9.44", 4]));
});

test("buildWireguardConfig includes NordVPN DNS for the vanilla macOS config path", () => {
  const { buildWireguardConfig } = loadInternals();
  assert.equal(typeof buildWireguardConfig, "function");

  const config = buildWireguardConfig(
    {
      hostname: "tr73.nordvpn.com",
      ips: [{ ip: { version: 4, ip: "45.89.52.1" } }],
      technologies: [{ identifier: "wireguard_udp", metadata: [{ name: "public_key", value: "PUBKEY" }] }],
    },
    "PRIVATEKEY"
  );

  assert.equal(config.includes("DNS = 103.86.96.100, 103.86.99.100"), true);
  assert.equal(config.includes("AllowedIPs = 0.0.0.0/0"), true);
});

test("getMacTailscalePath falls back to /opt/homebrew/bin/tailscale when PATH lookup is missing", () => {
  const { getMacTailscalePath } = loadInternals();
  assert.equal(typeof getMacTailscalePath, "function");
  assert.equal(
    getMacTailscalePath({
      commandExists: () => "",
      fileExists: (target) => target === "/opt/homebrew/bin/tailscale",
    }),
    "/opt/homebrew/bin/tailscale"
  );
});

test("buildMacTailscaleState records whether tailscale was active", () => {
  const { buildMacTailscaleState } = loadInternals();
  assert.equal(typeof buildMacTailscaleState, "function");
  assert.equal(
    JSON.stringify(buildMacTailscaleState(true)),
    JSON.stringify({
      tailscaleWasActive: true,
    })
  );
});

test("cleanupMacWireguardState removes stale config and last-connection files", () => {
  const { cleanupMacWireguardState } = loadInternals();
  assert.equal(typeof cleanupMacWireguardState, "function");

  const tmpDir = fs.mkdtempSync(path.join(fs.mkdtempSync("/tmp/nordvpn-client-test-"), "state-"));
  const configPath = path.join(tmpDir, "nordvpnctl.conf");
  const lastConnectionPath = path.join(tmpDir, "last-connection.json");
  fs.writeFileSync(configPath, "wireguard-config");
  fs.writeFileSync(lastConnectionPath, "{\"country\":\"Germany\"}");

  const result = cleanupMacWireguardState({
    configPath,
    lastConnectionPath,
  });

  assert.equal(result.cleaned, true);
  assert.equal(fs.existsSync(configPath), false);
  assert.equal(fs.existsSync(lastConnectionPath), false);
});

test("shouldAttemptMacWireguardDisconnect does not trust active=false when residual state exists", () => {
  const { shouldAttemptMacWireguardDisconnect } = loadInternals();
  assert.equal(typeof shouldAttemptMacWireguardDisconnect, "function");

  assert.equal(
    shouldAttemptMacWireguardDisconnect({
      active: false,
      configPath: "/Users/stefano/.nordvpn-client/wireguard/nordvpnctl.conf",
      endpoint: null,
      lastConnection: null,
    }),
    true
  );

  assert.equal(
    shouldAttemptMacWireguardDisconnect({
      active: false,
      configPath: null,
      endpoint: null,
      lastConnection: { country: "Italy" },
    }),
    true
  );

  assert.equal(
    shouldAttemptMacWireguardDisconnect({
      active: false,
      configPath: null,
      endpoint: null,
      lastConnection: null,
    }),
    false
  );
});

test("normalizeSuccessfulConnectState marks the connect snapshot active after verified macOS wireguard connect", () => {
  const { normalizeSuccessfulConnectState } = loadInternals();
  assert.equal(typeof normalizeSuccessfulConnectState, "function");

  const state = normalizeSuccessfulConnectState(
    {
      platform: "darwin",
      controlMode: "wireguard",
      connected: false,
      wireguard: {
        active: false,
        endpoint: null,
      },
    },
    {
      backend: "wireguard",
      server: {
        hostname: "de1227.nordvpn.com",
      },
    },
    {
      ok: true,
      ipInfo: {
        country: "Germany",
      },
    }
  );

  assert.equal(state.connected, true);
  assert.equal(state.wireguard.active, true);
  assert.equal(state.wireguard.endpoint, "de1227.nordvpn.com:51820");
});

test("normalizeStatusState marks macOS wireguard connected when public IP matches the last successful target", () => {
  const { normalizeStatusState } = loadInternals();
  assert.equal(typeof normalizeStatusState, "function");

  const state = normalizeStatusState({
    platform: "darwin",
    controlMode: "wireguard",
    connected: false,
    wireguard: {
      active: false,
      endpoint: "tr73.nordvpn.com:51820",
      lastConnection: {
        requestedTarget: { country: "Turkey", city: "" },
        resolvedTarget: { country: "Turkey", city: "Istanbul" },
      },
    },
    publicIp: {
      ok: true,
      country: "Turkey",
      city: "Istanbul",
    },
  });

  assert.equal(state.connected, true);
  assert.equal(state.wireguard.active, true);
});

test("sanitizeOutputPayload redacts local path metadata from normal JSON output", () => {
  const { sanitizeOutputPayload } = loadInternals();
  assert.equal(typeof sanitizeOutputPayload, "function");

  const sanitized = sanitizeOutputPayload({
    cliPath: "/opt/homebrew/bin/nordvpn",
    appPath: "/Applications/NordVPN.app",
    wireguard: {
      configPath: "/Users/stefano/.nordvpn-client/wireguard/nordvpnctl.conf",
      helperPath: "/Users/stefano/.openclaw/workspace/skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh",
      authCache: {
        tokenSource: "default:/Users/stefano/.openclaw/workspace/.clawdbot/credentials/nordvpn/token.txt",
      },
      endpoint: "jp454.nordvpn.com:51820",
    },
  });

  assert.equal(sanitized.cliPath, null);
  assert.equal(sanitized.appPath, null);
  assert.equal(sanitized.wireguard.configPath, null);
  assert.equal(sanitized.wireguard.helperPath, null);
  assert.equal(sanitized.wireguard.authCache.tokenSource, null);
  assert.equal(sanitized.wireguard.endpoint, "jp454.nordvpn.com:51820");
});

test("isMacTailscaleActive treats Running backend as active", () => {
  const { isMacTailscaleActive } = loadInternals();
  assert.equal(typeof isMacTailscaleActive, "function");
  assert.equal(isMacTailscaleActive({ BackendState: "Running" }), true);
  assert.equal(isMacTailscaleActive({ BackendState: "Stopped" }), false);
});

test("verifyConnectionWithRetry retries transient reachability failures", async () => {
  const { verifyConnectionWithRetry } = loadInternals();
  assert.equal(typeof verifyConnectionWithRetry, "function");

  let attempts = 0;
  const result = await verifyConnectionWithRetry(
    { country: "Italy", city: "Milan" },
    {
      attempts: 3,
      delayMs: 1,
      getPublicIpInfo: async () => {
        attempts += 1;
        if (attempts === 1) {
          return { ok: false, error: "read EHOSTUNREACH" };
        }
        return { ok: true, country: "Italy", city: "Milan" };
      },
    }
  );

  assert.equal(result.ok, true);
  assert.equal(result.ipInfo.country, "Italy");
  assert.equal(attempts, 2);
});

test("resolveHostnameWithFallback uses fallback resolvers when system lookup fails", async () => {
  const { resolveHostnameWithFallback } = loadInternals();
  assert.equal(typeof resolveHostnameWithFallback, "function");

  const calls = [];
  const address = await resolveHostnameWithFallback("ipapi.co", {
    resolvers: ["1.1.1.1", "8.8.8.8"],
    resolveWithResolver: async (hostname, resolver) => {
      calls.push(`${resolver}:${hostname}`);
      if (resolver === "1.1.1.1") return [];
      return ["104.26.9.44"];
    },
  });

  assert.equal(address, "104.26.9.44");
  assert.deepEqual(calls, ["1.1.1.1:ipapi.co", "8.8.8.8:ipapi.co"]);
});
24
skills/nordvpn-client/scripts/nordvpn-wireguard-helper.sh
Executable file
@@ -0,0 +1,24 @@
#!/bin/sh
set -eu

ACTION="${1:-}"
case "$ACTION" in
  probe|up|down)
    ;;
  *)
    echo "Usage: nordvpn-wireguard-helper.sh [probe|up|down]" >&2
    exit 2
    ;;
esac

WG_QUICK="/opt/homebrew/bin/wg-quick"
WG_CONFIG="/Users/stefano/.nordvpn-client/wireguard/nordvpnctl.conf"
PATH="/opt/homebrew/bin:/usr/bin:/bin:/usr/sbin:/sbin"
export PATH

if [ "$ACTION" = "probe" ]; then
  test -x "$WG_QUICK"
  exit 0
fi

exec "$WG_QUICK" "$ACTION" "$WG_CONFIG"
80
skills/us-cpa/README.md
Normal file
@@ -0,0 +1,80 @@
# us-cpa package

Standalone Python CLI package for the `us-cpa` skill.

## Install

From `skills/us-cpa/`:

```bash
pip install -e .[dev]
```

## OpenClaw installation

Install the skill into the OpenClaw workspace copy, not only in the repo checkout.

1. Sync the skill into the workspace:

   ```bash
   rsync -a --delete \
     ~/.openclaw/workspace/projects/stef-openclaw-skills/skills/us-cpa/ \
     ~/.openclaw/workspace/skills/us-cpa/
   ```

2. Create a skill-local virtualenv in the workspace copy:

   ```bash
   cd ~/.openclaw/workspace/skills/us-cpa
   python3 -m venv .venv
   . .venv/bin/activate
   pip install -e .[dev]
   ```

3. Run the workspace wrapper:

   ```bash
   ~/.openclaw/workspace/skills/us-cpa/scripts/us-cpa --help
   ```

The wrapper prefers `~/.openclaw/workspace/skills/us-cpa/.venv/bin/python` when present and falls back to `python3` otherwise.
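The interpreter selection can be sketched like this (a minimal sketch of the behavior described above; the authoritative logic lives in `scripts/us-cpa`):

```shell
# Prefer the skill-local venv interpreter when it exists and is executable;
# otherwise fall back to whatever python3 is on PATH.
PYTHON_BIN="$HOME/.openclaw/workspace/skills/us-cpa/.venv/bin/python"
if [ ! -x "$PYTHON_BIN" ]; then
  PYTHON_BIN="python3"
fi
echo "$PYTHON_BIN"
```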

## Run

Installed entry point:

```bash
us-cpa --help
```

Repo-local wrapper without installation:

```bash
scripts/us-cpa --help
```

OpenClaw workspace wrapper:

```bash
~/.openclaw/workspace/skills/us-cpa/scripts/us-cpa --help
```

Module execution:

```bash
python3 -m us_cpa.cli --help
```

## Tests

From `skills/us-cpa/`:

```bash
PYTHONPATH=src python3 -m unittest
```

Or with the dev extra installed:

```bash
python -m unittest
```
72
skills/us-cpa/SKILL.md
Normal file
@@ -0,0 +1,72 @@
---
name: us-cpa
description: Use when answering U.S. federal individual tax questions, preparing a federal Form 1040 return package, or reviewing a draft/completed federal individual return.
---

# US CPA

`us-cpa` is a Python-first federal individual tax workflow skill. The CLI is the canonical engine. Use the skill to classify the request, gather missing inputs, and invoke the CLI.

## Modes

- `question`
  - one-off federal tax question
  - case folder optional
- `prepare`
  - new or existing return-preparation case
  - case folder required
- `review`
  - new or existing return-review case
  - case folder required

## Agent Workflow

1. Determine whether the request is:
   - question-only
   - a new preparation/review case
   - work on an existing case
2. If the request is `prepare` or `review` and no case folder is supplied:
   - ask whether to create a new case
   - ask where to store it
3. Use the bundled CLI:

```bash
skills/us-cpa/scripts/us-cpa question --question "What is the standard deduction?" --tax-year 2025
skills/us-cpa/scripts/us-cpa question --question "What is the standard deduction?" --tax-year 2025 --style memo --format markdown
skills/us-cpa/scripts/us-cpa prepare --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe
skills/us-cpa/scripts/us-cpa export-efile-ready --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe
skills/us-cpa/scripts/us-cpa review --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe
skills/us-cpa/scripts/us-cpa review --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe --style memo --format markdown
skills/us-cpa/scripts/us-cpa extract-docs --tax-year 2025 --case-dir ~/tax-cases/2025-jane-doe --create-case --case-label "Jane Doe" --facts-json ./facts.json
```

When OpenClaw is using the installed workspace copy, the entrypoint is:

```bash
~/.openclaw/workspace/skills/us-cpa/scripts/us-cpa --help
```

## Rules

- federal individual returns only in v1
- IRS materials first; escalate to primary law only when needed
- stop on conflicting facts and ask the user to resolve the issue before continuing
- official IRS PDFs are the target compiled-form artifacts
- deterministic field-fill is the preferred render path when the official PDF exposes usable fields
- overlay-rendered forms are the fallback and must be flagged for human review

## Output

- JSON by default
- markdown output available with `--format markdown`
- `question` supports `--style conversation|memo`
- `fetch-year` downloads the bootstrap IRS form/instruction corpus into `~/.cache/us-cpa` by default
  - override the cache root with `US_CPA_CACHE_DIR` when you need an isolated run or fixture generation
- `extract-docs` creates or opens a case, registers documents, stores facts, extracts machine-usable facts from JSON/text/PDF sources where possible, and stops with a structured issue if facts conflict
- `question` currently has explicit IRS-first answers for standard deduction, Schedule C, Schedule D, and Schedule E questions; other questions escalate to primary-law research with official IRC/regulation URLs
- rendered form artifacts prefer fillable-field output when possible and otherwise fall back to overlay output
- `prepare` computes the current supported federal 1040 package, preserves fact provenance in the normalized return, and writes normalized return/artifact/report files into the case directory
- `export-efile-ready` writes a draft transmission-ready payload without transmitting anything
- `review` recomputes the return from case facts, checks artifacts, flags source-fact mismatches and likely omissions, and returns findings-first output in conversation or memo style

For operator details, limitations, and the planned case structure, see `docs/us-cpa.md`.
27
skills/us-cpa/pyproject.toml
Normal file
@@ -0,0 +1,27 @@
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

[project]
name = "us-cpa"
version = "0.1.0"
description = "US federal individual tax workflow CLI for questions, preparation, and review."
requires-python = ">=3.9"
dependencies = [
    "pypdf>=5.0.0",
    "reportlab>=4.0.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0.0",
]

[project.scripts]
us-cpa = "us_cpa.cli:main"

[tool.setuptools]
package-dir = {"" = "src"}

[tool.setuptools.packages.find]
where = ["src"]
13
skills/us-cpa/scripts/us-cpa
Executable file
@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILL_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
PYTHON_BIN="${SKILL_DIR}/.venv/bin/python"
export PYTHONPATH="${SKILL_DIR}/src${PYTHONPATH:+:${PYTHONPATH}}"

if [[ ! -x "${PYTHON_BIN}" ]]; then
  PYTHON_BIN="python3"
fi

exec "${PYTHON_BIN}" -m us_cpa.cli "$@"
2
skills/us-cpa/src/us_cpa/__init__.py
Normal file
@@ -0,0 +1,2 @@
"""us-cpa package."""
202
skills/us-cpa/src/us_cpa/cases.py
Normal file
@@ -0,0 +1,202 @@
from __future__ import annotations

import hashlib
import json
import shutil
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Any

from us_cpa.document_extractors import extract_document_facts


CASE_SUBDIRECTORIES = (
    "input",
    "extracted",
    "return",
    "output",
    "reports",
    "issues",
    "sources",
)


def _timestamp() -> str:
    return datetime.now(timezone.utc).isoformat()


def _sha256_path(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


class CaseConflictError(Exception):
    def __init__(self, issue: dict[str, Any]) -> None:
        super().__init__(issue["message"])
        self.issue = issue


@dataclass
class CaseManager:
    case_dir: Path

    def __post_init__(self) -> None:
        self.case_dir = self.case_dir.expanduser().resolve()

    @property
    def manifest_path(self) -> Path:
        return self.case_dir / "case-manifest.json"

    @property
    def facts_path(self) -> Path:
        return self.case_dir / "extracted" / "facts.json"

    @property
    def issues_path(self) -> Path:
        return self.case_dir / "issues" / "open-issues.json"

    def create_case(self, *, case_label: str, tax_year: int) -> dict[str, Any]:
        self.case_dir.mkdir(parents=True, exist_ok=True)
        for name in CASE_SUBDIRECTORIES:
            (self.case_dir / name).mkdir(exist_ok=True)

        manifest = {
            "caseLabel": case_label,
            "taxYear": tax_year,
            "createdAt": _timestamp(),
            "updatedAt": _timestamp(),
            "status": "open",
            "documents": [],
        }
        self.manifest_path.write_text(json.dumps(manifest, indent=2))
        if not self.facts_path.exists():
            self.facts_path.write_text(json.dumps({"facts": {}}, indent=2))
        if not self.issues_path.exists():
            self.issues_path.write_text(json.dumps({"issues": []}, indent=2))
        return manifest

    def load_manifest(self) -> dict[str, Any]:
        return json.loads(self.manifest_path.read_text())

    def _load_facts(self) -> dict[str, Any]:
        return json.loads(self.facts_path.read_text())

    def _write_manifest(self, manifest: dict[str, Any]) -> None:
        manifest["updatedAt"] = _timestamp()
        self.manifest_path.write_text(json.dumps(manifest, indent=2))

    def _write_facts(self, facts: dict[str, Any]) -> None:
        self.facts_path.write_text(json.dumps(facts, indent=2))

    def _write_issue(self, issue: dict[str, Any]) -> None:
        current = json.loads(self.issues_path.read_text())
        current["issues"].append(issue)
        self.issues_path.write_text(json.dumps(current, indent=2))

    def _record_fact(
        self,
        facts_payload: dict[str, Any],
        *,
        field: str,
        value: Any,
        source_type: str,
        source_name: str,
        tax_year: int,
    ) -> None:
        existing = facts_payload["facts"].get(field)
        if existing and existing["value"] != value:
            issue = {
                "status": "needs_resolution",
                "issueType": "fact_conflict",
                "field": field,
                "existingValue": existing["value"],
                "newValue": value,
                "message": f"Conflicting values for {field}. Resolve before continuing.",
                "createdAt": _timestamp(),
                "taxYear": tax_year,
            }
            self._write_issue(issue)
            raise CaseConflictError(issue)

        captured_at = _timestamp()
        source_entry = {
            "sourceType": source_type,
            "sourceName": source_name,
            "capturedAt": captured_at,
        }
        if existing:
            existing["sources"].append(source_entry)
            return

        facts_payload["facts"][field] = {
            "value": value,
            "sourceType": source_type,
            "capturedAt": captured_at,
            "sources": [source_entry],
        }

    def intake(
        self,
        *,
        tax_year: int,
        user_facts: dict[str, Any],
        document_paths: list[Path],
    ) -> dict[str, Any]:
        manifest = self.load_manifest()
        if manifest["taxYear"] != tax_year:
            raise ValueError(
                f"Case tax year {manifest['taxYear']} does not match requested tax year {tax_year}."
            )

        registered_documents = []
        for source_path in document_paths:
            source_path = source_path.expanduser().resolve()
            destination = self.case_dir / "input" / source_path.name
            shutil.copy2(source_path, destination)
            document_entry = {
                "name": source_path.name,
                "sourcePath": str(source_path),
                "storedPath": str(destination),
                "sha256": _sha256_path(destination),
                "registeredAt": _timestamp(),
            }
            manifest["documents"].append(document_entry)
            registered_documents.append(document_entry)

        facts_payload = self._load_facts()
        for document_entry in registered_documents:
            extracted = extract_document_facts(Path(document_entry["storedPath"]))
            document_entry["extractedFacts"] = extracted
            for field, value in extracted.items():
                self._record_fact(
                    facts_payload,
                    field=field,
                    value=value,
                    source_type="document_extract",
                    source_name=document_entry["name"],
                    tax_year=tax_year,
                )

        for field, value in user_facts.items():
            self._record_fact(
                facts_payload,
                field=field,
                value=value,
                source_type="user_statement",
                source_name="interactive-intake",
                tax_year=tax_year,
            )

        self._write_manifest(manifest)
        self._write_facts(facts_payload)
        return {
            "status": "accepted",
            "caseDir": str(self.case_dir),
            "taxYear": tax_year,
            "registeredDocuments": registered_documents,
            "factCount": len(facts_payload["facts"]),
        }
243
skills/us-cpa/src/us_cpa/cli.py
Normal file
@@ -0,0 +1,243 @@
from __future__ import annotations

import argparse
import json
import sys
from pathlib import Path
from typing import Any

from us_cpa.cases import CaseConflictError, CaseManager
from us_cpa.prepare import EfileExporter, PrepareEngine, render_case_forms
from us_cpa.questions import QuestionEngine, render_analysis, render_memo
from us_cpa.review import ReviewEngine, render_review_memo, render_review_summary
from us_cpa.sources import TaxYearCorpus, bootstrap_irs_catalog

COMMANDS = (
    "question",
    "prepare",
    "review",
    "fetch-year",
    "extract-docs",
    "render-forms",
    "export-efile-ready",
)


def _add_common_arguments(
    parser: argparse.ArgumentParser, *, include_tax_year: bool = True
) -> None:
    if include_tax_year:
        parser.add_argument("--tax-year", type=int, default=None)
    parser.add_argument("--case-dir", default=None)
    parser.add_argument("--format", choices=("json", "markdown"), default="json")


def _emit(payload: dict[str, Any], output_format: str) -> int:
    if output_format == "markdown":
        lines = [f"# {payload['command']}"]
        for key, value in payload.items():
            if key == "command":
                continue
            lines.append(f"- **{key}**: {value}")
        print("\n".join(lines))
    else:
        print(json.dumps(payload, indent=2))
    return 0


def _require_case_dir(args: argparse.Namespace) -> Path:
    if not args.case_dir:
        raise SystemExit("A case directory is required for this command.")
    return Path(args.case_dir).expanduser().resolve()


def _load_json_file(path_value: str | None) -> dict[str, Any]:
    if not path_value:
        return {}
    return json.loads(Path(path_value).expanduser().resolve().read_text())


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="us-cpa",
        description="US federal individual tax workflow CLI.",
    )
    subparsers = parser.add_subparsers(dest="command", required=True)

    question = subparsers.add_parser("question", help="Answer a tax question.")
    _add_common_arguments(question)
    question.add_argument("--question", required=True)
    question.add_argument("--style", choices=("conversation", "memo"), default="conversation")

    prepare = subparsers.add_parser("prepare", help="Prepare a return case.")
    _add_common_arguments(prepare)

    review = subparsers.add_parser("review", help="Review a return case.")
    _add_common_arguments(review)
    review.add_argument("--style", choices=("conversation", "memo"), default="conversation")

    fetch_year = subparsers.add_parser(
        "fetch-year", help="Fetch tax-year forms and instructions."
    )
    _add_common_arguments(fetch_year, include_tax_year=False)
    fetch_year.add_argument("--tax-year", type=int, required=True)

    extract_docs = subparsers.add_parser(
        "extract-docs", help="Extract facts from case documents."
    )
    _add_common_arguments(extract_docs)
    extract_docs.add_argument("--create-case", action="store_true")
    extract_docs.add_argument("--case-label")
    extract_docs.add_argument("--facts-json")
    extract_docs.add_argument("--input-file", action="append", default=[])

    render_forms = subparsers.add_parser(
        "render-forms", help="Render compiled IRS forms."
    )
    _add_common_arguments(render_forms)

    export_efile = subparsers.add_parser(
        "export-efile-ready", help="Export an e-file-ready payload."
    )
    _add_common_arguments(export_efile)

    return parser


def main(argv: list[str] | None = None) -> int:
    parser = build_parser()
    args = parser.parse_args(argv)

    if args.command == "question":
        corpus = TaxYearCorpus()
        engine = QuestionEngine(corpus=corpus)
        case_facts: dict[str, Any] = {}
        if args.case_dir:
            manager = CaseManager(Path(args.case_dir))
            if manager.facts_path.exists():
                case_facts = {
                    key: value["value"]
                    for key, value in json.loads(manager.facts_path.read_text())["facts"].items()
                }
        analysis = engine.answer(
            question=args.question,
            tax_year=args.tax_year,
            case_facts=case_facts,
        )
        payload = {
            "command": "question",
            "format": args.format,
            "style": args.style,
            "taxYear": args.tax_year,
            "caseDir": args.case_dir,
            "question": args.question,
            "status": "answered",
            "analysis": analysis,
        }
        payload["rendered"] = (
            render_memo(analysis) if args.style == "memo" else render_analysis(analysis)
        )
        if args.format == "markdown":
            print(payload["rendered"])
            return 0
        return _emit(payload, args.format)

    if args.command == "extract-docs":
        case_dir = _require_case_dir(args)
        manager = CaseManager(case_dir)
        if args.create_case:
            if not args.case_label:
                raise SystemExit("--case-label is required when --create-case is used.")
            manager.create_case(case_label=args.case_label, tax_year=args.tax_year)
        elif not manager.manifest_path.exists():
            raise SystemExit("Case manifest not found. Use --create-case for a new case.")

        try:
            result = manager.intake(
                tax_year=args.tax_year,
                user_facts=_load_json_file(args.facts_json),
                document_paths=[
                    Path(path_value).expanduser().resolve() for path_value in args.input_file
                ],
            )
        except CaseConflictError as exc:
            print(json.dumps(exc.issue, indent=2))
            return 1
        payload = {
            "command": args.command,
            "format": args.format,
            **result,
        }
        return _emit(payload, args.format)

    if args.command == "prepare":
        case_dir = _require_case_dir(args)
        payload = {
            "command": args.command,
            "format": args.format,
            **PrepareEngine().prepare_case(case_dir),
        }
        return _emit(payload, args.format)

    if args.command == "render-forms":
        case_dir = _require_case_dir(args)
        manager = CaseManager(case_dir)
        manifest = manager.load_manifest()
        normalized = json.loads((case_dir / "return" / "normalized-return.json").read_text())
        artifacts = render_case_forms(case_dir, TaxYearCorpus(), normalized)
        payload = {
            "command": "render-forms",
            "format": args.format,
            "taxYear": manifest["taxYear"],
            "status": "rendered",
            **artifacts,
        }
        return _emit(payload, args.format)

    if args.command == "export-efile-ready":
        case_dir = _require_case_dir(args)
        payload = {
            "command": "export-efile-ready",
            "format": args.format,
            **EfileExporter().export_case(case_dir),
        }
        return _emit(payload, args.format)

    if args.command == "review":
        case_dir = _require_case_dir(args)
        review_payload = ReviewEngine().review_case(case_dir)
        payload = {
            "command": "review",
            "format": args.format,
            "style": args.style,
            **review_payload,
        }
        payload["rendered"] = (
            render_review_memo(review_payload)
            if args.style == "memo"
            else render_review_summary(review_payload)
        )
        if args.format == "markdown":
            print(payload["rendered"])
            return 0
        return _emit(payload, args.format)

    if args.command == "fetch-year":
        corpus = TaxYearCorpus()
        manifest = corpus.download_catalog(args.tax_year, bootstrap_irs_catalog(args.tax_year))
        payload = {
            "command": "fetch-year",
            "format": args.format,
            "taxYear": args.tax_year,
            "status": "downloaded",
            "sourceCount": manifest["sourceCount"],
            "manifestPath": corpus.paths_for_year(args.tax_year).manifest_path.as_posix(),
        }
        return _emit(payload, args.format)

    parser.error(f"Unsupported command: {args.command}")
    return 2


if __name__ == "__main__":
    sys.exit(main())
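The markdown branch of `_emit` above is simple enough to sketch standalone. `render_markdown` below is a hypothetical helper, not part of the package: it reproduces the same heading-plus-bullets layout, with the `command` key becoming the heading and every other key a bullet.

```python
from typing import Any


def render_markdown(payload: dict[str, Any]) -> str:
    # Mirrors the markdown branch of _emit: "# <command>" heading,
    # then one "- **key**: value" bullet per remaining key.
    lines = [f"# {payload['command']}"]
    for key, value in payload.items():
        if key == "command":
            continue
        lines.append(f"- **{key}**: {value}")
    return "\n".join(lines)


print(render_markdown({"command": "prepare", "status": "prepared", "taxYear": 2025}))
```

Because the same payload dict feeds both branches, the JSON and markdown outputs always carry identical keys.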
74 skills/us-cpa/src/us_cpa/document_extractors.py Normal file
@@ -0,0 +1,74 @@
from __future__ import annotations

import json
import re
from pathlib import Path
from typing import Any

from pypdf import PdfReader


_NUMBER = r"(-?\d+(?:,\d{3})*(?:\.\d+)?)"


def _parse_number(raw: str) -> float:
    return float(raw.replace(",", ""))


def _extract_text(path: Path) -> str:
    suffix = path.suffix.lower()
    if suffix in {".txt", ".md"}:
        return path.read_text()
    if suffix == ".pdf":
        reader = PdfReader(str(path))
        return "\n".join((page.extract_text() or "") for page in reader.pages)
    return ""


def _facts_from_text(text: str) -> dict[str, Any]:
    extracted: dict[str, Any] = {}

    if match := re.search(r"Employee:\s*(.+)", text):
        extracted["taxpayer.fullName"] = match.group(1).strip()
    if match := re.search(r"Recipient:\s*(.+)", text):
        extracted.setdefault("taxpayer.fullName", match.group(1).strip())
    if match := re.search(r"Box 1 Wages, tips, other compensation\s+" + _NUMBER, text, re.I):
        extracted["wages"] = _parse_number(match.group(1))
    if match := re.search(r"Box 2 Federal income tax withheld\s+" + _NUMBER, text, re.I):
        extracted["federalWithholding"] = _parse_number(match.group(1))
    if match := re.search(r"Box 16 State wages, tips, etc\.\s+" + _NUMBER, text, re.I):
        extracted["stateWages"] = _parse_number(match.group(1))
    if match := re.search(r"Box 17 State income tax\s+" + _NUMBER, text, re.I):
        extracted["stateWithholding"] = _parse_number(match.group(1))
    if match := re.search(r"Box 3 Social security wages\s+" + _NUMBER, text, re.I):
        extracted["socialSecurityWages"] = _parse_number(match.group(1))
    if match := re.search(r"Box 5 Medicare wages and tips\s+" + _NUMBER, text, re.I):
        extracted["medicareWages"] = _parse_number(match.group(1))
    if match := re.search(r"Box 1 Interest Income\s+" + _NUMBER, text, re.I):
        extracted["taxableInterest"] = _parse_number(match.group(1))
    if match := re.search(r"Box 1a Total ordinary dividends\s+" + _NUMBER, text, re.I):
        extracted["ordinaryDividends"] = _parse_number(match.group(1))
    if match := re.search(r"Box 1 Gross distribution\s+" + _NUMBER, text, re.I):
        extracted["retirementDistribution"] = _parse_number(match.group(1))
    if match := re.search(r"Box 3 Other income\s+" + _NUMBER, text, re.I):
        extracted["otherIncome"] = _parse_number(match.group(1))
    if match := re.search(r"Net profit(?: or loss)?\s+" + _NUMBER, text, re.I):
        extracted["businessIncome"] = _parse_number(match.group(1))
    if match := re.search(r"Adjusted gross income\s+" + _NUMBER, text, re.I):
        extracted["priorYear.adjustedGrossIncome"] = _parse_number(match.group(1))
    if match := re.search(r"Taxable income\s+" + _NUMBER, text, re.I):
        extracted["priorYear.taxableIncome"] = _parse_number(match.group(1))
    if match := re.search(r"Refund\s+" + _NUMBER, text, re.I):
        extracted["priorYear.refund"] = _parse_number(match.group(1))

    return extracted


def extract_document_facts(path: Path) -> dict[str, Any]:
    suffix = path.suffix.lower()
    if suffix == ".json":
        payload = json.loads(path.read_text())
        if isinstance(payload, dict):
            return payload
        return {}
    return _facts_from_text(_extract_text(path))
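Every box-label rule above shares the `_NUMBER` pattern, which accepts an optional sign, comma-grouped thousands, and an optional decimal part. A minimal standalone sketch of the Box 1 wage extraction, where the sample W-2 text and the `parse_number` helper are illustrative and not taken from the package:

```python
import re

# The same number pattern the extractor uses: optional sign,
# comma-grouped thousands, optional decimal part.
_NUMBER = r"(-?\d+(?:,\d{3})*(?:\.\d+)?)"


def parse_number(raw: str) -> float:
    # Strip thousands separators before converting to float.
    return float(raw.replace(",", ""))


# Hypothetical W-2 text, as the PDF extractor might emit it.
sample = (
    "Employee: Jane Example\n"
    "Box 1 Wages, tips, other compensation 52,340.25\n"
    "Box 2 Federal income tax withheld 6,100.00\n"
)

wages = None
if match := re.search(r"Box 1 Wages, tips, other compensation\s+" + _NUMBER, sample, re.I):
    wages = parse_number(match.group(1))
```

Here `wages` ends up as `52340.25`; a document missing the label simply leaves the fact unset, which is why `_facts_from_text` returns only the keys it actually matched.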
79 skills/us-cpa/src/us_cpa/prepare.py Normal file
@@ -0,0 +1,79 @@
from __future__ import annotations

import json
from pathlib import Path
from typing import Any

from us_cpa.cases import CaseManager
from us_cpa.renderers import render_case_forms
from us_cpa.returns import normalize_case_facts
from us_cpa.sources import TaxYearCorpus


def _load_case_facts(case_dir: Path) -> dict[str, Any]:
    facts_path = case_dir / "extracted" / "facts.json"
    payload = json.loads(facts_path.read_text())
    facts = {key: value["value"] for key, value in payload["facts"].items()}
    facts["_factMetadata"] = {
        key: {"sources": value.get("sources", [])} for key, value in payload["facts"].items()
    }
    return facts


class PrepareEngine:
    def __init__(self, *, corpus: TaxYearCorpus | None = None) -> None:
        self.corpus = corpus or TaxYearCorpus()

    def prepare_case(self, case_dir: Path) -> dict[str, Any]:
        manager = CaseManager(case_dir)
        manifest = manager.load_manifest()
        facts = _load_case_facts(manager.case_dir)
        normalized = normalize_case_facts(facts, manifest["taxYear"])
        normalized_path = manager.case_dir / "return" / "normalized-return.json"
        normalized_path.write_text(json.dumps(normalized, indent=2))

        artifacts = render_case_forms(manager.case_dir, self.corpus, normalized)
        unresolved_issues = json.loads(manager.issues_path.read_text())["issues"]

        summary = {
            "requiredForms": normalized["requiredForms"],
            "reviewRequiredArtifacts": [
                artifact["formCode"]
                for artifact in artifacts["artifacts"]
                if artifact["reviewRequired"]
            ],
            "refund": normalized["totals"]["refund"],
            "balanceDue": normalized["totals"]["balanceDue"],
            "unresolvedIssueCount": len(unresolved_issues),
        }
        result = {
            "status": "prepared",
            "caseDir": str(manager.case_dir),
            "taxYear": manifest["taxYear"],
            "normalizedReturnPath": str(normalized_path),
            "artifactManifestPath": str(manager.case_dir / "output" / "artifacts.json"),
            "summary": summary,
        }
        (manager.case_dir / "reports" / "prepare-summary.json").write_text(
            json.dumps(result, indent=2)
        )
        return result


class EfileExporter:
    def export_case(self, case_dir: Path) -> dict[str, Any]:
        case_dir = Path(case_dir).expanduser().resolve()
        normalized = json.loads((case_dir / "return" / "normalized-return.json").read_text())
        artifacts = json.loads((case_dir / "output" / "artifacts.json").read_text())
        issues = json.loads((case_dir / "issues" / "open-issues.json").read_text())["issues"]
        payload = {
            "status": (
                "draft"
                if issues or any(a["reviewRequired"] for a in artifacts["artifacts"])
                else "ready"
            ),
            "taxYear": normalized["taxYear"],
            "returnSummary": {
                "requiredForms": normalized["requiredForms"],
                "refund": normalized["totals"]["refund"],
                "balanceDue": normalized["totals"]["balanceDue"],
            },
            "attachments": artifacts["artifacts"],
            "unresolvedIssues": issues,
        }
        output_path = case_dir / "output" / "efile-ready.json"
        output_path.write_text(json.dumps(payload, indent=2))
        return payload
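The draft-versus-ready decision in `EfileExporter.export_case` above reduces to a single predicate: any open issue, or any artifact still flagged for review, keeps the export at "draft". `export_status` below is a hypothetical standalone restatement of that rule:

```python
from typing import Any


def export_status(issues: list[Any], artifacts: list[dict[str, Any]]) -> str:
    # Mirrors the status rule in EfileExporter.export_case: an export is
    # "ready" only when no issues remain open AND no rendered artifact
    # still requires human review.
    if issues or any(artifact["reviewRequired"] for artifact in artifacts):
        return "draft"
    return "ready"
```

Keeping the rule this strict means a coordinate-overlay render (which always sets `reviewRequired`) can never silently produce a "ready" e-file payload.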
202 skills/us-cpa/src/us_cpa/questions.py Normal file
@@ -0,0 +1,202 @@
from __future__ import annotations

import json
from dataclasses import dataclass
from pathlib import Path
from typing import Any

from us_cpa.sources import TaxYearCorpus, build_primary_law_authorities


TOPIC_RULES = [
    {
        "issue": "standard_deduction",
        "keywords": ("standard deduction",),
        "authority_slugs": ("i1040gi",),
        "answer_by_status": {
            "single": "$15,750",
            "married_filing_jointly": "$31,500",
            "head_of_household": "$23,625",
        },
        "summary_template": "{filing_status_label} filers use a {answer} standard deduction for tax year {tax_year}.",
        "confidence": "high",
    },
    {
        "issue": "schedule_c_required",
        "keywords": ("schedule c", "sole proprietor", "self-employment"),
        "authority_slugs": ("f1040sc", "i1040sc"),
        "answer": "Schedule C is generally required when a taxpayer reports sole proprietorship business income or expenses.",
        "summary": "Business income and expenses from a sole proprietorship generally belong on Schedule C.",
        "confidence": "medium",
    },
    {
        "issue": "schedule_d_required",
        "keywords": ("schedule d", "capital gains"),
        "authority_slugs": ("f1040sd", "i1040sd", "f8949", "i8949"),
        "answer": "Schedule D is generally required when a taxpayer reports capital gains or losses, often alongside Form 8949.",
        "summary": "Capital gains and losses generally flow through Schedule D, with Form 8949 supporting detail when required.",
        "confidence": "medium",
    },
    {
        "issue": "schedule_e_required",
        "keywords": ("schedule e", "rental income"),
        "authority_slugs": ("f1040se", "i1040se"),
        "answer": "Schedule E is generally required when a taxpayer reports rental real-estate income or expenses.",
        "summary": "Rental income and expenses generally belong on Schedule E.",
        "confidence": "medium",
    },
]


RISK_BY_CONFIDENCE = {
    "high": "low",
    "medium": "medium",
    "low": "high",
}


def _normalize_question(question: str) -> str:
    return question.strip().lower()


def _filing_status_label(status: str) -> str:
    return status.replace("_", " ").title()


@dataclass
class QuestionEngine:
    corpus: TaxYearCorpus

    def _manifest(self, tax_year: int) -> dict[str, Any]:
        path = self.corpus.paths_for_year(tax_year).manifest_path
        if not path.exists():
            raise FileNotFoundError(
                f"Tax year {tax_year} corpus not found at {path}. Run fetch-year first."
            )
        return json.loads(path.read_text())

    def _authorities_for(self, manifest: dict[str, Any], slugs: tuple[str, ...]) -> list[dict[str, Any]]:
        found = []
        sources = {item["slug"]: item for item in manifest["sources"]}
        for slug in slugs:
            if slug in sources:
                source = sources[slug]
                found.append(
                    {
                        "slug": source["slug"],
                        "title": source["title"],
                        "sourceClass": source["sourceClass"],
                        "url": source["url"],
                        "localPath": source["localPath"],
                        "authorityRank": source["authorityRank"],
                    }
                )
        return found

    def answer(self, *, question: str, tax_year: int, case_facts: dict[str, Any]) -> dict[str, Any]:
        manifest = self._manifest(tax_year)
        normalized = _normalize_question(question)
        facts_used = [{"field": key, "value": value} for key, value in sorted(case_facts.items())]

        for rule in TOPIC_RULES:
            if all(keyword in normalized for keyword in rule["keywords"]):
                authorities = self._authorities_for(manifest, rule["authority_slugs"])
                if rule["issue"] == "standard_deduction":
                    filing_status = case_facts.get("filingStatus", "single")
                    answer = rule["answer_by_status"].get(filing_status, rule["answer_by_status"]["single"])
                    summary = rule["summary_template"].format(
                        filing_status_label=_filing_status_label(filing_status),
                        answer=answer,
                        tax_year=tax_year,
                    )
                else:
                    answer = rule["answer"]
                    summary = rule["summary"]

                return {
                    "issue": rule["issue"],
                    "taxYear": tax_year,
                    "factsUsed": facts_used,
                    "missingFacts": [],
                    "authorities": authorities,
                    "conclusion": {"answer": answer, "summary": summary},
                    "confidence": rule["confidence"],
                    "riskLevel": RISK_BY_CONFIDENCE[rule["confidence"]],
                    "followUpQuestions": [],
                    "primaryLawRequired": False,
                }

        return {
            "issue": "requires_primary_law_escalation",
            "taxYear": tax_year,
            "factsUsed": facts_used,
            "missingFacts": [
                "Internal Revenue Code or Treasury regulation analysis is required before answering this question confidently."
            ],
            "authorities": build_primary_law_authorities(question),
            "conclusion": {
                "answer": "Insufficient IRS-form and instruction support for a confident answer.",
                "summary": "This question needs primary-law analysis before a reliable answer can be given.",
            },
            "confidence": "low",
            "riskLevel": "high",
            "followUpQuestions": [
                "What facts drive the section-level issue?",
                "Is there an existing return position or drafted treatment to review?",
            ],
            "primaryLawRequired": True,
        }


def render_analysis(analysis: dict[str, Any]) -> str:
    lines = [analysis["conclusion"]["summary"]]
    lines.append(
        f"Confidence: {analysis['confidence']}. Risk: {analysis['riskLevel']}."
    )
    if analysis["factsUsed"]:
        facts = ", ".join(f"{item['field']}={item['value']}" for item in analysis["factsUsed"])
        lines.append(f"Facts used: {facts}.")
    if analysis["authorities"]:
        titles = "; ".join(item["title"] for item in analysis["authorities"])
        lines.append(f"Authorities: {titles}.")
    if analysis["missingFacts"]:
        lines.append(f"Open items: {' '.join(analysis['missingFacts'])}")
    return " ".join(lines)


def render_memo(analysis: dict[str, Any]) -> str:
    lines = [
        "# Tax Memo",
        "",
        f"## Issue\n{analysis['issue']}",
        "",
        "## Facts",
    ]
    if analysis["factsUsed"]:
        for item in analysis["factsUsed"]:
            lines.append(f"- {item['field']}: {item['value']}")
    else:
        lines.append("- No case-specific facts supplied.")
    lines.extend(["", "## Authorities"])
    if analysis["authorities"]:
        for authority in analysis["authorities"]:
            lines.append(f"- {authority['title']}")
    else:
        lines.append("- Primary-law escalation required.")
    lines.extend(
        [
            "",
            "## Analysis",
            analysis["conclusion"]["summary"],
            f"Confidence: {analysis['confidence']}",
            f"Risk level: {analysis['riskLevel']}",
            "",
            "## Conclusion",
            analysis["conclusion"]["answer"],
        ]
    )
    if analysis["missingFacts"]:
        lines.extend(["", "## Open Items"])
        for item in analysis["missingFacts"]:
            lines.append(f"- {item}")
    return "\n".join(lines)
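Rule matching in `QuestionEngine.answer` above is conjunctive: every keyword in a rule's tuple must appear as a substring of the lower-cased question, and the first rule that matches wins. A self-contained sketch of that check, where the `matches` helper and sample questions are illustrative:

```python
def matches(question: str, keywords: tuple[str, ...]) -> bool:
    # Same test QuestionEngine.answer applies per rule: normalize the
    # question, then require EVERY keyword to appear in it.
    normalized = question.strip().lower()
    return all(keyword in normalized for keyword in keywords)


# The standard-deduction rule needs only one keyword...
single_kw = matches("What is the Standard Deduction for 2025?", ("standard deduction",))
# ...while a multi-keyword rule fails unless all terms are present.
multi_kw = matches("Do I need Schedule C?", ("schedule c", "sole proprietor", "self-employment"))
```

Because unmatched questions fall through to the `requires_primary_law_escalation` payload, a multi-keyword rule like the Schedule C one is deliberately hard to trigger, trading recall for confidence.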
120 skills/us-cpa/src/us_cpa/renderers.py Normal file
@@ -0,0 +1,120 @@
from __future__ import annotations

import json
from io import BytesIO
from pathlib import Path
from typing import Any

from pypdf import PdfReader, PdfWriter
from reportlab.pdfgen import canvas

from us_cpa.sources import TaxYearCorpus


FORM_TEMPLATES = {
    "f1040": "f1040",
    "f1040sb": "f1040sb",
    "f1040sc": "f1040sc",
    "f1040se": "f1040se",
    "f1040s1": "f1040s1",
}


OVERLAY_FIELDS = {
    "f1040": [
        (72, 725, lambda data: f"Taxpayer: {data['taxpayer']['fullName']}"),
        (72, 705, lambda data: f"Filing status: {data['filingStatus']}"),
        (72, 685, lambda data: f"Wages: {data['income']['wages']:.2f}"),
        (72, 665, lambda data: f"Taxable interest: {data['income']['taxableInterest']:.2f}"),
        (72, 645, lambda data: f"AGI: {data['totals']['adjustedGrossIncome']:.2f}"),
        (72, 625, lambda data: f"Standard deduction: {data['deductions']['standardDeduction']:.2f}"),
        (72, 605, lambda data: f"Taxable income: {data['totals']['taxableIncome']:.2f}"),
        (72, 585, lambda data: f"Total tax: {data['taxes']['totalTax']:.2f}"),
        (72, 565, lambda data: f"Withholding: {data['payments']['federalWithholding']:.2f}"),
        (72, 545, lambda data: f"Refund: {data['totals']['refund']:.2f}"),
        (72, 525, lambda data: f"Balance due: {data['totals']['balanceDue']:.2f}"),
    ],
}


FIELD_FILL_VALUES = {
    "f1040": lambda data: {
        "taxpayer_full_name": data["taxpayer"]["fullName"],
        "filing_status": data["filingStatus"],
        "wages": f"{data['income']['wages']:.2f}",
        "taxable_interest": f"{data['income']['taxableInterest']:.2f}",
    }
}


def _field_fill_page(template_path: Path, output_path: Path, form_code: str, normalized: dict[str, Any]) -> bool:
    reader = PdfReader(str(template_path))
    fields = reader.get_fields() or {}
    values = FIELD_FILL_VALUES.get(form_code, lambda _: {})(normalized)
    matched = {key: value for key, value in values.items() if key in fields}
    if not matched:
        return False

    writer = PdfWriter(clone_from=str(template_path))
    writer.update_page_form_field_values(writer.pages[0], matched, auto_regenerate=False)
    writer.set_need_appearances_writer()
    with output_path.open("wb") as handle:
        writer.write(handle)
    return True


def _overlay_page(template_path: Path, output_path: Path, form_code: str, normalized: dict[str, Any]) -> None:
    reader = PdfReader(str(template_path))
    writer = PdfWriter(clone_from=str(template_path))

    page = writer.pages[0]
    width = float(page.mediabox.width)
    height = float(page.mediabox.height)
    buffer = BytesIO()
    pdf = canvas.Canvas(buffer, pagesize=(width, height))
    for x, y, getter in OVERLAY_FIELDS.get(form_code, []):
        pdf.drawString(x, y, getter(normalized))
    pdf.save()
    buffer.seek(0)
    overlay = PdfReader(buffer)
    page.merge_page(overlay.pages[0])
    with output_path.open("wb") as handle:
        writer.write(handle)


def render_case_forms(case_dir: Path, corpus: TaxYearCorpus, normalized: dict[str, Any]) -> dict[str, Any]:
    output_dir = case_dir / "output" / "forms"
    output_dir.mkdir(parents=True, exist_ok=True)
    irs_dir = corpus.paths_for_year(normalized["taxYear"]).irs_dir

    artifacts = []
    for form_code in normalized["requiredForms"]:
        template_slug = FORM_TEMPLATES.get(form_code)
        if template_slug is None:
            continue
        template_path = irs_dir / f"{template_slug}.pdf"
        output_path = output_dir / f"{form_code}.pdf"
        render_method = "overlay"
        review_required = True
        if _field_fill_page(template_path, output_path, form_code, normalized):
            render_method = "field_fill"
            review_required = False
        else:
            _overlay_page(template_path, output_path, form_code, normalized)
        artifacts.append(
            {
                "formCode": form_code,
                "templatePath": str(template_path),
                "outputPath": str(output_path),
                "renderMethod": render_method,
                "reviewRequired": review_required,
            }
        )

    artifact_manifest = {
        "taxYear": normalized["taxYear"],
        "artifactCount": len(artifacts),
        "artifacts": artifacts,
    }
    (case_dir / "output" / "artifacts.json").write_text(json.dumps(artifact_manifest, indent=2))
    return artifact_manifest
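`_field_fill_page` above only attempts AcroForm filling when at least one prepared value's key matches a real field name in the template; an empty intersection makes it return `False`, sending the caller down the coordinate-overlay path. `matched_fields` below is a hypothetical pure-dict restatement of that gate, with no PDF dependency:

```python
def matched_fields(template_fields: dict, values: dict[str, str]) -> dict[str, str]:
    # Mirrors the intersection step in _field_fill_page: keep only the
    # prepared values whose key exists as a form field in the template.
    # An empty result means "fall back to the coordinate overlay".
    return {key: value for key, value in values.items() if key in template_fields}


# Hypothetical template field names and prepared values.
hit = matched_fields({"wages": None, "filing_status": None}, {"wages": "100.00", "bonus": "5.00"})
miss = matched_fields({}, {"wages": "100.00"})
```

This is why `reviewRequired` tracks the render method: a field-fill render writes into named fields the IRS template defines, while an overlay only stamps text at guessed coordinates and so always needs human review.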
194 skills/us-cpa/src/us_cpa/returns.py Normal file
@@ -0,0 +1,194 @@
from __future__ import annotations

from typing import Any

from us_cpa.tax_years import tax_year_rules


def _as_float(value: Any) -> float:
    if value in (None, ""):
        return 0.0
    return float(value)


def _fact_metadata(facts: dict[str, Any]) -> dict[str, Any]:
    return facts.get("_factMetadata", {})


def _provenance_for(field: str, metadata: dict[str, Any]) -> dict[str, Any]:
    entry = metadata.get(field, {})
    return {"sources": list(entry.get("sources", []))}


def tax_on_ordinary_income(amount: float, filing_status: str, tax_year: int) -> float:
    taxable = max(0.0, amount)
    brackets = tax_year_rules(tax_year)["ordinaryIncomeBrackets"][filing_status]
    lower = 0.0
    tax = 0.0
    for upper, rate in brackets:
        if taxable <= lower:
            break
        portion = min(taxable, upper) - lower
        tax += portion * rate
        lower = upper
    return round(tax, 2)


def resolve_required_forms(normalized: dict[str, Any]) -> list[str]:
    forms = ["f1040"]
    if normalized["income"]["taxableInterest"] > 1500:
        forms.append("f1040sb")
    if normalized["income"]["businessIncome"] != 0:
        forms.extend(["f1040sc", "f1040sse", "f1040s1", "f8995"])
    if normalized["income"]["capitalGainLoss"] != 0:
        forms.extend(["f1040sd", "f8949"])
    if normalized["income"]["rentalIncome"] != 0:
        forms.extend(["f1040se", "f1040s1"])
    if normalized["deductions"]["deductionType"] == "itemized":
        forms.append("f1040sa")
    if normalized["adjustments"]["hsaContribution"] != 0:
        forms.append("f8889")
    if normalized["credits"]["educationCredit"] != 0:
        forms.append("f8863")
    if normalized["credits"]["foreignTaxCredit"] != 0:
        forms.append("f1116")
    if normalized["business"]["qualifiedBusinessIncome"] != 0 and "f8995" not in forms:
        forms.append("f8995")
    if normalized["basis"]["traditionalIraBasis"] != 0:
        forms.append("f8606")
    if normalized["taxes"]["additionalMedicareTax"] != 0:
        forms.append("f8959")
    if normalized["taxes"]["netInvestmentIncomeTax"] != 0:
        forms.append("f8960")
    if normalized["taxes"]["alternativeMinimumTax"] != 0:
        forms.append("f6251")
    if normalized["taxes"]["additionalTaxPenalty"] != 0:
        forms.append("f5329")
    if normalized["credits"]["energyCredit"] != 0:
        forms.append("f5695")
    if normalized["depreciation"]["depreciationExpense"] != 0:
        forms.append("f4562")
    if normalized["assetSales"]["section1231GainLoss"] != 0:
        forms.append("f4797")
    return list(dict.fromkeys(forms))


def normalize_case_facts(facts: dict[str, Any], tax_year: int) -> dict[str, Any]:
    rules = tax_year_rules(tax_year)
    metadata = _fact_metadata(facts)
    filing_status = facts.get("filingStatus", "single")
    wages = _as_float(facts.get("wages"))
    interest = _as_float(facts.get("taxableInterest"))
    business_income = _as_float(facts.get("businessIncome"))
    capital_gain_loss = _as_float(facts.get("capitalGainLoss"))
    rental_income = _as_float(facts.get("rentalIncome"))
    withholding = _as_float(facts.get("federalWithholding"))
    itemized_deductions = _as_float(facts.get("itemizedDeductions"))
    hsa_contribution = _as_float(facts.get("hsaContribution"))
    education_credit = _as_float(facts.get("educationCredit"))
    foreign_tax_credit = _as_float(facts.get("foreignTaxCredit"))
    qualified_business_income = _as_float(facts.get("qualifiedBusinessIncome"))
    traditional_ira_basis = _as_float(facts.get("traditionalIraBasis"))
    additional_medicare_tax = _as_float(facts.get("additionalMedicareTax"))
    net_investment_income_tax = _as_float(facts.get("netInvestmentIncomeTax"))
    alternative_minimum_tax = _as_float(facts.get("alternativeMinimumTax"))
    additional_tax_penalty = _as_float(facts.get("additionalTaxPenalty"))
    energy_credit = _as_float(facts.get("energyCredit"))
    depreciation_expense = _as_float(facts.get("depreciationExpense"))
    section1231_gain_loss = _as_float(facts.get("section1231GainLoss"))

    adjusted_gross_income = wages + interest + business_income + capital_gain_loss + rental_income
    standard_deduction = rules["standardDeduction"][filing_status]
    deduction_type = "itemized" if itemized_deductions > standard_deduction else "standard"
    deduction_amount = itemized_deductions if deduction_type == "itemized" else standard_deduction
    taxable_income = max(0.0, adjusted_gross_income - deduction_amount)
    income_tax = tax_on_ordinary_income(taxable_income, filing_status, tax_year)
    self_employment_tax = round(max(0.0, business_income) * 0.9235 * 0.153, 2)
    total_tax = round(
        income_tax
        + self_employment_tax
        + additional_medicare_tax
        + net_investment_income_tax
        + alternative_minimum_tax
        + additional_tax_penalty,
        2,
    )
    total_payments = withholding
    total_credits = round(education_credit + foreign_tax_credit + energy_credit, 2)
    refund = round(max(0.0, total_payments + total_credits - total_tax), 2)
    balance_due = round(max(0.0, total_tax - total_payments - total_credits), 2)

    normalized = {
        "taxYear": tax_year,
        "taxpayer": {
            "fullName": facts.get("taxpayer.fullName", "Unknown Taxpayer"),
        },
        "spouse": {
            "fullName": facts.get("spouse.fullName", ""),
        },
        "dependents": list(facts.get("dependents", [])),
        "filingStatus": filing_status,
        "income": {
            "wages": wages,
            "taxableInterest": interest,
            "businessIncome": business_income,
            "capitalGainLoss": capital_gain_loss,
            "rentalIncome": rental_income,
        },
        "adjustments": {
            "hsaContribution": hsa_contribution,
        },
        "payments": {
            "federalWithholding": withholding,
        },
        "deductions": {
            "standardDeduction": standard_deduction,
            "itemizedDeductions": itemized_deductions,
            "deductionType": deduction_type,
            "deductionAmount": deduction_amount,
        },
        "credits": {
            "educationCredit": education_credit,
            "foreignTaxCredit": foreign_tax_credit,
            "energyCredit": energy_credit,
        },
        "taxes": {
            "incomeTax": income_tax,
            "selfEmploymentTax": self_employment_tax,
            "additionalMedicareTax": additional_medicare_tax,
            "netInvestmentIncomeTax": net_investment_income_tax,
            "alternativeMinimumTax": alternative_minimum_tax,
            "additionalTaxPenalty": additional_tax_penalty,
            "totalTax": total_tax,
        },
        "business": {
            "qualifiedBusinessIncome": qualified_business_income,
        },
||||
"basis": {
|
||||
"traditionalIraBasis": traditional_ira_basis,
|
||||
},
|
||||
"depreciation": {
|
||||
"depreciationExpense": depreciation_expense,
|
||||
},
|
||||
"assetSales": {
|
||||
"section1231GainLoss": section1231_gain_loss,
|
||||
},
|
||||
"totals": {
|
||||
"adjustedGrossIncome": round(adjusted_gross_income, 2),
|
||||
"taxableIncome": round(taxable_income, 2),
|
||||
"totalPayments": round(total_payments, 2),
|
||||
"totalCredits": total_credits,
|
||||
"refund": refund,
|
||||
"balanceDue": balance_due,
|
||||
},
|
||||
"provenance": {
|
||||
"income.wages": _provenance_for("wages", metadata),
|
||||
"income.taxableInterest": _provenance_for("taxableInterest", metadata),
|
||||
"income.businessIncome": _provenance_for("businessIncome", metadata),
|
||||
"income.capitalGainLoss": _provenance_for("capitalGainLoss", metadata),
|
||||
"income.rentalIncome": _provenance_for("rentalIncome", metadata),
|
||||
"payments.federalWithholding": _provenance_for("federalWithholding", metadata),
|
||||
},
|
||||
}
|
||||
normalized["requiredForms"] = resolve_required_forms(normalized)
|
||||
return normalized
|
||||
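The self-employment tax line above applies the 92.35% net-earnings factor and the combined 15.3% SE rate to net business profit, rounding to cents. A standalone sketch of that arithmetic (it ignores the Social Security wage-base cap, exactly as the line above does, so it only holds for modest profit amounts):

```python
def self_employment_tax(business_income: float) -> float:
    """Simplified SE tax: 92.35% of net profit times the combined 15.3% rate.

    Mirrors the single-line computation in normalize_case_facts; losses
    produce 0.0 because of the max(0.0, ...) clamp.
    """
    return round(max(0.0, business_income) * 0.9235 * 0.153, 2)


print(self_employment_tax(12000.0))  # → 1695.55
print(self_employment_tax(-500.0))   # → 0.0
```

This matches the `schedule-c-2025.json` fixture's 12,000 of business income.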
162 skills/us-cpa/src/us_cpa/review.py Normal file
@@ -0,0 +1,162 @@
from __future__ import annotations

import json
from pathlib import Path
from typing import Any

from us_cpa.returns import normalize_case_facts
from us_cpa.sources import TaxYearCorpus


def _severity_rank(severity: str) -> int:
    return {"high": 0, "medium": 1, "low": 2}[severity]


class ReviewEngine:
    def __init__(self, *, corpus: TaxYearCorpus | None = None) -> None:
        self.corpus = corpus or TaxYearCorpus()

    def review_case(self, case_dir: Path) -> dict[str, Any]:
        case_dir = Path(case_dir).expanduser().resolve()
        manifest = json.loads((case_dir / "case-manifest.json").read_text())
        stored_return = json.loads((case_dir / "return" / "normalized-return.json").read_text())
        facts_payload = json.loads((case_dir / "extracted" / "facts.json").read_text())
        facts = {key: value["value"] for key, value in facts_payload["facts"].items()}
        facts["_factMetadata"] = {
            key: {"sources": value.get("sources", [])} for key, value in facts_payload["facts"].items()
        }
        recomputed = normalize_case_facts(facts, manifest["taxYear"])
        artifacts_payload = json.loads((case_dir / "output" / "artifacts.json").read_text())

        findings: list[dict[str, Any]] = []
        if stored_return["totals"]["adjustedGrossIncome"] != recomputed["totals"]["adjustedGrossIncome"]:
            findings.append(
                {
                    "severity": "high",
                    "title": "Adjusted gross income mismatch",
                    "explanation": "Stored adjusted gross income does not match the recomputed return from case facts.",
                    "suggestedAction": f"Update AGI to {recomputed['totals']['adjustedGrossIncome']:.2f} on Form 1040 line 11.",
                    "authorities": [
                        {"title": "Instructions for Form 1040 and Schedules 1-3", "sourceClass": "irs_instructions"}
                    ],
                }
            )

        for field, label in (
            ("wages", "wages"),
            ("taxableInterest", "taxable interest"),
            ("businessIncome", "business income"),
            ("capitalGainLoss", "capital gains or losses"),
            ("rentalIncome", "rental income"),
        ):
            stored_value = stored_return["income"].get(field, 0.0)
            recomputed_value = recomputed["income"].get(field, 0.0)
            sources = recomputed.get("provenance", {}).get(f"income.{field}", {}).get("sources", [])
            has_document_source = any(item.get("sourceType") == "document_extract" for item in sources)
            if stored_value != recomputed_value:
                findings.append(
                    {
                        "severity": "high" if has_document_source else "medium",
                        "title": f"Source fact mismatch for {label}",
                        "explanation": f"Stored return reports {stored_value:.2f} for {label}, but case facts support {recomputed_value:.2f}.",
                        "suggestedAction": f"Reconcile {label} to {recomputed_value:.2f} before treating the return as final.",
                        "authorities": [
                            {"title": "Case fact registry", "sourceClass": "irs_form"}
                        ],
                    }
                )
            if stored_value == 0 and recomputed_value > 0 and has_document_source:
                findings.append(
                    {
                        "severity": "high",
                        "title": f"Likely omitted {label}",
                        "explanation": f"Document-extracted facts support {recomputed_value:.2f} of {label}, but the stored return reports none.",
                        "suggestedAction": f"Add {label} to the return and regenerate the required forms.",
                        "authorities": [
                            {"title": "Case document extraction", "sourceClass": "irs_form"}
                        ],
                    }
                )

        rendered_forms = {artifact["formCode"] for artifact in artifacts_payload["artifacts"]}
        for required_form in recomputed["requiredForms"]:
            if required_form not in rendered_forms:
                findings.append(
                    {
                        "severity": "high",
                        "title": f"Missing rendered artifact for {required_form}",
                        "explanation": "The return requires this form, but no rendered artifact is present in the artifact manifest.",
                        "suggestedAction": f"Render and review {required_form} before treating the package as complete.",
                        "authorities": [{"title": "Supported form manifest", "sourceClass": "irs_form"}],
                    }
                )

        for artifact in artifacts_payload["artifacts"]:
            if artifact.get("reviewRequired"):
                findings.append(
                    {
                        "severity": "medium",
                        "title": f"Human review required for {artifact['formCode']}",
                        "explanation": "The form was overlay-rendered on the official IRS PDF and must be reviewed before filing.",
                        "suggestedAction": f"Review the rendered {artifact['formCode']} artifact visually before any filing/export handoff.",
                        "authorities": [{"title": "Artifact render policy", "sourceClass": "irs_form"}],
                    }
                )

        required_forms_union = set(recomputed["requiredForms"]) | set(stored_return.get("requiredForms", []))
        if any(form in required_forms_union for form in ("f6251", "f8960", "f8959", "f1116")):
            findings.append(
                {
                    "severity": "medium",
                    "title": "High-complexity tax position requires specialist follow-up",
                    "explanation": "The return includes forms or computations that usually require deeper technical support and careful authority review.",
                    "suggestedAction": "Review the supporting authority and computations for the high-complexity forms before treating the return as filing-ready.",
                    "authorities": [{"title": "Required form analysis", "sourceClass": "irs_instructions"}],
                }
            )

        findings.sort(key=lambda item: (_severity_rank(item["severity"]), item["title"]))
        review = {
            "status": "reviewed",
            "taxYear": manifest["taxYear"],
            "caseDir": str(case_dir),
            "findingCount": len(findings),
            "findings": findings,
        }
        (case_dir / "reports" / "review-report.json").write_text(json.dumps(review, indent=2))
        return review


def render_review_summary(review: dict[str, Any]) -> str:
    if not review["findings"]:
        return "No findings detected in the reviewed return package."
    lines = ["Review findings:"]
    for finding in review["findings"]:
        lines.append(f"- [{finding['severity'].upper()}] {finding['title']}: {finding['explanation']}")
    return "\n".join(lines)


def render_review_memo(review: dict[str, Any]) -> str:
    lines = ["# Review Memo", ""]
    if not review["findings"]:
        lines.append("No findings detected.")
        return "\n".join(lines)
    for index, finding in enumerate(review["findings"], start=1):
        lines.extend(
            [
                f"## Finding {index}: {finding['title']}",
                f"Severity: {finding['severity']}",
                "",
                "### Explanation",
                finding["explanation"],
                "",
                "### Suggested correction",
                finding["suggestedAction"],
                "",
                "### Authorities",
            ]
        )
        for authority in finding["authorities"]:
            lines.append(f"- {authority['title']}")
        lines.append("")
    return "\n".join(lines).rstrip()
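`review_case` orders findings by `_severity_rank` and then alphabetically by title before writing the report. A minimal standalone sketch of that sort, with `_severity_rank` copied from the module and made-up finding dicts for illustration:

```python
def _severity_rank(severity: str) -> int:
    # Lower rank sorts first: high < medium < low.
    return {"high": 0, "medium": 1, "low": 2}[severity]


findings = [
    {"severity": "low", "title": "B", "explanation": "minor note"},
    {"severity": "high", "title": "C", "explanation": "omitted income"},
    {"severity": "high", "title": "A", "explanation": "AGI mismatch"},
]
findings.sort(key=lambda item: (_severity_rank(item["severity"]), item["title"]))

# High-severity findings surface first; ties break alphabetically by title.
print([f["title"] for f in findings])  # → ['A', 'C', 'B']
```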
238 skills/us-cpa/src/us_cpa/sources.py Normal file
@@ -0,0 +1,238 @@
from __future__ import annotations

import hashlib
import json
import os
import re
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import IntEnum
from pathlib import Path
from typing import Callable
from urllib.request import urlopen


class AuthorityRank(IntEnum):
    IRS_FORM = 10
    IRS_INSTRUCTIONS = 20
    IRS_PUBLICATION = 30
    IRS_FAQ = 40
    INTERNAL_REVENUE_CODE = 100
    TREASURY_REGULATION = 110
    OTHER_PRIMARY_AUTHORITY = 120


AUTHORITY_RANKS: dict[str, AuthorityRank] = {
    "irs_form": AuthorityRank.IRS_FORM,
    "irs_instructions": AuthorityRank.IRS_INSTRUCTIONS,
    "irs_publication": AuthorityRank.IRS_PUBLICATION,
    "irs_faq": AuthorityRank.IRS_FAQ,
    "internal_revenue_code": AuthorityRank.INTERNAL_REVENUE_CODE,
    "treasury_regulation": AuthorityRank.TREASURY_REGULATION,
    "other_primary_authority": AuthorityRank.OTHER_PRIMARY_AUTHORITY,
}


def authority_rank_for(source_class: str) -> AuthorityRank:
    return AUTHORITY_RANKS[source_class]


@dataclass(frozen=True)
class SourceDescriptor:
    slug: str
    title: str
    source_class: str
    media_type: str
    url: str


@dataclass(frozen=True)
class TaxYearPaths:
    year_dir: Path
    irs_dir: Path
    manifest_path: Path


def default_cache_root() -> Path:
    override = os.getenv("US_CPA_CACHE_DIR")
    if override:
        return Path(override).expanduser().resolve()
    return (Path.home() / ".cache" / "us-cpa").resolve()


def build_irs_prior_pdf_url(slug: str, tax_year: int) -> str:
    return f"https://www.irs.gov/pub/irs-prior/{slug}--{tax_year}.pdf"


def build_primary_law_authorities(question: str) -> list[dict[str, str | int]]:
    authorities: list[dict[str, str | int]] = []
    normalized = question.lower()

    for match in re.finditer(r"(?:section|sec\.)\s+(\d+[a-z0-9-]*)", normalized):
        section = match.group(1)
        authorities.append(
            {
                "slug": f"irc-{section}",
                "title": f"Internal Revenue Code section {section}",
                "sourceClass": "internal_revenue_code",
                "url": f"https://uscode.house.gov/view.xhtml?req=granuleid:USC-prelim-title26-section{section}&num=0&edition=prelim",
                "authorityRank": int(AuthorityRank.INTERNAL_REVENUE_CODE),
            }
        )

    for match in re.finditer(r"(?:treas(?:ury)?\.?\s+reg(?:ulation)?\.?\s*)([\d.]+-\d+)", normalized):
        section = match.group(1)
        authorities.append(
            {
                "slug": f"reg-{section}",
                "title": f"Treasury Regulation {section}",
                "sourceClass": "treasury_regulation",
                "url": f"https://www.ecfr.gov/current/title-26/section-{section}",
                "authorityRank": int(AuthorityRank.TREASURY_REGULATION),
            }
        )

    return authorities


def bootstrap_irs_catalog(tax_year: int) -> list[SourceDescriptor]:
    entries = [
        ("f1040", "Form 1040", "irs_form"),
        ("f1040s1", "Schedule 1 (Form 1040)", "irs_form"),
        ("f1040s2", "Schedule 2 (Form 1040)", "irs_form"),
        ("f1040s3", "Schedule 3 (Form 1040)", "irs_form"),
        ("f1040sa", "Schedule A (Form 1040)", "irs_form"),
        ("f1040sb", "Schedule B (Form 1040)", "irs_form"),
        ("f1040sc", "Schedule C (Form 1040)", "irs_form"),
        ("f1040sd", "Schedule D (Form 1040)", "irs_form"),
        ("f1040se", "Schedule E (Form 1040)", "irs_form"),
        ("f1040sse", "Schedule SE (Form 1040)", "irs_form"),
        ("f1040s8", "Schedule 8812 (Form 1040)", "irs_form"),
        ("f8949", "Form 8949", "irs_form"),
        ("f4562", "Form 4562", "irs_form"),
        ("f4797", "Form 4797", "irs_form"),
        ("f6251", "Form 6251", "irs_form"),
        ("f8606", "Form 8606", "irs_form"),
        ("f8863", "Form 8863", "irs_form"),
        ("f8889", "Form 8889", "irs_form"),
        ("f8959", "Form 8959", "irs_form"),
        ("f8960", "Form 8960", "irs_form"),
        ("f8995", "Form 8995", "irs_form"),
        ("f8995a", "Form 8995-A", "irs_form"),
        ("f5329", "Form 5329", "irs_form"),
        ("f5695", "Form 5695", "irs_form"),
        ("f1116", "Form 1116", "irs_form"),
        ("i1040gi", "Instructions for Form 1040 and Schedules 1-3", "irs_instructions"),
        ("i1040sca", "Instructions for Schedule A", "irs_instructions"),
        ("i1040sc", "Instructions for Schedule C", "irs_instructions"),
        ("i1040sd", "Instructions for Schedule D", "irs_instructions"),
        ("i1040se", "Instructions for Schedule E (Form 1040)", "irs_instructions"),
        ("i1040sse", "Instructions for Schedule SE", "irs_instructions"),
        ("i1040s8", "Instructions for Schedule 8812 (Form 1040)", "irs_instructions"),
        ("i8949", "Instructions for Form 8949", "irs_instructions"),
        ("i4562", "Instructions for Form 4562", "irs_instructions"),
        ("i4797", "Instructions for Form 4797", "irs_instructions"),
        ("i6251", "Instructions for Form 6251", "irs_instructions"),
        ("i8606", "Instructions for Form 8606", "irs_instructions"),
        ("i8863", "Instructions for Form 8863", "irs_instructions"),
        ("i8889", "Instructions for Form 8889", "irs_instructions"),
        ("i8959", "Instructions for Form 8959", "irs_instructions"),
        ("i8960", "Instructions for Form 8960", "irs_instructions"),
        ("i8995", "Instructions for Form 8995", "irs_instructions"),
        ("i8995a", "Instructions for Form 8995-A", "irs_instructions"),
        ("i5329", "Instructions for Form 5329", "irs_instructions"),
        ("i5695", "Instructions for Form 5695", "irs_instructions"),
        ("i1116", "Instructions for Form 1116", "irs_instructions"),
    ]
    return [
        SourceDescriptor(
            slug=slug,
            title=title,
            source_class=source_class,
            media_type="application/pdf",
            url=build_irs_prior_pdf_url(slug, tax_year),
        )
        for slug, title, source_class in entries
    ]


def _sha256_bytes(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()


def _http_fetch(url: str) -> bytes:
    with urlopen(url) as response:
        return response.read()


class TaxYearCorpus:
    def __init__(self, cache_root: Path | None = None) -> None:
        self.cache_root = cache_root or default_cache_root()

    def paths_for_year(self, tax_year: int) -> TaxYearPaths:
        year_dir = self.cache_root / "tax-years" / str(tax_year)
        return TaxYearPaths(
            year_dir=year_dir,
            irs_dir=year_dir / "irs",
            manifest_path=year_dir / "manifest.json",
        )

    def download_catalog(
        self,
        tax_year: int,
        catalog: list[SourceDescriptor],
        *,
        fetcher: Callable[[str], bytes] = _http_fetch,
    ) -> dict:
        paths = self.paths_for_year(tax_year)
        paths.irs_dir.mkdir(parents=True, exist_ok=True)

        fetched_at = datetime.now(timezone.utc).isoformat()
        sources: list[dict] = []
        for descriptor in catalog:
            payload = fetcher(descriptor.url)
            destination = paths.irs_dir / f"{descriptor.slug}.pdf"
            destination.write_bytes(payload)
            sources.append(
                {
                    "slug": descriptor.slug,
                    "title": descriptor.title,
                    "sourceClass": descriptor.source_class,
                    "mediaType": descriptor.media_type,
                    "url": descriptor.url,
                    "localPath": str(destination),
                    "sha256": _sha256_bytes(payload),
                    "fetchedAt": fetched_at,
                    "authorityRank": int(authority_rank_for(descriptor.source_class)),
                }
            )

        manifest = {
            "taxYear": tax_year,
            "fetchedAt": fetched_at,
            "cacheRoot": str(self.cache_root),
            "sourceCount": len(sources),
            "sources": sources,
            "indexes": self.index_manifest(sources),
            "primaryLawHooks": [
                {
                    "sourceClass": "internal_revenue_code",
                    "authorityRank": int(AuthorityRank.INTERNAL_REVENUE_CODE),
                },
                {
                    "sourceClass": "treasury_regulation",
                    "authorityRank": int(AuthorityRank.TREASURY_REGULATION),
                },
            ],
        }
        paths.manifest_path.write_text(json.dumps(manifest, indent=2))
        return manifest

    @staticmethod
    def index_manifest(sources: list[dict]) -> dict[str, dict[str, list[str]]]:
        by_class: dict[str, list[str]] = {}
        by_slug: dict[str, list[str]] = {}
        for source in sources:
            by_class.setdefault(source["sourceClass"], []).append(source["slug"])
            by_slug.setdefault(source["slug"], []).append(source["localPath"])
        return {"bySourceClass": by_class, "bySlug": by_slug}
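`build_primary_law_authorities` keys off two regexes to turn citations in a free-text question into primary-law hooks. A standalone sketch of the IRC pattern (the regex is copied from the function; the question text is made up for illustration):

```python
import re

# Same pattern build_primary_law_authorities uses to spot IRC section citations.
IRC_PATTERN = r"(?:section|sec\.)\s+(\d+[a-z0-9-]*)"

question = "Does section 121 exclude the gain, and how does sec. 1031 interact?"
sections = re.findall(IRC_PATTERN, question.lower())
print(sections)  # → ['121', '1031']
```

Each captured section number becomes an `irc-<section>` authority entry pointing at uscode.house.gov, ranked `INTERNAL_REVENUE_CODE`.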
101 skills/us-cpa/src/us_cpa/tax_years.py Normal file
@@ -0,0 +1,101 @@
from __future__ import annotations

from typing import Any


TAX_YEAR_DATA: dict[int, dict[str, Any]] = {
    2024: {
        "standardDeduction": {
            "single": 14600.0,
            "married_filing_jointly": 29200.0,
            "head_of_household": 21900.0,
        },
        "ordinaryIncomeBrackets": {
            "single": [
                (11600.0, 0.10),
                (47150.0, 0.12),
                (100525.0, 0.22),
                (191950.0, 0.24),
                (243725.0, 0.32),
                (609350.0, 0.35),
                (float("inf"), 0.37),
            ],
            "married_filing_jointly": [
                (23200.0, 0.10),
                (94300.0, 0.12),
                (201050.0, 0.22),
                (383900.0, 0.24),
                (487450.0, 0.32),
                (731200.0, 0.35),
                (float("inf"), 0.37),
            ],
            "head_of_household": [
                (16550.0, 0.10),
                (63100.0, 0.12),
                (100500.0, 0.22),
                (191950.0, 0.24),
                (243700.0, 0.32),
                (609350.0, 0.35),
                (float("inf"), 0.37),
            ],
        },
        "sourceCitations": {
            "standardDeduction": "IRS Rev. Proc. 2023-34, section 3.01; 2024 Form 1040 instructions.",
            "ordinaryIncomeBrackets": "IRS Rev. Proc. 2023-34, section 3.01; 2024 Form 1040 instructions.",
        },
    },
    2025: {
        "standardDeduction": {
            "single": 15750.0,
            "married_filing_jointly": 31500.0,
            "head_of_household": 23625.0,
        },
        "ordinaryIncomeBrackets": {
            "single": [
                (11925.0, 0.10),
                (48475.0, 0.12),
                (103350.0, 0.22),
                (197300.0, 0.24),
                (250525.0, 0.32),
                (626350.0, 0.35),
                (float("inf"), 0.37),
            ],
            "married_filing_jointly": [
                (23850.0, 0.10),
                (96950.0, 0.12),
                (206700.0, 0.22),
                (394600.0, 0.24),
                (501050.0, 0.32),
                (751600.0, 0.35),
                (float("inf"), 0.37),
            ],
            "head_of_household": [
                (17000.0, 0.10),
                (64850.0, 0.12),
                (103350.0, 0.22),
                (197300.0, 0.24),
                (250500.0, 0.32),
                (626350.0, 0.35),
                (float("inf"), 0.37),
            ],
        },
        "sourceCitations": {
            "standardDeduction": "IRS Rev. Proc. 2024-40, section 3.01; 2025 Form 1040 instructions.",
            "ordinaryIncomeBrackets": "IRS Rev. Proc. 2024-40, section 3.01; 2025 Form 1040 instructions.",
        },
    },
}


def supported_tax_years() -> list[int]:
    return sorted(TAX_YEAR_DATA)


def tax_year_rules(tax_year: int) -> dict[str, Any]:
    try:
        return TAX_YEAR_DATA[tax_year]
    except KeyError as exc:
        years = ", ".join(str(year) for year in supported_tax_years())
        raise ValueError(
            f"Unsupported tax year {tax_year}. Supported tax years: {years}."
        ) from exc
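`tax_on_ordinary_income` itself is not part of this diff; the bracket tables above suggest a standard marginal-rate walk over `(upper_bound, rate)` tuples. A hypothetical sketch of that walk, not the skill's actual implementation, using the 2025 single brackets copied from `TAX_YEAR_DATA`:

```python
# 2025 single brackets, as (upper_bound, marginal_rate) tuples.
BRACKETS_2025_SINGLE = [
    (11925.0, 0.10),
    (48475.0, 0.12),
    (103350.0, 0.22),
    (197300.0, 0.24),
    (250525.0, 0.32),
    (626350.0, 0.35),
    (float("inf"), 0.37),
]


def bracket_tax(taxable_income: float, brackets) -> float:
    """Hypothetical marginal-rate walk: tax each slice at its bracket rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return round(tax, 2)


# 10% on the first 11,925 plus 12% on the remainder of 36,000.
print(bracket_tax(36000.0, BRACKETS_2025_SINGLE))  # → 4081.5
```

The 36,000 input matches the normalized-return fixture for 2025 single filers (51,750 AGI minus the 15,750 standard deduction).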
1 skills/us-cpa/tests/fixtures/documents/.gitkeep vendored Normal file
@@ -0,0 +1 @@

3 skills/us-cpa/tests/fixtures/documents/interest-1099.txt vendored Normal file
@@ -0,0 +1,3 @@
Form 1099-INT
Recipient: Jane Doe
Box 1 Interest Income 1750

4 skills/us-cpa/tests/fixtures/documents/simple-w2.txt vendored Normal file
@@ -0,0 +1,4 @@
Form W-2 Wage and Tax Statement
Employee: Jane Doe
Box 1 Wages, tips, other compensation 50000
Box 2 Federal income tax withheld 6000

1 skills/us-cpa/tests/fixtures/facts/.gitkeep vendored Normal file
@@ -0,0 +1 @@

6 skills/us-cpa/tests/fixtures/facts/overlay-case-2025.json vendored Normal file
@@ -0,0 +1,6 @@
{
  "taxpayer.fullName": "Olivia Overlay",
  "filingStatus": "single",
  "wages": 42000,
  "federalWithholding": 5000
}

8 skills/us-cpa/tests/fixtures/facts/review-mismatch-2025.json vendored Normal file
@@ -0,0 +1,8 @@
{
  "taxpayer.fullName": "Jane Doe",
  "filingStatus": "single",
  "wages": 50000,
  "taxableInterest": 100,
  "federalWithholding": 6000,
  "expectedIssue": "agi_mismatch"
}

6 skills/us-cpa/tests/fixtures/facts/schedule-c-2025.json vendored Normal file
@@ -0,0 +1,6 @@
{
  "taxpayer.fullName": "Jamie Owner",
  "filingStatus": "single",
  "businessIncome": 12000,
  "federalWithholding": 0
}

7 skills/us-cpa/tests/fixtures/facts/simple-w2-interest-2025.json vendored Normal file
@@ -0,0 +1,7 @@
{
  "taxpayer.fullName": "Jane Doe",
  "filingStatus": "single",
  "wages": 50000,
  "taxableInterest": 100,
  "federalWithholding": 6000
}

1 skills/us-cpa/tests/fixtures/irs/.gitkeep vendored Normal file
@@ -0,0 +1 @@

1 skills/us-cpa/tests/fixtures/returns/.gitkeep vendored Normal file
@@ -0,0 +1 @@

16 skills/us-cpa/tests/fixtures/returns/simple-w2-interest-2025-normalized.json vendored Normal file
@@ -0,0 +1,16 @@
{
  "taxYear": 2025,
  "filingStatus": "single",
  "requiredForms": ["f1040", "f1040sb"],
  "income": {
    "wages": 50000.0,
    "taxableInterest": 1750.0,
    "businessIncome": 0.0,
    "capitalGainLoss": 0.0,
    "rentalIncome": 0.0
  },
  "totals": {
    "adjustedGrossIncome": 51750.0,
    "taxableIncome": 36000.0
  }
}
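The W-2 and 1099-INT fixtures above carry labelled box amounts that the intake tests expect to come back as machine-usable facts (50000 wages, 6000 withholding, 1750 interest). A hypothetical regex sketch of that kind of box extraction; the skill's real extractor is not part of this diff, and `box_amount` is an illustrative name only:

```python
import re

w2_text = (
    "Form W-2 Wage and Tax Statement\n"
    "Employee: Jane Doe\n"
    "Box 1 Wages, tips, other compensation 50000\n"
    "Box 2 Federal income tax withheld 6000\n"
)


def box_amount(text: str, box: int) -> float:
    # Illustrative pattern: trailing number on the matching "Box N" line.
    match = re.search(rf"Box {box} .*? (\d+(?:\.\d+)?)\s*$", text, flags=re.MULTILINE)
    return float(match.group(1)) if match else 0.0


print(box_amount(w2_text, 1))  # → 50000.0
print(box_amount(w2_text, 2))  # → 6000.0
```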
113 skills/us-cpa/tests/test_cases.py Normal file
@@ -0,0 +1,113 @@
from __future__ import annotations

import json
import tempfile
import unittest
from pathlib import Path

from us_cpa.cases import CaseConflictError, CaseManager


class CaseManagerTests(unittest.TestCase):
    def test_create_case_builds_expected_layout(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            case_dir = Path(temp_dir) / "2025-jane-doe"
            manager = CaseManager(case_dir)

            manifest = manager.create_case(case_label="Jane Doe", tax_year=2025)

            self.assertEqual(manifest["caseLabel"], "Jane Doe")
            self.assertEqual(manifest["taxYear"], 2025)
            for name in (
                "input",
                "extracted",
                "return",
                "output",
                "reports",
                "issues",
                "sources",
            ):
                self.assertTrue((case_dir / name).is_dir())
            self.assertTrue((case_dir / "case-manifest.json").exists())

    def test_intake_registers_documents_and_user_facts(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            root = Path(temp_dir)
            case_dir = root / "2025-jane-doe"
            document = root / "w2.txt"
            document.write_text("sample w2")
            manager = CaseManager(case_dir)
            manager.create_case(case_label="Jane Doe", tax_year=2025)

            result = manager.intake(
                tax_year=2025,
                user_facts={"filingStatus": "single", "taxpayer.ssnLast4": "1234"},
                document_paths=[document],
            )

            self.assertEqual(result["status"], "accepted")
            self.assertEqual(len(result["registeredDocuments"]), 1)
            self.assertTrue((case_dir / "input" / "w2.txt").exists())
            facts = json.loads((case_dir / "extracted" / "facts.json").read_text())
            self.assertEqual(facts["facts"]["filingStatus"]["value"], "single")

    def test_intake_extracts_machine_usable_facts_from_text_documents(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            root = Path(temp_dir)
            case_dir = root / "2025-jane-doe"
            w2 = root / "w2.txt"
            w2.write_text(
                "Form W-2 Wage and Tax Statement\n"
                "Employee: Jane Doe\n"
                "Box 1 Wages, tips, other compensation 50000\n"
                "Box 2 Federal income tax withheld 6000\n"
            )
            interest = root / "1099-int.txt"
            interest.write_text(
                "Form 1099-INT\n"
                "Recipient: Jane Doe\n"
                "Box 1 Interest Income 1750\n"
            )
            manager = CaseManager(case_dir)
            manager.create_case(case_label="Jane Doe", tax_year=2025)

            result = manager.intake(
                tax_year=2025,
                user_facts={"filingStatus": "single"},
                document_paths=[w2, interest],
            )

            self.assertEqual(result["status"], "accepted")
            facts = json.loads((case_dir / "extracted" / "facts.json").read_text())
            self.assertEqual(facts["facts"]["wages"]["value"], 50000.0)
            self.assertEqual(facts["facts"]["federalWithholding"]["value"], 6000.0)
            self.assertEqual(facts["facts"]["taxableInterest"]["value"], 1750.0)
            self.assertEqual(facts["facts"]["wages"]["sources"][0]["sourceType"], "document_extract")

    def test_conflicting_facts_raise_structured_issue(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            case_dir = Path(temp_dir) / "2025-jane-doe"
            manager = CaseManager(case_dir)
            manager.create_case(case_label="Jane Doe", tax_year=2025)
            manager.intake(
                tax_year=2025,
                user_facts={"filingStatus": "single"},
                document_paths=[],
            )

            with self.assertRaises(CaseConflictError) as context:
                manager.intake(
                    tax_year=2025,
                    user_facts={"filingStatus": "married_filing_jointly"},
                    document_paths=[],
                )

            issue = context.exception.issue
            self.assertEqual(issue["status"], "needs_resolution")
            self.assertEqual(issue["issueType"], "fact_conflict")
            self.assertEqual(issue["field"], "filingStatus")
            self.assertTrue((case_dir / "issues" / "open-issues.json").exists())


if __name__ == "__main__":
    unittest.main()
392 skills/us-cpa/tests/test_cli.py Normal file
@@ -0,0 +1,392 @@
from __future__ import annotations

import json
import os
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path


SKILL_DIR = Path(__file__).resolve().parents[1]
SRC_DIR = SKILL_DIR / "src"


def _pyproject_text() -> str:
    return (SKILL_DIR / "pyproject.toml").read_text()


class UsCpaCliSmokeTests(unittest.TestCase):
    def test_skill_scaffold_files_exist(self) -> None:
        self.assertTrue((SKILL_DIR / "SKILL.md").exists())
        self.assertTrue((SKILL_DIR / "pyproject.toml").exists())
        self.assertTrue((SKILL_DIR / "README.md").exists())
        self.assertTrue((SKILL_DIR / "scripts" / "us-cpa").exists())
        self.assertTrue(
            (SKILL_DIR.parent.parent / "docs" / "us-cpa.md").exists()
        )

    def test_pyproject_declares_runtime_and_dev_dependencies(self) -> None:
        pyproject = _pyproject_text()
        self.assertIn('"pypdf>=', pyproject)
        self.assertIn('"reportlab>=', pyproject)
        self.assertIn("[project.optional-dependencies]", pyproject)
        self.assertIn('"pytest>=', pyproject)

    def test_readme_documents_install_and_script_usage(self) -> None:
        readme = (SKILL_DIR / "README.md").read_text()
        self.assertIn("pip install -e .[dev]", readme)
        self.assertIn("scripts/us-cpa", readme)
        self.assertIn("python -m unittest", readme)

    def test_docs_explain_openclaw_installation_flow(self) -> None:
        readme = (SKILL_DIR / "README.md").read_text()
        operator_doc = (SKILL_DIR.parent.parent / "docs" / "us-cpa.md").read_text()
        skill_doc = (SKILL_DIR / "SKILL.md").read_text()

        self.assertIn("OpenClaw installation", readme)
        self.assertIn("~/.openclaw/workspace/skills/us-cpa", readme)
        self.assertIn(".venv/bin/python", readme)
        self.assertNotIn("/Users/stefano/", readme)
        self.assertIn("OpenClaw installation", operator_doc)
        self.assertIn("rsync -a --delete", operator_doc)
        self.assertIn("~/", operator_doc)
        self.assertNotIn("/Users/stefano/", operator_doc)
        self.assertIn("~/.openclaw/workspace/skills/us-cpa/scripts/us-cpa", skill_doc)

    def test_wrapper_prefers_local_virtualenv_python(self) -> None:
        wrapper = (SKILL_DIR / "scripts" / "us-cpa").read_text()
        self.assertIn('.venv/bin/python', wrapper)
        self.assertIn('PYTHON_BIN', wrapper)

    def test_fixture_directories_exist(self) -> None:
        fixtures_dir = SKILL_DIR / "tests" / "fixtures"
        for name in ("irs", "facts", "documents", "returns"):
            self.assertTrue((fixtures_dir / name).exists())

    def run_cli(self, *args: str) -> subprocess.CompletedProcess[str]:
        env = os.environ.copy()
        env["PYTHONPATH"] = str(SRC_DIR)
        return subprocess.run(
            [sys.executable, "-m", "us_cpa.cli", *args],
            text=True,
            capture_output=True,
            env=env,
        )

    def test_help_lists_all_commands(self) -> None:
        result = self.run_cli("--help")

        self.assertEqual(result.returncode, 0, result.stderr)
        for command in (
            "question",
            "prepare",
            "review",
            "fetch-year",
            "extract-docs",
            "render-forms",
            "export-efile-ready",
        ):
            self.assertIn(command, result.stdout)

    def test_question_command_emits_json_by_default(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            env = os.environ.copy()
            env["PYTHONPATH"] = str(SRC_DIR)
            env["US_CPA_CACHE_DIR"] = temp_dir
|
||||
subprocess.run(
|
||||
[sys.executable, "-m", "us_cpa.cli", "fetch-year", "--tax-year", "2025"],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
check=True,
|
||||
)
|
||||
result = subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"us_cpa.cli",
|
||||
"question",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--question",
|
||||
"What is the standard deduction?",
|
||||
],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
)
|
||||
|
||||
self.assertEqual(result.returncode, 0, result.stderr)
|
||||
payload = json.loads(result.stdout)
|
||||
self.assertEqual(payload["command"], "question")
|
||||
self.assertEqual(payload["format"], "json")
|
||||
self.assertEqual(payload["question"], "What is the standard deduction?")
|
||||
self.assertEqual(payload["status"], "answered")
|
||||
self.assertIn("analysis", payload)
|
||||
|
||||
def test_prepare_requires_case_dir(self) -> None:
|
||||
result = self.run_cli("prepare", "--tax-year", "2025")
|
||||
|
||||
self.assertNotEqual(result.returncode, 0)
|
||||
self.assertIn("case directory", result.stderr.lower())
|
||||
|
||||
def test_extract_docs_can_create_case_and_register_facts(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
case_dir = Path(temp_dir) / "2025-jane-doe"
|
||||
facts_path = Path(temp_dir) / "facts.json"
|
||||
facts_path.write_text(json.dumps({"filingStatus": "single"}))
|
||||
|
||||
result = self.run_cli(
|
||||
"extract-docs",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
"--create-case",
|
||||
"--case-label",
|
||||
"Jane Doe",
|
||||
"--facts-json",
|
||||
str(facts_path),
|
||||
)
|
||||
|
||||
self.assertEqual(result.returncode, 0, result.stderr)
|
||||
payload = json.loads(result.stdout)
|
||||
self.assertEqual(payload["status"], "accepted")
|
||||
self.assertEqual(payload["factCount"], 1)
|
||||
self.assertTrue((case_dir / "case-manifest.json").exists())
|
||||
|
||||
def test_extract_docs_stops_on_conflicts(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
case_dir = Path(temp_dir) / "2025-jane-doe"
|
||||
first_facts = Path(temp_dir) / "facts-1.json"
|
||||
second_facts = Path(temp_dir) / "facts-2.json"
|
||||
first_facts.write_text(json.dumps({"filingStatus": "single"}))
|
||||
second_facts.write_text(json.dumps({"filingStatus": "married_filing_jointly"}))
|
||||
|
||||
first = self.run_cli(
|
||||
"extract-docs",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
"--create-case",
|
||||
"--case-label",
|
||||
"Jane Doe",
|
||||
"--facts-json",
|
||||
str(first_facts),
|
||||
)
|
||||
self.assertEqual(first.returncode, 0, first.stderr)
|
||||
|
||||
second = self.run_cli(
|
||||
"extract-docs",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
"--facts-json",
|
||||
str(second_facts),
|
||||
)
|
||||
self.assertNotEqual(second.returncode, 0)
|
||||
payload = json.loads(second.stdout)
|
||||
self.assertEqual(payload["status"], "needs_resolution")
|
||||
self.assertEqual(payload["issueType"], "fact_conflict")
|
||||
|
||||
def test_question_markdown_memo_mode_renders_tax_memo(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
env = os.environ.copy()
|
||||
env["PYTHONPATH"] = str(SRC_DIR)
|
||||
env["US_CPA_CACHE_DIR"] = temp_dir
|
||||
subprocess.run(
|
||||
[sys.executable, "-m", "us_cpa.cli", "fetch-year", "--tax-year", "2025"],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
check=True,
|
||||
)
|
||||
result = subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"us_cpa.cli",
|
||||
"question",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--format",
|
||||
"markdown",
|
||||
"--style",
|
||||
"memo",
|
||||
"--question",
|
||||
"What is the standard deduction?",
|
||||
],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
)
|
||||
|
||||
self.assertEqual(result.returncode, 0, result.stderr)
|
||||
self.assertIn("# Tax Memo", result.stdout)
|
||||
self.assertIn("## Conclusion", result.stdout)
|
||||
|
||||
def test_prepare_command_generates_return_package(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
env = os.environ.copy()
|
||||
env["PYTHONPATH"] = str(SRC_DIR)
|
||||
env["US_CPA_CACHE_DIR"] = str(Path(temp_dir) / "cache")
|
||||
subprocess.run(
|
||||
[sys.executable, "-m", "us_cpa.cli", "fetch-year", "--tax-year", "2025"],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
check=True,
|
||||
)
|
||||
|
||||
case_dir = Path(temp_dir) / "2025-jane-doe"
|
||||
facts_path = Path(temp_dir) / "facts.json"
|
||||
facts_path.write_text(
|
||||
json.dumps(
|
||||
{
|
||||
"taxpayer.fullName": "Jane Doe",
|
||||
"filingStatus": "single",
|
||||
"wages": 50000,
|
||||
"taxableInterest": 100,
|
||||
"federalWithholding": 6000,
|
||||
}
|
||||
)
|
||||
)
|
||||
subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"us_cpa.cli",
|
||||
"extract-docs",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
"--create-case",
|
||||
"--case-label",
|
||||
"Jane Doe",
|
||||
"--facts-json",
|
||||
str(facts_path),
|
||||
],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
check=True,
|
||||
)
|
||||
|
||||
result = subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"us_cpa.cli",
|
||||
"prepare",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
)
|
||||
|
||||
self.assertEqual(result.returncode, 0, result.stderr)
|
||||
payload = json.loads(result.stdout)
|
||||
self.assertEqual(payload["status"], "prepared")
|
||||
self.assertEqual(payload["summary"]["requiredForms"], ["f1040"])
|
||||
self.assertTrue((case_dir / "output" / "artifacts.json").exists())
|
||||
|
||||
def test_review_command_returns_findings(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
env = os.environ.copy()
|
||||
env["PYTHONPATH"] = str(SRC_DIR)
|
||||
env["US_CPA_CACHE_DIR"] = str(Path(temp_dir) / "cache")
|
||||
subprocess.run(
|
||||
[sys.executable, "-m", "us_cpa.cli", "fetch-year", "--tax-year", "2025"],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
check=True,
|
||||
)
|
||||
case_dir = Path(temp_dir) / "2025-jane-doe"
|
||||
facts_path = Path(temp_dir) / "facts.json"
|
||||
facts_path.write_text(
|
||||
json.dumps(
|
||||
{
|
||||
"taxpayer.fullName": "Jane Doe",
|
||||
"filingStatus": "single",
|
||||
"wages": 50000,
|
||||
"taxableInterest": 100,
|
||||
"federalWithholding": 6000,
|
||||
}
|
||||
)
|
||||
)
|
||||
subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"us_cpa.cli",
|
||||
"extract-docs",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
"--create-case",
|
||||
"--case-label",
|
||||
"Jane Doe",
|
||||
"--facts-json",
|
||||
str(facts_path),
|
||||
],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
check=True,
|
||||
)
|
||||
subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"us_cpa.cli",
|
||||
"prepare",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
check=True,
|
||||
)
|
||||
normalized_path = case_dir / "return" / "normalized-return.json"
|
||||
normalized = json.loads(normalized_path.read_text())
|
||||
normalized["totals"]["adjustedGrossIncome"] = 99999.0
|
||||
normalized_path.write_text(json.dumps(normalized, indent=2))
|
||||
|
||||
result = subprocess.run(
|
||||
[
|
||||
sys.executable,
|
||||
"-m",
|
||||
"us_cpa.cli",
|
||||
"review",
|
||||
"--tax-year",
|
||||
"2025",
|
||||
"--case-dir",
|
||||
str(case_dir),
|
||||
],
|
||||
text=True,
|
||||
capture_output=True,
|
||||
env=env,
|
||||
)
|
||||
|
||||
self.assertEqual(result.returncode, 0, result.stderr)
|
||||
payload = json.loads(result.stdout)
|
||||
self.assertEqual(payload["status"], "reviewed")
|
||||
self.assertEqual(payload["findingCount"], 2)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
unittest.main()
|
||||
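The conflict behavior exercised by `test_extract_docs_stops_on_conflicts` boils down to a merge check at intake time. The helper and payload shape below are an illustrative sketch inferred from what the test asserts, not the skill's actual implementation:

```python
# Illustrative sketch (assumed behavior): registering a second, different
# value for an already-known fact must halt with a conflict payload.
def register_facts(existing: dict, incoming: dict) -> dict:
    """Merge incoming facts, stopping on any conflicting value."""
    conflicts = {
        key: (existing[key], value)
        for key, value in incoming.items()
        if key in existing and existing[key] != value
    }
    if conflicts:
        return {
            "status": "needs_resolution",
            "issueType": "fact_conflict",
            "conflicts": conflicts,
        }
    merged = {**existing, **incoming}
    return {"status": "accepted", "factCount": len(merged), "facts": merged}


first = register_facts({}, {"filingStatus": "single"})
second = register_facts(first["facts"], {"filingStatus": "married_filing_jointly"})
```

This mirrors the CLI's two-call sequence: the first `extract-docs` run is accepted, the second exits non-zero with `needs_resolution`/`fact_conflict`.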
skills/us-cpa/tests/test_document_extractors.py — new file, 66 lines
@@ -0,0 +1,66 @@

from __future__ import annotations

import tempfile
import unittest
from pathlib import Path

from us_cpa.document_extractors import extract_document_facts


class DocumentExtractorTests(unittest.TestCase):
    def test_extracts_common_w2_fields(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            path = Path(temp_dir) / "w2.txt"
            path.write_text(
                "Form W-2 Wage and Tax Statement\n"
                "Employee: Jane Doe\n"
                "Box 1 Wages, tips, other compensation 50000\n"
                "Box 2 Federal income tax withheld 6000\n"
                "Box 16 State wages, tips, etc. 50000\n"
                "Box 17 State income tax 1200\n"
                "Box 3 Social security wages 50000\n"
                "Box 5 Medicare wages and tips 50000\n"
            )

            extracted = extract_document_facts(path)

            self.assertEqual(extracted["taxpayer.fullName"], "Jane Doe")
            self.assertEqual(extracted["wages"], 50000.0)
            self.assertEqual(extracted["federalWithholding"], 6000.0)
            self.assertEqual(extracted["stateWages"], 50000.0)
            self.assertEqual(extracted["stateWithholding"], 1200.0)
            self.assertEqual(extracted["socialSecurityWages"], 50000.0)
            self.assertEqual(extracted["medicareWages"], 50000.0)

    def test_extracts_common_1099_patterns(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            div_path = Path(temp_dir) / "1099-div.txt"
            div_path.write_text("Form 1099-DIV\nRecipient: Jane Doe\nBox 1a Total ordinary dividends 250\n")
            ret_path = Path(temp_dir) / "1099-r.txt"
            ret_path.write_text("Form 1099-R\nRecipient: Jane Doe\nBox 1 Gross distribution 10000\n")
            misc_path = Path(temp_dir) / "1099-misc.txt"
            misc_path.write_text("Form 1099-MISC\nRecipient: Jane Doe\nBox 3 Other income 900\n")

            self.assertEqual(extract_document_facts(div_path)["ordinaryDividends"], 250.0)
            self.assertEqual(extract_document_facts(ret_path)["retirementDistribution"], 10000.0)
            self.assertEqual(extract_document_facts(misc_path)["otherIncome"], 900.0)

    def test_extracts_prior_year_return_summary_values(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            path = Path(temp_dir) / "prior-return.txt"
            path.write_text(
                "2024 Form 1040 Summary\n"
                "Adjusted gross income 72100\n"
                "Taxable income 49800\n"
                "Refund 2100\n"
            )

            extracted = extract_document_facts(path)

            self.assertEqual(extracted["priorYear.adjustedGrossIncome"], 72100.0)
            self.assertEqual(extracted["priorYear.taxableIncome"], 49800.0)
            self.assertEqual(extracted["priorYear.refund"], 2100.0)


if __name__ == "__main__":
    unittest.main()
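A minimal regex-based reading of the W-2 text fixture above could look like the sketch below. The patterns and fact names are assumptions chosen to mirror the test expectations; the real `extract_document_facts` may use different patterns or cover more boxes:

```python
import re

# Hypothetical box-label patterns (illustrative, not the skill's own table).
W2_PATTERNS = {
    "wages": r"Box 1 Wages[^0-9]*([\d,.]+)",
    "federalWithholding": r"Box 2 Federal income tax withheld\s+([\d,.]+)",
    "stateWithholding": r"Box 17 State income tax\s+([\d,.]+)",
}


def extract_w2_facts(text: str) -> dict:
    """Pull a few W-2 amounts and the employee name out of plain text."""
    facts = {}
    name = re.search(r"Employee:\s*(.+)", text)
    if name:
        facts["taxpayer.fullName"] = name.group(1).strip()
    for key, pattern in W2_PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            facts[key] = float(match.group(1).replace(",", ""))
    return facts


sample = (
    "Form W-2 Wage and Tax Statement\n"
    "Employee: Jane Doe\n"
    "Box 1 Wages, tips, other compensation 50000\n"
    "Box 2 Federal income tax withheld 6000\n"
    "Box 17 State income tax 1200\n"
)
facts = extract_w2_facts(sample)
```

Note that `Box 1 Wages` does not accidentally match `Box 17`, because the pattern requires a space and the literal word `Wages` after `Box 1`.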
skills/us-cpa/tests/test_prepare.py — new file, 75 lines
@@ -0,0 +1,75 @@

from __future__ import annotations

import json
import tempfile
import unittest
from io import BytesIO
from pathlib import Path

from reportlab.pdfgen import canvas

from us_cpa.cases import CaseManager
from us_cpa.prepare import EfileExporter, PrepareEngine
from us_cpa.sources import TaxYearCorpus, bootstrap_irs_catalog


class PrepareEngineTests(unittest.TestCase):
    def build_case(self, temp_dir: str) -> tuple[CaseManager, TaxYearCorpus]:
        case_dir = Path(temp_dir) / "2025-jane-doe"
        manager = CaseManager(case_dir)
        manager.create_case(case_label="Jane Doe", tax_year=2025)
        manager.intake(
            tax_year=2025,
            user_facts={
                "taxpayer.fullName": "Jane Doe",
                "filingStatus": "single",
                "wages": 50000,
                "taxableInterest": 100,
                "federalWithholding": 6000,
            },
            document_paths=[],
        )

        corpus = TaxYearCorpus(cache_root=Path(temp_dir) / "cache")

        def fake_fetch(url: str) -> bytes:
            buffer = BytesIO()
            pdf = canvas.Canvas(buffer)
            pdf.drawString(72, 720, f"Template for {url}")
            pdf.save()
            return buffer.getvalue()

        corpus.download_catalog(2025, bootstrap_irs_catalog(2025), fetcher=fake_fetch)
        return manager, corpus

    def test_prepare_creates_normalized_return_and_artifacts(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            manager, corpus = self.build_case(temp_dir)
            engine = PrepareEngine(corpus=corpus)

            result = engine.prepare_case(manager.case_dir)

            self.assertEqual(result["status"], "prepared")
            self.assertEqual(result["summary"]["requiredForms"], ["f1040"])
            self.assertEqual(result["summary"]["reviewRequiredArtifacts"], ["f1040"])
            self.assertTrue((manager.case_dir / "return" / "normalized-return.json").exists())
            self.assertTrue((manager.case_dir / "output" / "artifacts.json").exists())
            normalized = json.loads((manager.case_dir / "return" / "normalized-return.json").read_text())
            self.assertEqual(normalized["totals"]["adjustedGrossIncome"], 50100.0)
            self.assertEqual(normalized["totals"]["taxableIncome"], 34350.0)

    def test_exporter_writes_efile_ready_payload(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            manager, corpus = self.build_case(temp_dir)
            engine = PrepareEngine(corpus=corpus)
            engine.prepare_case(manager.case_dir)

            export = EfileExporter().export_case(manager.case_dir)

            self.assertEqual(export["status"], "draft")
            self.assertTrue((manager.case_dir / "output" / "efile-ready.json").exists())
            self.assertEqual(export["returnSummary"]["requiredForms"], ["f1040"])


if __name__ == "__main__":
    unittest.main()
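The totals these fixtures assert follow from simple 1040 arithmetic. As a hedged reconciliation, assuming the engine applies the 2025 single-filer standard deduction ($15,750) and the ordinary-income brackets (10% up to $11,925, then 12% up to $48,475), the numbers line up as follows:

```python
# Reconciling the asserted totals: wages 50,000 + interest 100 = AGI 50,100;
# minus the 2025 single standard deduction 15,750 = taxable income 34,350.
wages, interest, withholding = 50_000.0, 100.0, 6_000.0
standard_deduction = 15_750.0  # 2025, single filer (per the test fixtures)

agi = wages + interest
taxable_income = agi - standard_deduction

# The first two 2025 single brackets are enough at this income level:
# 10% on the first 11,925, then 12% on the remainder.
tax = 0.10 * 11_925 + 0.12 * (taxable_income - 11_925)
refund = withholding - tax
```

This reproduces the values the tests check: AGI 50,100.00, taxable income 34,350.00, tax 3,883.50, refund 2,116.50.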
skills/us-cpa/tests/test_questions.py — new file, 108 lines
@@ -0,0 +1,108 @@

from __future__ import annotations

import json
import tempfile
import unittest
from pathlib import Path

from us_cpa.questions import QuestionEngine, render_analysis, render_memo
from us_cpa.sources import TaxYearCorpus, bootstrap_irs_catalog


class QuestionEngineTests(unittest.TestCase):
    def build_engine(self, temp_dir: str) -> QuestionEngine:
        corpus = TaxYearCorpus(cache_root=Path(temp_dir))

        def fake_fetch(url: str) -> bytes:
            return f"source for {url}".encode()

        corpus.download_catalog(2025, bootstrap_irs_catalog(2025), fetcher=fake_fetch)
        return QuestionEngine(corpus=corpus)

    def test_standard_deduction_question_returns_structured_analysis(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            engine = self.build_engine(temp_dir)

            analysis = engine.answer(
                question="What is the standard deduction for single filers?",
                tax_year=2025,
                case_facts={"filingStatus": "single"},
            )

            self.assertEqual(analysis["issue"], "standard_deduction")
            self.assertEqual(analysis["taxYear"], 2025)
            self.assertEqual(analysis["conclusion"]["answer"], "$15,750")
            self.assertEqual(analysis["confidence"], "high")
            self.assertEqual(analysis["riskLevel"], "low")
            self.assertTrue(analysis["authorities"])
            self.assertEqual(analysis["authorities"][0]["sourceClass"], "irs_instructions")

    def test_complex_question_flags_primary_law_escalation(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            engine = self.build_engine(temp_dir)

            analysis = engine.answer(
                question="Does section 469 passive activity loss limitation apply here?",
                tax_year=2025,
                case_facts={},
            )

            self.assertEqual(analysis["confidence"], "low")
            self.assertEqual(analysis["riskLevel"], "high")
            self.assertTrue(analysis["primaryLawRequired"])
            self.assertIn("Internal Revenue Code", analysis["missingFacts"][0])
            self.assertTrue(any(item["sourceClass"] == "internal_revenue_code" for item in analysis["authorities"]))

    def test_capital_gains_question_returns_schedule_d_guidance(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            engine = self.build_engine(temp_dir)

            analysis = engine.answer(
                question="Do I need Schedule D for capital gains?",
                tax_year=2025,
                case_facts={"capitalGainLoss": 400},
            )

            self.assertEqual(analysis["issue"], "schedule_d_required")
            self.assertEqual(analysis["confidence"], "medium")
            self.assertFalse(analysis["primaryLawRequired"])
            self.assertTrue(any(item["slug"] == "f1040sd" for item in analysis["authorities"]))

    def test_schedule_e_question_returns_rental_guidance(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            engine = self.build_engine(temp_dir)

            analysis = engine.answer(
                question="Do I need Schedule E for rental income?",
                tax_year=2025,
                case_facts={"rentalIncome": 1200},
            )

            self.assertEqual(analysis["issue"], "schedule_e_required")
            self.assertFalse(analysis["primaryLawRequired"])
            self.assertTrue(any(item["slug"] == "f1040se" for item in analysis["authorities"]))

    def test_renderers_produce_conversation_and_memo(self) -> None:
        analysis = {
            "issue": "standard_deduction",
            "taxYear": 2025,
            "factsUsed": [{"field": "filingStatus", "value": "single"}],
            "missingFacts": [],
            "authorities": [{"title": "Instructions for Form 1040 and Schedules 1-3"}],
            "conclusion": {"answer": "$15,750", "summary": "Single filers use a $15,750 standard deduction for tax year 2025."},
            "confidence": "high",
            "riskLevel": "low",
            "followUpQuestions": [],
            "primaryLawRequired": False,
        }

        conversation = render_analysis(analysis)
        memo = render_memo(analysis)

        self.assertIn("$15,750", conversation)
        self.assertIn("Issue", memo)
        self.assertIn("Authorities", memo)


if __name__ == "__main__":
    unittest.main()
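The issue routing these tests rely on can be approximated by keyword matching. The rules below are an illustrative stand-in for whatever classification `QuestionEngine.answer` actually performs; the issue slugs are the ones asserted above, while the keywords and fallback slug are assumptions:

```python
# Hypothetical issue router mirroring the slugs asserted in the tests.
ISSUE_RULES = [
    ("standard deduction", "standard_deduction"),
    ("capital gains", "schedule_d_required"),
    ("rental income", "schedule_e_required"),
    ("section 469", "passive_activity_loss"),
]


def classify_question(question: str) -> str:
    """Return the first issue slug whose keyword appears in the question."""
    text = question.lower()
    for keyword, issue in ISSUE_RULES:
        if keyword in text:
            return issue
    return "general_guidance"  # assumed fallback slug
```

First-match-wins ordering matters here: more specific phrases should come before generic ones if the rule list grows.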
skills/us-cpa/tests/test_renderers.py — new file, 100 lines
@@ -0,0 +1,100 @@

from __future__ import annotations

import json
import tempfile
import unittest
from io import BytesIO
from pathlib import Path

from reportlab.pdfgen import canvas

from us_cpa.renderers import render_case_forms
from us_cpa.sources import TaxYearCorpus


class RendererTests(unittest.TestCase):
    def test_render_case_forms_prefers_fillable_pdf_fields_when_available(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            case_dir = Path(temp_dir) / "case"
            (case_dir / "output").mkdir(parents=True)
            corpus = TaxYearCorpus(cache_root=Path(temp_dir) / "cache")
            irs_dir = corpus.paths_for_year(2025).irs_dir
            irs_dir.mkdir(parents=True, exist_ok=True)

            buffer = BytesIO()
            pdf = canvas.Canvas(buffer)
            form = pdf.acroForm
            pdf.drawString(72, 720, "Name")
            form.textfield(name="taxpayer_full_name", x=120, y=710, width=200, height=20)
            pdf.drawString(72, 680, "Wages")
            form.textfield(name="wages", x=120, y=670, width=200, height=20)
            pdf.save()
            (irs_dir / "f1040.pdf").write_bytes(buffer.getvalue())

            normalized = {
                "taxYear": 2025,
                "requiredForms": ["f1040"],
                "taxpayer": {"fullName": "Jane Doe"},
                "filingStatus": "single",
                "income": {"wages": 50000.0, "taxableInterest": 100.0, "businessIncome": 0.0, "capitalGainLoss": 0.0, "rentalIncome": 0.0},
                "deductions": {"standardDeduction": 15750.0, "deductionType": "standard", "deductionAmount": 15750.0},
                "adjustments": {"hsaContribution": 0.0},
                "credits": {"educationCredit": 0.0, "foreignTaxCredit": 0.0, "energyCredit": 0.0},
                "taxes": {"totalTax": 3883.5, "additionalMedicareTax": 0.0, "netInvestmentIncomeTax": 0.0, "alternativeMinimumTax": 0.0, "additionalTaxPenalty": 0.0},
                "payments": {"federalWithholding": 6000.0},
                "business": {"qualifiedBusinessIncome": 0.0},
                "basis": {"traditionalIraBasis": 0.0},
                "depreciation": {"depreciationExpense": 0.0},
                "assetSales": {"section1231GainLoss": 0.0},
                "totals": {"adjustedGrossIncome": 50100.0, "taxableIncome": 34350.0, "refund": 2116.5, "balanceDue": 0.0},
            }

            artifacts = render_case_forms(case_dir, corpus, normalized)

            self.assertEqual(artifacts["artifacts"][0]["renderMethod"], "field_fill")
            self.assertFalse(artifacts["artifacts"][0]["reviewRequired"])

    def test_render_case_forms_writes_overlay_artifacts_and_flags_review(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            case_dir = Path(temp_dir) / "case"
            (case_dir / "output").mkdir(parents=True)
            corpus = TaxYearCorpus(cache_root=Path(temp_dir) / "cache")
            irs_dir = corpus.paths_for_year(2025).irs_dir
            irs_dir.mkdir(parents=True, exist_ok=True)

            buffer = BytesIO()
            pdf = canvas.Canvas(buffer)
            pdf.drawString(72, 720, "Template")
            pdf.save()
            (irs_dir / "f1040.pdf").write_bytes(buffer.getvalue())

            normalized = {
                "taxYear": 2025,
                "requiredForms": ["f1040"],
                "taxpayer": {"fullName": "Jane Doe"},
                "filingStatus": "single",
                "income": {"wages": 50000.0, "taxableInterest": 100.0, "businessIncome": 0.0, "capitalGainLoss": 0.0, "rentalIncome": 0.0},
                "deductions": {"standardDeduction": 15750.0, "deductionType": "standard", "deductionAmount": 15750.0},
                "adjustments": {"hsaContribution": 0.0},
                "credits": {"educationCredit": 0.0, "foreignTaxCredit": 0.0, "energyCredit": 0.0},
                "taxes": {"totalTax": 3883.5, "additionalMedicareTax": 0.0, "netInvestmentIncomeTax": 0.0, "alternativeMinimumTax": 0.0, "additionalTaxPenalty": 0.0},
                "payments": {"federalWithholding": 6000.0},
                "business": {"qualifiedBusinessIncome": 0.0},
                "basis": {"traditionalIraBasis": 0.0},
                "depreciation": {"depreciationExpense": 0.0},
                "assetSales": {"section1231GainLoss": 0.0},
                "totals": {"adjustedGrossIncome": 50100.0, "taxableIncome": 34350.0, "refund": 2116.5, "balanceDue": 0.0},
            }

            artifacts = render_case_forms(case_dir, corpus, normalized)

            self.assertEqual(artifacts["artifactCount"], 1)
            self.assertEqual(artifacts["artifacts"][0]["renderMethod"], "overlay")
            self.assertTrue(artifacts["artifacts"][0]["reviewRequired"])
            self.assertTrue((case_dir / "output" / "forms" / "f1040.pdf").exists())
            manifest = json.loads((case_dir / "output" / "artifacts.json").read_text())
            self.assertEqual(manifest["artifacts"][0]["formCode"], "f1040")


if __name__ == "__main__":
    unittest.main()
skills/us-cpa/tests/test_returns.py — new file, 102 lines
@@ -0,0 +1,102 @@

from __future__ import annotations

import unittest

from us_cpa.returns import normalize_case_facts, resolve_required_forms, tax_on_ordinary_income


class ReturnModelTests(unittest.TestCase):
    def test_normalize_case_facts_computes_basic_1040_totals(self) -> None:
        normalized = normalize_case_facts(
            {
                "taxpayer.fullName": "Jane Doe",
                "filingStatus": "single",
                "wages": 50000,
                "taxableInterest": 100,
                "federalWithholding": 6000,
            },
            2025,
        )

        self.assertEqual(normalized["requiredForms"], ["f1040"])
        self.assertEqual(normalized["deductions"]["standardDeduction"], 15750.0)
        self.assertEqual(normalized["totals"]["adjustedGrossIncome"], 50100.0)
        self.assertEqual(normalized["totals"]["taxableIncome"], 34350.0)
        self.assertEqual(normalized["totals"]["refund"], 2116.5)

    def test_resolve_required_forms_adds_business_and_interest_forms(self) -> None:
        normalized = normalize_case_facts(
            {
                "filingStatus": "single",
                "wages": 0,
                "taxableInterest": 2000,
                "businessIncome": 12000,
            },
            2025,
        )

        self.assertEqual(
            resolve_required_forms(normalized),
            ["f1040", "f1040sb", "f1040sc", "f1040sse", "f1040s1", "f8995"],
        )

    def test_tax_bracket_calculation_uses_2025_single_rates(self) -> None:
        self.assertEqual(tax_on_ordinary_income(34350.0, "single", 2025), 3883.5)

    def test_tax_bracket_calculation_uses_selected_tax_year(self) -> None:
        self.assertEqual(tax_on_ordinary_income(33650.0, "single", 2024), 3806.0)

    def test_normalize_case_facts_rejects_unsupported_tax_year(self) -> None:
        with self.assertRaisesRegex(ValueError, "Unsupported tax year"):
            normalize_case_facts({"filingStatus": "single"}, 2023)

    def test_normalize_case_facts_preserves_provenance_and_expands_form_resolution(self) -> None:
        normalized = normalize_case_facts(
            {
                "taxpayer.fullName": "Jane Doe",
                "spouse.fullName": "John Doe",
                "dependents": [{"fullName": "Kid Doe", "ssnLast4": "4321"}],
                "filingStatus": "married_filing_jointly",
                "wages": 50000,
                "taxableInterest": 2001,
                "capitalGainLoss": 400,
                "rentalIncome": 1200,
                "itemizedDeductions": 40000,
                "hsaContribution": 1000,
                "educationCredit": 500,
                "foreignTaxCredit": 250,
                "qualifiedBusinessIncome": 12000,
                "traditionalIraBasis": 6000,
                "additionalMedicareTax": 100,
                "netInvestmentIncomeTax": 200,
                "alternativeMinimumTax": 300,
                "additionalTaxPenalty": 50,
                "energyCredit": 600,
                "_factMetadata": {
                    "wages": {"sources": [{"sourceType": "document_extract", "documentName": "w2.txt"}]},
                },
            },
            2025,
        )

        self.assertEqual(normalized["spouse"]["fullName"], "John Doe")
        self.assertEqual(normalized["dependents"][0]["fullName"], "Kid Doe")
        self.assertEqual(normalized["provenance"]["income.wages"]["sources"][0]["documentName"], "w2.txt")
        self.assertIn("f1040sa", normalized["requiredForms"])
        self.assertIn("f1040sd", normalized["requiredForms"])
        self.assertIn("f8949", normalized["requiredForms"])
        self.assertIn("f1040se", normalized["requiredForms"])
        self.assertIn("f8889", normalized["requiredForms"])
        self.assertIn("f8863", normalized["requiredForms"])
        self.assertIn("f1116", normalized["requiredForms"])
        self.assertIn("f8995", normalized["requiredForms"])
        self.assertIn("f8606", normalized["requiredForms"])
        self.assertIn("f8959", normalized["requiredForms"])
        self.assertIn("f8960", normalized["requiredForms"])
        self.assertIn("f6251", normalized["requiredForms"])
        self.assertIn("f5329", normalized["requiredForms"])
        self.assertIn("f5695", normalized["requiredForms"])


if __name__ == "__main__":
    unittest.main()
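The fan-out from facts to required forms seen in these tests can be sketched as threshold rules. The thresholds, ordering, and helper name below are assumptions chosen to reproduce the asserted list for one fixture, not the actual `resolve_required_forms` logic (the $1,500 interest threshold matches the IRS rule for when Schedule B is required):

```python
def resolve_forms(facts: dict) -> list[str]:
    """Toy form resolver: always f1040, plus schedules triggered by facts."""
    forms = ["f1040"]
    if facts.get("taxableInterest", 0) > 1_500:  # Schedule B threshold
        forms.append("f1040sb")
    if facts.get("businessIncome", 0) != 0:
        # Schedule C, self-employment tax, the Schedule 1 carrier, then QBI.
        forms += ["f1040sc", "f1040sse", "f1040s1", "f8995"]
    return forms


required = resolve_forms(
    {"filingStatus": "single", "wages": 0, "taxableInterest": 2000, "businessIncome": 12000}
)
```

The real resolver clearly covers many more triggers (itemizing, capital gains, rental income, HSA, credits, AMT, and so on, per the provenance test above); the sketch only shows the rule shape.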
128
skills/us-cpa/tests/test_review.py
Normal file
128
skills/us-cpa/tests/test_review.py
Normal file
@@ -0,0 +1,128 @@
from __future__ import annotations

import json
import tempfile
import unittest
from io import BytesIO
from pathlib import Path

from reportlab.pdfgen import canvas

from us_cpa.cases import CaseManager
from us_cpa.prepare import PrepareEngine
from us_cpa.review import ReviewEngine, render_review_memo, render_review_summary
from us_cpa.sources import TaxYearCorpus, bootstrap_irs_catalog


class ReviewEngineTests(unittest.TestCase):
    def build_prepared_case(self, temp_dir: str) -> tuple[Path, TaxYearCorpus]:
        case_dir = Path(temp_dir) / "2025-jane-doe"
        manager = CaseManager(case_dir)
        manager.create_case(case_label="Jane Doe", tax_year=2025)
        manager.intake(
            tax_year=2025,
            user_facts={
                "taxpayer.fullName": "Jane Doe",
                "filingStatus": "single",
                "wages": 50000,
                "taxableInterest": 100,
                "federalWithholding": 6000,
            },
            document_paths=[],
        )
        corpus = TaxYearCorpus(cache_root=Path(temp_dir) / "cache")

        def fake_fetch(url: str) -> bytes:
            buffer = BytesIO()
            pdf = canvas.Canvas(buffer)
            pdf.drawString(72, 720, f"Template for {url}")
            pdf.save()
            return buffer.getvalue()

        corpus.download_catalog(2025, bootstrap_irs_catalog(2025), fetcher=fake_fetch)
        PrepareEngine(corpus=corpus).prepare_case(case_dir)
        return case_dir, corpus

    def test_review_detects_mismatched_return_and_missing_artifacts(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            case_dir, corpus = self.build_prepared_case(temp_dir)
            normalized_path = case_dir / "return" / "normalized-return.json"
            normalized = json.loads(normalized_path.read_text())
            normalized["totals"]["adjustedGrossIncome"] = 99999.0
            normalized_path.write_text(json.dumps(normalized, indent=2))

            artifacts_path = case_dir / "output" / "artifacts.json"
            artifacts = json.loads(artifacts_path.read_text())
            artifacts["artifacts"] = []
            artifacts["artifactCount"] = 0
            artifacts_path.write_text(json.dumps(artifacts, indent=2))

            review = ReviewEngine(corpus=corpus).review_case(case_dir)

            self.assertEqual(review["status"], "reviewed")
            self.assertEqual(review["findings"][0]["severity"], "high")
            self.assertIn("adjusted gross income", review["findings"][0]["title"].lower())
            self.assertTrue(any("missing rendered artifact" in item["title"].lower() for item in review["findings"]))

    def test_review_detects_reporting_omissions_from_source_facts(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            case_dir, corpus = self.build_prepared_case(temp_dir)
            normalized_path = case_dir / "return" / "normalized-return.json"
            normalized = json.loads(normalized_path.read_text())
            normalized["income"]["taxableInterest"] = 0.0
            normalized["totals"]["adjustedGrossIncome"] = 50000.0
            normalized_path.write_text(json.dumps(normalized, indent=2))

            facts_path = case_dir / "extracted" / "facts.json"
            facts_payload = json.loads(facts_path.read_text())
            facts_payload["facts"]["taxableInterest"] = {
                "value": 1750.0,
                "sources": [{"sourceType": "document_extract", "sourceName": "1099-int.txt"}],
            }
            facts_path.write_text(json.dumps(facts_payload, indent=2))

            review = ReviewEngine(corpus=corpus).review_case(case_dir)

            self.assertTrue(
                any("likely omitted taxable interest" in item["title"].lower() for item in review["findings"])
            )

    def test_review_flags_high_complexity_positions_for_specialist_follow_up(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            case_dir, corpus = self.build_prepared_case(temp_dir)
            normalized_path = case_dir / "return" / "normalized-return.json"
            normalized = json.loads(normalized_path.read_text())
            normalized["requiredForms"].append("f6251")
            normalized["taxes"]["alternativeMinimumTax"] = 300.0
            normalized_path.write_text(json.dumps(normalized, indent=2))

            review = ReviewEngine(corpus=corpus).review_case(case_dir)

            self.assertTrue(
                any("high-complexity tax position" in item["title"].lower() for item in review["findings"])
            )

    def test_review_renderers_produce_summary_and_memo(self) -> None:
        review = {
            "status": "reviewed",
            "findings": [
                {
                    "severity": "high",
                    "title": "Adjusted gross income mismatch",
                    "explanation": "Stored AGI does not match recomputed AGI.",
                    "suggestedAction": "Update Form 1040 line 11.",
                    "authorities": [{"title": "Instructions for Form 1040 and Schedules 1-3"}],
                }
            ],
        }

        summary = render_review_summary(review)
        memo = render_review_memo(review)

        self.assertIn("Adjusted gross income mismatch", summary)
        self.assertIn("# Review Memo", memo)
        self.assertIn("Suggested correction", memo)


if __name__ == "__main__":
    unittest.main()
skills/us-cpa/tests/test_sources.py (new file, 109 lines)
@@ -0,0 +1,109 @@
from __future__ import annotations

import json
import tempfile
import unittest
from pathlib import Path

from us_cpa.sources import (
    AuthorityRank,
    SourceDescriptor,
    TaxYearCorpus,
    authority_rank_for,
    bootstrap_irs_catalog,
    build_irs_prior_pdf_url,
    build_primary_law_authorities,
)


class SourceCatalogTests(unittest.TestCase):
    def test_build_irs_prior_pdf_url_uses_expected_pattern(self) -> None:
        self.assertEqual(
            build_irs_prior_pdf_url("f1040", 2025),
            "https://www.irs.gov/pub/irs-prior/f1040--2025.pdf",
        )
        self.assertEqual(
            build_irs_prior_pdf_url("i1040gi", 2025),
            "https://www.irs.gov/pub/irs-prior/i1040gi--2025.pdf",
        )

    def test_authority_ranking_orders_irs_before_primary_law(self) -> None:
        self.assertEqual(authority_rank_for("irs_form"), AuthorityRank.IRS_FORM)
        self.assertEqual(
            authority_rank_for("treasury_regulation"),
            AuthorityRank.TREASURY_REGULATION,
        )
        self.assertLess(
            authority_rank_for("irs_form"), authority_rank_for("internal_revenue_code")
        )

    def test_bootstrap_catalog_builds_tax_year_specific_urls(self) -> None:
        catalog = bootstrap_irs_catalog(2025)

        self.assertGreaterEqual(len(catalog), 5)
        self.assertEqual(catalog[0].url, "https://www.irs.gov/pub/irs-prior/f1040--2025.pdf")
        self.assertTrue(any(item.slug == "i1040gi" for item in catalog))
        self.assertTrue(any(item.slug == "f1040sse" for item in catalog))

    def test_primary_law_authorities_build_official_urls(self) -> None:
        authorities = build_primary_law_authorities(
            "Does section 469 apply and what does Treas. Reg. 1.469-1 say?"
        )

        self.assertTrue(any(item["sourceClass"] == "internal_revenue_code" for item in authorities))
        self.assertTrue(any(item["sourceClass"] == "treasury_regulation" for item in authorities))
        self.assertTrue(any("uscode.house.gov" in item["url"] for item in authorities))
        self.assertTrue(any("ecfr.gov" in item["url"] for item in authorities))


class TaxYearCorpusTests(unittest.TestCase):
    def test_tax_year_layout_is_deterministic(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            corpus = TaxYearCorpus(cache_root=Path(temp_dir))
            paths = corpus.paths_for_year(2025)

            self.assertEqual(paths.year_dir, Path(temp_dir) / "tax-years" / "2025")
            self.assertEqual(paths.irs_dir, paths.year_dir / "irs")
            self.assertEqual(paths.manifest_path, paths.year_dir / "manifest.json")

    def test_download_catalog_writes_files_and_manifest(self) -> None:
        with tempfile.TemporaryDirectory() as temp_dir:
            corpus = TaxYearCorpus(cache_root=Path(temp_dir))
            catalog = [
                SourceDescriptor(
                    slug="f1040",
                    title="Form 1040",
                    source_class="irs_form",
                    media_type="application/pdf",
                    url=build_irs_prior_pdf_url("f1040", 2025),
                ),
                SourceDescriptor(
                    slug="i1040gi",
                    title="Instructions for Form 1040",
                    source_class="irs_instructions",
                    media_type="application/pdf",
                    url=build_irs_prior_pdf_url("i1040gi", 2025),
                ),
            ]

            def fake_fetch(url: str) -> bytes:
                return f"downloaded:{url}".encode()

            manifest = corpus.download_catalog(2025, catalog, fetcher=fake_fetch)

            self.assertEqual(manifest["taxYear"], 2025)
            self.assertEqual(manifest["sourceCount"], 2)
            self.assertTrue(corpus.paths_for_year(2025).manifest_path.exists())

            first = manifest["sources"][0]
            self.assertEqual(first["slug"], "f1040")
            self.assertEqual(first["authorityRank"], int(AuthorityRank.IRS_FORM))
            self.assertTrue(Path(first["localPath"]).exists())

            saved = json.loads(corpus.paths_for_year(2025).manifest_path.read_text())
            self.assertEqual(saved["sourceCount"], 2)
            self.assertEqual(saved["sources"][1]["slug"], "i1040gi")


if __name__ == "__main__":
    unittest.main()
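The URL convention these tests pin down is simple enough to state directly. As a minimal sketch (not the repository's `us_cpa.sources` implementation, just the pattern the assertions above imply):

```python
def build_irs_prior_pdf_url(slug: str, tax_year: int) -> str:
    # IRS prior-year PDFs are published under /pub/irs-prior/ with the
    # naming convention "<slug>--<year>.pdf" (note the double hyphen).
    return f"https://www.irs.gov/pub/irs-prior/{slug}--{tax_year}.pdf"


print(build_irs_prior_pdf_url("f1040", 2025))
# → https://www.irs.gov/pub/irs-prior/f1040--2025.pdf
```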
skills/us-cpa/tests/test_tax_years.py (new file, 25 lines)
@@ -0,0 +1,25 @@
from __future__ import annotations

import unittest

from us_cpa.tax_years import supported_tax_years, tax_year_rules


class TaxYearRuleTests(unittest.TestCase):
    def test_supported_years_are_listed(self) -> None:
        self.assertEqual(supported_tax_years(), [2024, 2025])

    def test_tax_year_rules_include_source_citations(self) -> None:
        rules = tax_year_rules(2025)

        self.assertIn("sourceCitations", rules)
        self.assertIn("standardDeduction", rules["sourceCitations"])
        self.assertIn("ordinaryIncomeBrackets", rules["sourceCitations"])

    def test_unsupported_tax_year_raises_clear_error(self) -> None:
        with self.assertRaisesRegex(ValueError, "Unsupported tax year 2023"):
            tax_year_rules(2023)


if __name__ == "__main__":
    unittest.main()
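The tax-year tests above pin a small, deterministic API surface. A hypothetical sketch of the module shape they imply (this is not the repository's `us_cpa.tax_years` implementation; the rule contents and citation strings are placeholders):

```python
# Hypothetical sketch of the module shape implied by the tests above.
# The citation strings are placeholders, not real rule data.
_RULES: dict[int, dict] = {
    2024: {"sourceCitations": {"standardDeduction": "placeholder citation",
                               "ordinaryIncomeBrackets": "placeholder citation"}},
    2025: {"sourceCitations": {"standardDeduction": "placeholder citation",
                               "ordinaryIncomeBrackets": "placeholder citation"}},
}


def supported_tax_years() -> list[int]:
    # Sorted so the supported window is reported deterministically.
    return sorted(_RULES)


def tax_year_rules(tax_year: int) -> dict:
    # Fail loudly for years outside the supported window, matching the
    # "Unsupported tax year 2023" assertion above.
    if tax_year not in _RULES:
        raise ValueError(f"Unsupported tax year {tax_year}")
    return _RULES[tax_year]
```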
@@ -1,14 +1,14 @@
 ---
 name: web-automation
-description: Browse and scrape web pages using Playwright with Camoufox anti-detection browser. Use when automating web workflows, extracting rendered page content, handling authenticated sessions, or scraping websites with bot protection.
+description: Browse and scrape web pages using Playwright-compatible CloakBrowser. Use when automating web workflows, extracting rendered page content, handling authenticated sessions, or scraping websites with bot protection.
 ---

-# Web Automation with Camoufox (Codex)
+# Web Automation with CloakBrowser (Codex)

-Automated web browsing and scraping using Playwright with two execution paths under one skill:
+Automated web browsing and scraping using Playwright-compatible CloakBrowser with two execution paths under one skill:

 - one-shot extraction via `extract.js`
-- broader stateful automation via Camoufox and the existing `auth.ts`, `browse.ts`, `flow.ts`, and `scrape.ts`
+- broader stateful automation via CloakBrowser and the existing `auth.ts`, `browse.ts`, `flow.ts`, and `scrape.ts`

 ## When To Use Which Command

@@ -20,32 +20,41 @@ Automated web browsing and scraping using Playwright with two execution paths un

 - Node.js 20+
 - pnpm
-- Network access to download browser binaries
+- Network access to download the CloakBrowser binary on first use or via preinstall

 ## First-Time Setup

 ```bash
 cd ~/.openclaw/workspace/skills/web-automation/scripts
 pnpm install
-npx playwright install chromium
-npx camoufox-js fetch
+npx cloakbrowser install
 pnpm approve-builds
 pnpm rebuild better-sqlite3 esbuild
 ```

+## Updating CloakBrowser
+
+```bash
+cd ~/.openclaw/workspace/skills/web-automation/scripts
+pnpm up cloakbrowser playwright-core
+npx cloakbrowser install
+pnpm approve-builds
+pnpm rebuild better-sqlite3 esbuild
+```
+
 ## Prerequisite Check (MANDATORY)

-Before running any automation, verify Playwright + Camoufox dependencies are installed and scripts are configured to use Camoufox.
+Before running any automation, verify CloakBrowser and Playwright Core dependencies are installed and scripts are configured to use CloakBrowser.

 ```bash
 cd ~/.openclaw/workspace/skills/web-automation/scripts
-node -e "require.resolve('playwright/package.json');require.resolve('playwright-core/package.json');require.resolve('camoufox-js/package.json');console.log('OK: playwright + playwright-core + camoufox-js installed')"
-node -e "const fs=require('fs');const t=fs.readFileSync('browse.ts','utf8');if(!/camoufox-js/.test(t)){throw new Error('browse.ts is not configured for Camoufox')}console.log('OK: Camoufox integration detected in browse.ts')"
+node --input-type=module -e "await import('cloakbrowser');import 'playwright-core';console.log('OK: cloakbrowser + playwright-core installed')"
+node -e "const fs=require('fs');const t=fs.readFileSync('browse.ts','utf8');if(!/import\s*\{[^}]*launchPersistentContext[^}]*\}\s*from\s*['\"]cloakbrowser['\"]/.test(t)){throw new Error('browse.ts is not configured for CloakBrowser')}console.log('OK: CloakBrowser integration detected in browse.ts')"
 ```

 If any check fails, stop and return:

-"Missing dependency/config: web-automation requires `playwright`, `playwright-core`, and `camoufox-js` with Camoufox-based scripts. Run setup in this skill, then retry."
+"Missing dependency/config: web-automation requires `cloakbrowser` and `playwright-core` with CloakBrowser-based scripts. Run setup in this skill, then retry."

 If runtime fails with missing native bindings for `better-sqlite3` or `esbuild`, run:

@@ -96,9 +105,15 @@ Example:
 npx tsx flow.ts --instruction 'go to https://search.fiorinis.com then type "pippo" then press enter then wait 2s'
 ```

+## Compatibility Aliases
+
+- `CAMOUFOX_PROFILE_PATH` still works as a legacy alias for `CLOAKBROWSER_PROFILE_PATH`
+- `CAMOUFOX_HEADLESS` still works as a legacy alias for `CLOAKBROWSER_HEADLESS`
+- `CAMOUFOX_USERNAME` and `CAMOUFOX_PASSWORD` still work as legacy aliases for `CLOAKBROWSER_USERNAME` and `CLOAKBROWSER_PASSWORD`
+
 ## Notes

-- Sessions persist in Camoufox profile storage.
+- Sessions persist in CloakBrowser profile storage.
 - Use `--wait` for dynamic pages.
 - Use `--mode selector --selector "..."` for targeted extraction.
-- `extract.js` keeps stealth and bounded anti-bot shaping while keeping the Chromium sandbox enabled.
+- `extract.js` keeps stealth and bounded anti-bot shaping while keeping the browser sandbox enabled.
@@ -41,8 +41,8 @@ function getCredentials(options?: {
   username?: string;
   password?: string;
 }): { username: string; password: string } | null {
-  const username = options?.username || process.env.CAMOUFOX_USERNAME;
-  const password = options?.password || process.env.CAMOUFOX_PASSWORD;
+  const username = options?.username || process.env.CLOAKBROWSER_USERNAME || process.env.CAMOUFOX_USERNAME;
+  const password = options?.password || process.env.CLOAKBROWSER_PASSWORD || process.env.CAMOUFOX_PASSWORD;

   if (!username || !password) {
     return null;
@@ -450,7 +450,7 @@ export async function navigateAuthenticated(
   if (!credentials) {
     throw new Error(
       'Authentication required but no credentials provided. ' +
-        'Set CAMOUFOX_USERNAME and CAMOUFOX_PASSWORD environment variables.'
+        'Set CLOAKBROWSER_USERNAME and CLOAKBROWSER_PASSWORD environment variables. Legacy aliases CAMOUFOX_USERNAME and CAMOUFOX_PASSWORD are also supported.'
     );
   }
@@ -504,8 +504,8 @@ Usage:
 Options:
   -u, --url <url>       URL to authenticate (required)
   -t, --type <type>     Auth type: auto, form, or msal (default: auto)
-  --username <user>     Username/email (or set CAMOUFOX_USERNAME env var)
-  --password <pass>     Password (or set CAMOUFOX_PASSWORD env var)
+  --username <user>     Username/email (or set CLOAKBROWSER_USERNAME env var)
+  --password <pass>     Password (or set CLOAKBROWSER_PASSWORD env var)
   --headless <bool>     Run in headless mode (default: false for auth)
   -h, --help            Show this help message

@@ -515,8 +515,12 @@ Auth Types:
   msal   Microsoft SSO (login.microsoftonline.com)

 Environment Variables:
-  CAMOUFOX_USERNAME    Default username/email for authentication
-  CAMOUFOX_PASSWORD    Default password for authentication
+  CLOAKBROWSER_USERNAME    Default username/email for authentication
+  CLOAKBROWSER_PASSWORD    Default password for authentication
+
+Compatibility Aliases:
+  CAMOUFOX_USERNAME    Legacy alias for CLOAKBROWSER_USERNAME
+  CAMOUFOX_PASSWORD    Legacy alias for CLOAKBROWSER_PASSWORD

 Examples:
   # Interactive login (no credentials, opens browser)
@@ -527,11 +531,11 @@ Examples:
     --username "user@example.com" --password "secret"

   # Microsoft SSO login
-  CAMOUFOX_USERNAME=user@company.com CAMOUFOX_PASSWORD=secret \\
+  CLOAKBROWSER_USERNAME=user@company.com CLOAKBROWSER_PASSWORD=secret \\
     npx tsx auth.ts --url "https://internal.company.com" --type msal

 Notes:
-  - Session is saved to ~/.camoufox-profile/ for persistence
+  - Session is saved to ~/.cloakbrowser-profile/ for persistence
   - After successful auth, subsequent browses will be authenticated
   - Use --headless false if you need to handle MFA manually
 `);
@@ -1,7 +1,7 @@
 #!/usr/bin/env npx tsx

 /**
- * Browser launcher using Camoufox with persistent profile
+ * Browser launcher using CloakBrowser with persistent profile
  *
  * Usage:
  *   npx tsx browse.ts --url "https://example.com"
@@ -9,14 +9,13 @@
  *   npx tsx browse.ts --url "https://example.com" --headless false --wait 5000
  */

-import { Camoufox } from 'camoufox-js';
+import { launchPersistentContext } from 'cloakbrowser';
 import { homedir } from 'os';
 import { join } from 'path';
 import { existsSync, mkdirSync } from 'fs';
 import parseArgs from 'minimist';
 import type { Page, BrowserContext } from 'playwright-core';

-// Types
 interface BrowseOptions {
   url: string;
   headless?: boolean;
@@ -33,55 +32,54 @@ interface BrowseResult {
   screenshotPath?: string;
 }

-// Get profile directory
+function sleep(ms: number): Promise<void> {
+  return new Promise((resolve) => setTimeout(resolve, ms));
+}
+
 const getProfilePath = (): string => {
-  const customPath = process.env.CAMOUFOX_PROFILE_PATH;
+  const customPath = process.env.CLOAKBROWSER_PROFILE_PATH || process.env.CAMOUFOX_PROFILE_PATH;
   if (customPath) return customPath;

-  const profileDir = join(homedir(), '.camoufox-profile');
+  const profileDir = join(homedir(), '.cloakbrowser-profile');
   if (!existsSync(profileDir)) {
     mkdirSync(profileDir, { recursive: true });
   }
   return profileDir;
 };

 // Launch browser with persistent profile
 export async function launchBrowser(options: {
   headless?: boolean;
 }): Promise<BrowserContext> {
   const profilePath = getProfilePath();
-  const headless =
-    options.headless ??
-    (process.env.CAMOUFOX_HEADLESS ? process.env.CAMOUFOX_HEADLESS === 'true' : true);
+  const envHeadless = process.env.CLOAKBROWSER_HEADLESS ?? process.env.CAMOUFOX_HEADLESS;
+  const headless = options.headless ?? (envHeadless ? envHeadless === 'true' : true);

   console.log(`Using profile: ${profilePath}`);
   console.log(`Headless mode: ${headless}`);

-  const browser = await Camoufox({
-    user_data_dir: profilePath,
+  const context = await launchPersistentContext({
+    userDataDir: profilePath,
     headless,
     humanize: true,
   });

-  return browser;
+  return context;
 }

 // Browse to URL and optionally take screenshot
 export async function browse(options: BrowseOptions): Promise<BrowseResult> {
   const browser = await launchBrowser({ headless: options.headless });
-  const page = await browser.newPage();
+  const page = browser.pages()[0] || await browser.newPage();

   try {
     // Navigate to URL
     console.log(`Navigating to: ${options.url}`);
     await page.goto(options.url, {
       timeout: options.timeout ?? 60000,
       waitUntil: 'domcontentloaded',
     });

     // Wait if specified
     if (options.wait) {
       console.log(`Waiting ${options.wait}ms...`);
-      await page.waitForTimeout(options.wait);
+      await sleep(options.wait);
     }

     const result: BrowseResult = {
@@ -92,7 +90,6 @@ export async function browse(options: BrowseOptions): Promise<BrowseResult> {
     console.log(`Page title: ${result.title}`);
     console.log(`Final URL: ${result.url}`);

-    // Take screenshot if requested
     if (options.screenshot) {
       const outputPath = options.output ?? 'screenshot.png';
       await page.screenshot({ path: outputPath, fullPage: true });
@@ -100,11 +97,10 @@ export async function browse(options: BrowseOptions): Promise<BrowseResult> {
       console.log(`Screenshot saved: ${outputPath}`);
     }

-    // If interactive mode, keep browser open
     if (options.interactive) {
       console.log('\nInteractive mode - browser will stay open.');
       console.log('Press Ctrl+C to close.');
-      await new Promise(() => {}); // Keep running
+      await new Promise(() => {});
     }

     return result;
@@ -115,16 +111,14 @@ export async function browse(options: BrowseOptions): Promise<BrowseResult> {
   }
 }

-// Export page for use in other scripts
 export async function getPage(options?: {
   headless?: boolean;
 }): Promise<{ page: Page; browser: BrowserContext }> {
   const browser = await launchBrowser({ headless: options?.headless });
-  const page = await browser.newPage();
+  const page = browser.pages()[0] || await browser.newPage();
   return { page, browser };
 }

-// CLI entry point
 async function main() {
   const args = parseArgs(process.argv.slice(2), {
     string: ['url', 'output'],
@@ -145,7 +139,7 @@ async function main() {

   if (args.help || !args.url) {
     console.log(`
-Web Browser with Camoufox
+Web Browser with CloakBrowser

 Usage:
   npx tsx browse.ts --url <url> [options]
@@ -166,8 +160,12 @@ Examples:
   npx tsx browse.ts --url "https://example.com" --headless false --interactive

 Environment Variables:
-  CAMOUFOX_PROFILE_PATH   Custom profile directory (default: ~/.camoufox-profile/)
-  CAMOUFOX_HEADLESS       Default headless mode (true/false)
+  CLOAKBROWSER_PROFILE_PATH   Custom profile directory (default: ~/.cloakbrowser-profile/)
+  CLOAKBROWSER_HEADLESS       Default headless mode (true/false)
+
+Compatibility Aliases:
+  CAMOUFOX_PROFILE_PATH   Legacy alias for CLOAKBROWSER_PROFILE_PATH
+  CAMOUFOX_HEADLESS       Legacy alias for CLOAKBROWSER_HEADLESS
 `);
   process.exit(args.help ? 0 : 1);
 }
@@ -188,7 +186,6 @@ Environment Variables:
   }
 }

-// Run if executed directly
 const isMainModule = process.argv[1]?.includes('browse.ts');
 if (isMainModule) {
   main();
@@ -9,8 +9,6 @@ const MAX_WAIT_MS = 20000;
 const NAV_TIMEOUT_MS = 30000;
 const EXTRA_CHALLENGE_WAIT_MS = 8000;
 const CONTENT_LIMIT = 12000;
-const DEFAULT_USER_AGENT =
-  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36";

 const __filename = fileURLToPath(import.meta.url);
 const __dirname = path.dirname(__filename);
@@ -52,6 +50,10 @@ function ensureParentDir(filePath) {
   fs.mkdirSync(path.dirname(filePath), { recursive: true });
 }

+function sleep(ms) {
+  return new Promise((resolve) => setTimeout(resolve, ms));
+}
+
 async function detectChallenge(page) {
   try {
     return await page.evaluate(() => {
@@ -70,73 +72,51 @@ async function detectChallenge(page) {
   }
 }

-async function loadPlaywright() {
+async function loadCloakBrowser() {
   try {
-    return await import("playwright");
+    return await import("cloakbrowser");
   } catch (error) {
     fail(
-      "Playwright is not installed for this skill. Run pnpm install and npx playwright install chromium in skills/web-automation/scripts first.",
+      "CloakBrowser is not installed for this skill. Run pnpm install in skills/web-automation/scripts first.",
       error.message
     );
   }
 }

+async function runWithStderrLogs(fn) {
+  const originalLog = console.log;
+  const originalError = console.error;
+  console.log = (...args) => process.stderr.write(`${args.join(" ")}\n`);
+  console.error = (...args) => process.stderr.write(`${args.join(" ")}\n`);
+  try {
+    return await fn();
+  } finally {
+    console.log = originalLog;
+    console.error = originalError;
+  }
+}
+
 async function main() {
   const requestedUrl = parseTarget(process.argv[2]);
   const waitTime = parseWaitTime(process.env.WAIT_TIME);
   const screenshotPath = process.env.SCREENSHOT_PATH || "";
   const saveHtml = process.env.SAVE_HTML === "true";
   const headless = process.env.HEADLESS !== "false";
-  const userAgent = process.env.USER_AGENT || DEFAULT_USER_AGENT;
+  const userAgent = process.env.USER_AGENT || undefined;
   const startedAt = Date.now();
-  const { chromium } = await loadPlaywright();
+  const { ensureBinary, launchContext } = await loadCloakBrowser();

-  let browser;
+  let context;
   try {
-    browser = await chromium.launch({
-      headless,
-      ignoreDefaultArgs: ["--enable-automation"],
-      args: [
-        "--disable-blink-features=AutomationControlled",
-        "--disable-features=IsolateOrigins,site-per-process"
-      ]
-    });
+    await runWithStderrLogs(() => ensureBinary());

-    const context = await browser.newContext({
+    context = await runWithStderrLogs(() => launchContext({
+      headless,
       userAgent,
-      locale: "en-US",
-      viewport: { width: 1440, height: 900 },
-      extraHTTPHeaders: {
-        "Accept-Language": "en-US,en;q=0.9",
-        Accept: "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
-      }
-    });
-
-    await context.addInitScript(() => {
-      Object.defineProperty(navigator, "webdriver", {
-        get: () => false
-      });
-
-      Object.defineProperty(navigator, "languages", {
-        get: () => ["en-US", "en"]
-      });
-
-      Object.defineProperty(navigator, "plugins", {
-        get: () => [1, 2, 3, 4, 5]
-      });
-
-      window.chrome = window.chrome || { runtime: {} };
-
-      const originalQuery = window.navigator.permissions?.query?.bind(window.navigator.permissions);
-      if (originalQuery) {
-        window.navigator.permissions.query = (parameters) => {
-          if (parameters?.name === "notifications") {
-            return Promise.resolve({ state: Notification.permission });
-          }
-          return originalQuery(parameters);
-        };
-      }
-    });
+      humanize: true,
+    }));

     const page = await context.newPage();
     const response = await page.goto(requestedUrl, {
@@ -144,11 +124,11 @@ async function main() {
       timeout: NAV_TIMEOUT_MS
     });

-    await page.waitForTimeout(waitTime);
+    await sleep(waitTime);

     let challengeDetected = await detectChallenge(page);
     if (challengeDetected) {
-      await page.waitForTimeout(EXTRA_CHALLENGE_WAIT_MS);
+      await sleep(EXTRA_CHALLENGE_WAIT_MS);
       challengeDetected = await detectChallenge(page);
     }

@@ -192,11 +172,11 @@ async function main() {
     }

     process.stdout.write(`${JSON.stringify(result, null, 2)}\n`);
-    await browser.close();
+    await context.close();
   } catch (error) {
-    if (browser) {
+    if (context) {
       try {
-        await browser.close();
+        await context.close();
       } catch {
         // Ignore close errors after the primary failure.
       }
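The `runWithStderrLogs` helper introduced in this hunk exists so that stdout stays reserved for the machine-readable JSON result while incidental logging still reaches the terminal. The same idea in Python, as a hedged sketch (the function and variable names are illustrative, not from the repository):

```python
import contextlib
import json
import sys


def run_with_stderr_logs(fn):
    # Reroute anything printed to stdout over to stderr for the duration
    # of the call, so stdout can stay machine-readable.
    with contextlib.redirect_stdout(sys.stderr):
        return fn()


def extract():
    print("launching browser...")  # incidental log; lands on stderr
    return {"url": "https://example.com", "title": "Example"}


result = run_with_stderr_logs(extract)
# Only the JSON payload is written to stdout.
sys.stdout.write(json.dumps(result) + "\n")
```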
@@ -254,7 +254,7 @@ async function main() {

   if (args.help || (!args.instruction && !args.steps)) {
     console.log(`
-General Web Flow Runner (Camoufox)
+General Web Flow Runner (CloakBrowser)

 Usage:
   npx tsx flow.ts --instruction "go to https://example.com then type \"hello\" then press enter"
@@ -1,22 +1,21 @@
 {
   "name": "web-automation-scripts",
   "version": "1.0.0",
-  "description": "Web browsing and scraping scripts using Camoufox",
+  "description": "Web browsing and scraping scripts using CloakBrowser",
   "type": "module",
   "scripts": {
     "extract": "node extract.js",
     "browse": "tsx browse.ts",
     "scrape": "tsx scrape.ts",
-    "fetch-browser": "npx camoufox-js fetch"
+    "fetch-browser": "npx cloakbrowser install"
   },
   "dependencies": {
     "@mozilla/readability": "^0.5.0",
     "better-sqlite3": "^12.6.2",
-    "camoufox-js": "^0.8.5",
+    "cloakbrowser": "^0.3.14",
     "jsdom": "^24.0.0",
     "minimist": "^1.2.8",
-    "playwright": "^1.58.2",
-    "playwright-core": "^1.40.0",
+    "playwright-core": "^1.58.2",
     "turndown": "^7.1.2",
     "turndown-plugin-gfm": "^1.0.2"
   },
skills/web-automation/scripts/pnpm-lock.yaml (generated, 482 lines)
@@ -14,21 +14,18 @@ importers:
       better-sqlite3:
         specifier: ^12.6.2
         version: 12.6.2
-      camoufox-js:
-        specifier: ^0.8.5
-        version: 0.8.5(playwright-core@1.57.0)
+      cloakbrowser:
+        specifier: ^0.3.14
+        version: 0.3.14(mmdb-lib@3.0.1)(playwright-core@1.58.2)
       jsdom:
         specifier: ^24.0.0
         version: 24.1.3
       minimist:
         specifier: ^1.2.8
         version: 1.2.8
-      playwright:
-        specifier: ^1.58.2
-        version: 1.58.2
-      playwright-core:
-        specifier: ^1.40.0
-        version: 1.57.0
+      playwright-core:
+        specifier: ^1.58.2
+        version: 1.58.2
       turndown:
         specifier: ^7.1.2
         version: 7.2.2
@@ -244,13 +241,9 @@ packages:
     cpu: [x64]
     os: [win32]

-  '@isaacs/balanced-match@4.0.1':
-    resolution: {integrity: sha512-yzMTt9lEb8Gv7zRioUilSglI0c0smZ9k5D65677DLWLtWJaXIS3CqcGyUFByYKlnUj6TkjLVs54fBl6+TiGQDQ==}
-    engines: {node: 20 || >=22}
-
   '@isaacs/brace-expansion@5.0.0':
     resolution: {integrity: sha512-ZT55BDLV0yv0RBm2czMiZ+SqCGO7AvmOM3G/w2xhVPH+te0aKgFjmBvGlL1dH+ql2tgGO3MVrbb3jCKyvpgnxA==}
     engines: {node: 20 || >=22}

   '@isaacs/fs-minipass@4.0.1':
     resolution: {integrity: sha512-wgm9Ehl2jpeqP3zw/7mo3kRHFp5MEDhqAdwy1fTGkHAwnkGOVsgpvQhL8B5n1qlb01jV3n/bI0ZfZp5lWA1k4w==}
     engines: {node: '>=18.0.0'}

   '@mixmark-io/domino@2.2.0':
     resolution: {integrity: sha512-Y28PR25bHXUg88kCV7nivXrP2Nj2RueZ3/l/jdx6J9f8J4nsEGcgX0Qe6lt7Pa+J79+kPiJU3LguR6O/6zrLOw==}
@@ -259,10 +252,6 @@ packages:
     resolution: {integrity: sha512-Z+CZ3QaosfFaTqvhQsIktyGrjFjSC0Fa4EMph4mqKnWhmyoGICsV/8QK+8HpXut6zV7zwfWwqDmEjtk1Qf6EgQ==}
     engines: {node: '>=14.0.0'}

-  '@sindresorhus/is@4.6.0':
-    resolution: {integrity: sha512-t09vSN3MdfsyCHoFcTRCH/iUtG7OJ0CsjzB8cjAmKc/va/kIgeDI/TxsigdncE/4be734m0cvIYwNaV4i2XqAw==}
-    engines: {node: '>=10'}
-
   '@types/jsdom@21.1.7':
     resolution: {integrity: sha512-yOriVnggzrnQ3a9OKOCxaVuSug3w3/SbOj5i7VwXWZEyUNl3bLF9V3MfxGbZKuwqJOQyRfqXyROBB1CoZLFWzA==}
@@ -278,10 +267,6 @@ packages:
   '@types/turndown@5.0.6':
     resolution: {integrity: sha512-ru00MoyeeouE5BX4gRL+6m/BsDfbRayOskWqUvh7CLGW+UXxHQItqALa38kKnOiZPqJrtzJUgAC2+F0rL1S4Pg==}

-  adm-zip@0.5.16:
-    resolution: {integrity: sha512-TGw5yVi4saajsSEgz25grObGHEUaDrniwvA2qwSC060KfqGPdglhvPMA2lPIoxs3PQIItj2iag35fONcQqgUaQ==}
-    engines: {node: '>=12.0'}
-
   agent-base@7.1.4:
     resolution: {integrity: sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==}
     engines: {node: '>= 14'}
@@ -292,10 +277,6 @@ packages:
   base64-js@1.5.1:
     resolution: {integrity: sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==}

-  baseline-browser-mapping@2.9.14:
-    resolution: {integrity: sha512-B0xUquLkiGLgHhpPBqvl7GWegWBUNuujQ6kXd/r1U38ElPT6Ok8KZ8e+FpUGEc2ZoRQUzq/aUnaKFc/svWUGSg==}
-    hasBin: true
-
   better-sqlite3@12.6.2:
     resolution: {integrity: sha512-8VYKM3MjCa9WcaSAI3hzwhmyHVlH8tiGFwf0RlTsZPWJ1I5MkzjiudCo4KC4DxOaL/53A5B1sI/IbldNFDbsKA==}
     engines: {node: 20.x || 22.x || 23.x || 24.x || 25.x}
@@ -306,11 +287,6 @@ packages:
   bl@4.1.0:
     resolution: {integrity: sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==}

-  browserslist@4.28.1:
-    resolution: {integrity: sha512-ZC5Bd0LgJXgwGqUknZY/vkUQ04r8NXnJZ3yYi4vDmSiZmC/pdSN0NbNRPxZpbtO4uAfDUAFffO8IZoM3Gj8IkA==}
-    engines: {node: ^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7}
-    hasBin: true
-
   buffer@5.7.1:
     resolution: {integrity: sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ==}
@@ -318,31 +294,33 @@ packages:
     resolution: {integrity: sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==}
     engines: {node: '>= 0.4'}

   callsites@3.1.0:
     resolution: {integrity: sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ==}
     engines: {node: '>=6'}

-  camoufox-js@0.8.5:
-    resolution: {integrity: sha512-20ihPbspAcOVSUTX9Drxxp0C116DON1n8OVA1eUDglWZiHwiHwFVFOMrIEBwAHMZpU11mIEH/kawJtstRIrDPA==}
-    engines: {node: '>= 20'}
|
||||
hasBin: true
|
||||
peerDependencies:
|
||||
playwright-core: '*'
|
||||
|
||||
caniuse-lite@1.0.30001764:
|
||||
resolution: {integrity: sha512-9JGuzl2M+vPL+pz70gtMF9sHdMFbY9FJaQBi186cHKH3pSzDvzoUJUPV6fqiKIMyXbud9ZLg4F3Yza1vJ1+93g==}
|
||||
|
||||
chownr@1.1.4:
|
||||
resolution: {integrity: sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==}
|
||||
|
||||
chownr@3.0.0:
|
||||
resolution: {integrity: sha512-+IxzY9BZOQd/XuYPRmrvEVjF/nqj5kgT4kEq7VofrDoM1MxoRjEWkrCC3EtLi59TVawxTAn+orJwFQcrqEN1+g==}
|
||||
engines: {node: '>=18'}
|
||||
|
||||
cloakbrowser@0.3.14:
|
||||
resolution: {integrity: sha512-8mcEVxfiNbAMHNa0B2IZKPtMDQ2peZlrScfQDJW+C9tjKG/P5Bg9wCweI0hnbaWR2ulG1MrxiEvTMMjz/SgmLw==}
|
||||
engines: {node: '>=18.0.0'}
|
||||
hasBin: true
|
||||
peerDependencies:
|
||||
mmdb-lib: '>=2.0.0'
|
||||
playwright-core: '>=1.40.0'
|
||||
puppeteer-core: '>=21.0.0'
|
||||
peerDependenciesMeta:
|
||||
mmdb-lib:
|
||||
optional: true
|
||||
playwright-core:
|
||||
optional: true
|
||||
puppeteer-core:
|
||||
optional: true
|
||||
|
||||
combined-stream@1.0.8:
|
||||
resolution: {integrity: sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==}
|
||||
engines: {node: '>= 0.8'}
|
||||
|
||||
commander@14.0.2:
|
||||
resolution: {integrity: sha512-TywoWNNRbhoD0BXs1P3ZEScW8W5iKrnbithIl0YH+uCmBd0QpPOA8yc82DS3BIE5Ma6FnBVUsJ7wVUDz4dvOWQ==}
|
||||
engines: {node: '>=20'}
|
||||
|
||||
cssstyle@4.6.0:
|
||||
resolution: {integrity: sha512-2z+rWdzbbSZv6/rhtvzvqeZQHrBaqgogqt85sqFNbabZOuFbCVFb8kPeEtZjiKkbrm395irpNKiYeFeLiQnFPg==}
|
||||
engines: {node: '>=18'}
|
||||
@@ -375,24 +353,14 @@ packages:
|
||||
resolution: {integrity: sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==}
|
||||
engines: {node: '>=0.4.0'}
|
||||
|
||||
detect-europe-js@0.1.2:
|
||||
resolution: {integrity: sha512-lgdERlL3u0aUdHocoouzT10d9I89VVhk0qNRmll7mXdGfJT1/wqZ2ZLA4oJAjeACPY5fT1wsbq2AT+GkuInsow==}
|
||||
|
||||
detect-libc@2.1.2:
|
||||
resolution: {integrity: sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ==}
|
||||
engines: {node: '>=8'}
|
||||
|
||||
dot-prop@6.0.1:
|
||||
resolution: {integrity: sha512-tE7ztYzXHIeyvc7N+hR3oi7FIbf/NIjVP9hmAt3yMXzrQ072/fpjGLx2GxNxGxUl5V73MEqYzioOMoVhGMJ5cA==}
|
||||
engines: {node: '>=10'}
|
||||
|
||||
dunder-proto@1.0.1:
|
||||
resolution: {integrity: sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==}
|
||||
engines: {node: '>= 0.4'}
|
||||
|
||||
electron-to-chromium@1.5.267:
|
||||
resolution: {integrity: sha512-0Drusm6MVRXSOJpGbaSVgcQsuB4hEkMpHXaVstcPmhu5LIedxs1xNK/nIxmQIU/RPC0+1/o0AVZfBTkTNJOdUw==}
|
||||
|
||||
end-of-stream@1.4.5:
|
||||
resolution: {integrity: sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==}
|
||||
|
||||
@@ -421,10 +389,6 @@ packages:
|
||||
engines: {node: '>=18'}
|
||||
hasBin: true
|
||||
|
||||
escalade@3.2.0:
|
||||
resolution: {integrity: sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==}
|
||||
engines: {node: '>=6'}
|
||||
|
||||
expand-template@2.0.3:
|
||||
resolution: {integrity: sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg==}
|
||||
engines: {node: '>=6'}
|
||||
@@ -432,10 +396,6 @@ packages:
|
||||
file-uri-to-path@1.0.0:
|
||||
resolution: {integrity: sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw==}
|
||||
|
||||
fingerprint-generator@2.1.79:
|
||||
resolution: {integrity: sha512-0dr3kTgvRYHleRPp6OBDcPb8amJmOyFr9aOuwnpN6ooWJ5XyT+/aL/SZ6CU4ZrEtzV26EyJ2Lg7PT32a0NdrRA==}
|
||||
engines: {node: '>=16.0.0'}
|
||||
|
||||
form-data@4.0.5:
|
||||
resolution: {integrity: sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w==}
|
||||
engines: {node: '>= 6'}
|
||||
@@ -443,11 +403,6 @@ packages:
|
||||
fs-constants@1.0.0:
|
||||
resolution: {integrity: sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==}
|
||||
|
||||
fsevents@2.3.2:
|
||||
resolution: {integrity: sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==}
|
||||
engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0}
|
||||
os: [darwin]
|
||||
|
||||
fsevents@2.3.3:
|
||||
resolution: {integrity: sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==}
|
||||
engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0}
|
||||
@@ -456,9 +411,6 @@ packages:
|
||||
function-bind@1.1.2:
|
||||
resolution: {integrity: sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==}
|
||||
|
||||
generative-bayesian-network@2.1.79:
|
||||
resolution: {integrity: sha512-aPH+V2wO+HE0BUX1LbsM8Ak99gmV43lgh+D7GDteM0zgnPqiAwcK9JZPxMPZa3aJUleFtFaL1lAei8g9zNrDIA==}
|
||||
|
||||
get-intrinsic@1.3.0:
|
||||
resolution: {integrity: sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==}
|
||||
engines: {node: '>= 0.4'}
|
||||
@@ -473,10 +425,6 @@ packages:
|
||||
github-from-package@0.0.0:
|
||||
resolution: {integrity: sha512-SyHy3T1v2NUXn29OsWdxmK6RwHD+vkj3v8en8AOBZ1wBQ/hCAQ5bAQTD02kW4W9tUp/3Qh6J8r9EvntiyCmOOw==}
|
||||
|
||||
glob@13.0.0:
|
||||
resolution: {integrity: sha512-tvZgpqk6fz4BaNZ66ZsRaZnbHvP/jG3uKJvAZOwEVUL4RTA5nJeeLYfyN9/VA8NX/V3IBG+hkeuGpKjvELkVhA==}
|
||||
engines: {node: 20 || >=22}
|
||||
|
||||
gopd@1.2.0:
|
||||
resolution: {integrity: sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==}
|
||||
engines: {node: '>= 0.4'}
|
||||
@@ -493,10 +441,6 @@ packages:
|
||||
resolution: {integrity: sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==}
|
||||
engines: {node: '>= 0.4'}
|
||||
|
||||
header-generator@2.1.79:
|
||||
resolution: {integrity: sha512-YvHx8teq4QmV5mz7wdPMsj9n1OZBPnZxA4QE+EOrtx7xbmGvd1gBvDNKCb5XqS4GR/TL75MU5hqMqqqANdILRg==}
|
||||
engines: {node: '>=16.0.0'}
|
||||
|
||||
html-encoding-sniffer@4.0.0:
|
||||
resolution: {integrity: sha512-Y22oTqIU4uuPgEemfz7NDJz6OeKf12Lsu+QC+s3BVpda64lTiMYCyGwg5ki4vFxkMwQdeZDl2adZoqUgdFuTgQ==}
|
||||
engines: {node: '>=18'}
|
||||
@@ -516,74 +460,15 @@ packages:
|
||||
ieee754@1.2.1:
|
||||
resolution: {integrity: sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==}
|
||||
|
||||
impit-darwin-arm64@0.7.6:
|
||||
resolution: {integrity: sha512-M7NQXkttyzqilWfzVkNCp7hApT69m0etyJkVpHze4bR5z1kJnHhdsb8BSdDv2dzvZL4u1JyqZNxq+qoMn84eUw==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [arm64]
|
||||
os: [darwin]
|
||||
|
||||
impit-darwin-x64@0.7.6:
|
||||
resolution: {integrity: sha512-kikTesWirAwJp9JPxzGLoGVc+heBlEabWS5AhTkQedACU153vmuL90OBQikVr3ul2N0LPImvnuB+51wV0zDE6g==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [x64]
|
||||
os: [darwin]
|
||||
|
||||
impit-linux-arm64-gnu@0.7.6:
|
||||
resolution: {integrity: sha512-H6GHjVr/0lG9VEJr6IHF8YLq+YkSIOF4k7Dfue2ygzUAj1+jZ5ZwnouhG/XrZHYW6EWsZmEAjjRfWE56Q0wDRQ==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [arm64]
|
||||
os: [linux]
|
||||
|
||||
impit-linux-arm64-musl@0.7.6:
|
||||
resolution: {integrity: sha512-1sCB/UBVXLZTpGJsXRdNNSvhN9xmmQcYLMWAAB4Itb7w684RHX1pLoCb6ichv7bfAf6tgaupcFIFZNBp3ghmQA==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [arm64]
|
||||
os: [linux]
|
||||
|
||||
impit-linux-x64-gnu@0.7.6:
|
||||
resolution: {integrity: sha512-yYhlRnZ4fhKt8kuGe0JK2WSHc8TkR6BEH0wn+guevmu8EOn9Xu43OuRvkeOyVAkRqvFnlZtMyySUo/GuSLz9Gw==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [x64]
|
||||
os: [linux]
|
||||
|
||||
impit-linux-x64-musl@0.7.6:
|
||||
resolution: {integrity: sha512-sdGWyu+PCLmaOXy7Mzo4WP61ZLl5qpZ1L+VeXW+Ycazgu0e7ox0NZLdiLRunIrEzD+h0S+e4CyzNwaiP3yIolg==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [x64]
|
||||
os: [linux]
|
||||
|
||||
impit-win32-arm64-msvc@0.7.6:
|
||||
resolution: {integrity: sha512-sM5deBqo0EuXg5GACBUMKEua9jIau/i34bwNlfrf/Amnw1n0GB4/RkuUh+sKiUcbNAntrRq+YhCq8qDP8IW19w==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [arm64]
|
||||
os: [win32]
|
||||
|
||||
impit-win32-x64-msvc@0.7.6:
|
||||
resolution: {integrity: sha512-ry63ADGLCB/PU/vNB1VioRt2V+klDJ34frJUXUZBEv1kA96HEAg9AxUk+604o+UHS3ttGH2rkLmrbwHOdAct5Q==}
|
||||
engines: {node: '>= 10'}
|
||||
cpu: [x64]
|
||||
os: [win32]
|
||||
|
||||
impit@0.7.6:
|
||||
resolution: {integrity: sha512-AkS6Gv63+E6GMvBrcRhMmOREKpq5oJ0J5m3xwfkHiEs97UIsbpEqFmW3sFw/sdyOTDGRF5q4EjaLxtb922Ta8g==}
|
||||
engines: {node: '>= 20'}
|
||||
|
||||
inherits@2.0.4:
|
||||
resolution: {integrity: sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==}
|
||||
|
||||
ini@1.3.8:
|
||||
resolution: {integrity: sha512-JV/yugV2uzW5iMRSiZAyDtQd+nxtUnjeLt0acNdw98kKLrvuRVyB80tsREOE7yvGVgalhZ6RNXCmEHkUKBKxew==}
|
||||
|
||||
is-obj@2.0.0:
|
||||
resolution: {integrity: sha512-drqDG3cbczxxEJRoOXcOjtdp1J/lyp1mNn0xaznRs8+muBhgQcrnbspox5X5fOw0HnMnbfDzvnEMEtqDEJEo8w==}
|
||||
engines: {node: '>=8'}
|
||||
|
||||
is-potential-custom-element-name@1.0.1:
|
||||
resolution: {integrity: sha512-bCYeRA2rVibKZd+s2625gGnGF/t7DSqDs4dP7CrLA1m7jKWz6pps0LpYLJN8Q64HtmPKJ1hrN3nzPNKFEKOUiQ==}
|
||||
|
||||
is-standalone-pwa@0.1.1:
|
||||
resolution: {integrity: sha512-9Cbovsa52vNQCjdXOzeQq5CnCbAcRk05aU62K20WO372NrTv0NxibLFCK6lQ4/iZEFdEA3p3t2VNOn8AJ53F5g==}
|
||||
|
||||
jsdom@24.1.3:
|
||||
resolution: {integrity: sha512-MyL55p3Ut3cXbeBEG7Hcv0mVM8pp8PBNWxRqchZnSfAiES1v1mRnMeFfaHWIPULpwsYfvO+ZmMZz5tGCnjzDUQ==}
|
||||
engines: {node: '>=18'}
|
||||
@@ -593,32 +478,13 @@ packages:
|
||||
canvas:
|
||||
optional: true
|
||||
|
||||
language-subtag-registry@0.3.23:
|
||||
resolution: {integrity: sha512-0K65Lea881pHotoGEa5gDlMxt3pctLi2RplBb7Ezh4rRdLEOtgi7n4EwK9lamnUCkKBqaeKRVebTq6BAxSkpXQ==}
|
||||
|
||||
language-tags@2.1.0:
|
||||
resolution: {integrity: sha512-D4CgpyCt+61f6z2jHjJS1OmZPviAWM57iJ9OKdFFWSNgS7Udj9QVWqyGs/cveVNF57XpZmhSvMdVIV5mjLA7Vg==}
|
||||
engines: {node: '>=22'}
|
||||
|
||||
lodash.isequal@4.5.0:
|
||||
resolution: {integrity: sha512-pDo3lu8Jhfjqls6GkMgpahsF9kCyayhgykjyLMNFTKWrpVdAQtYyB4muAMWozBB4ig/dtWAmsMxLEI8wuz+DYQ==}
|
||||
deprecated: This package is deprecated. Use require('node:util').isDeepStrictEqual instead.
|
||||
|
||||
lru-cache@10.4.3:
|
||||
resolution: {integrity: sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==}
|
||||
|
||||
lru-cache@11.2.4:
|
||||
resolution: {integrity: sha512-B5Y16Jr9LB9dHVkh6ZevG+vAbOsNOYCX+sXvFWFu7B3Iz5mijW3zdbMyhsh8ANd2mSWBYdJgnqi+mL7/LrOPYg==}
|
||||
engines: {node: 20 || >=22}
|
||||
|
||||
math-intrinsics@1.1.0:
|
||||
resolution: {integrity: sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==}
|
||||
engines: {node: '>= 0.4'}
|
||||
|
||||
maxmind@5.0.3:
|
||||
resolution: {integrity: sha512-oMtZwLrsp0LcZehfYKIirtwKMBycMMqMA1/Dc9/BlUqIEtXO75mIzMJ3PYCV1Ji+BpoUCk+lTzRfh9c+ptGdyQ==}
|
||||
engines: {node: '>=12', npm: '>=6'}
|
||||
|
||||
mime-db@1.52.0:
|
||||
resolution: {integrity: sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==}
|
||||
engines: {node: '>= 0.6'}
|
||||
@@ -631,10 +497,6 @@ packages:
|
||||
resolution: {integrity: sha512-z0yWI+4FDrrweS8Zmt4Ej5HdJmky15+L2e6Wgn3+iK5fWzb6T3fhNFq2+MeTRb064c6Wr4N/wv0DzQTjNzHNGQ==}
|
||||
engines: {node: '>=10'}
|
||||
|
||||
minimatch@10.1.1:
|
||||
resolution: {integrity: sha512-enIvLvRAFZYXJzkCYG5RKmPfrFArdLv+R+lbQ53BmIMLIry74bjKzX6iHAm8WYamJkhSSEabrWN5D97XnKObjQ==}
|
||||
engines: {node: 20 || >=22}
|
||||
|
||||
minimist@1.2.8:
|
||||
resolution: {integrity: sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==}
|
||||
|
||||
@@ -642,6 +504,10 @@ packages:
|
||||
resolution: {integrity: sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==}
|
||||
engines: {node: '>=16 || 14 >=14.17'}
|
||||
|
||||
minizlib@3.1.0:
|
||||
resolution: {integrity: sha512-KZxYo1BUkWD2TVFLr0MQoM8vUUigWD3LlD83a/75BqC+4qE0Hb1Vo5v1FgcfaNXvfXzr+5EhQ6ing/CaBijTlw==}
|
||||
engines: {node: '>= 18'}
|
||||
|
||||
mkdirp-classic@0.5.3:
|
||||
resolution: {integrity: sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A==}
|
||||
|
||||
@@ -659,53 +525,25 @@ packages:
|
||||
resolution: {integrity: sha512-zsFhmbkAzwhTft6nd3VxcG0cvJsT70rL+BIGHWVq5fi6MwGrHwzqKaxXE+Hl2GmnGItnDKPPkO5/LQqjVkIdFg==}
|
||||
engines: {node: '>=10'}
|
||||
|
||||
node-releases@2.0.27:
|
||||
resolution: {integrity: sha512-nmh3lCkYZ3grZvqcCH+fjmQ7X+H0OeZgP40OierEaAptX4XofMh5kwNbWh7lBduUzCcV/8kZ+NDLCwm2iorIlA==}
|
||||
|
||||
nwsapi@2.2.23:
|
||||
resolution: {integrity: sha512-7wfH4sLbt4M0gCDzGE6vzQBo0bfTKjU7Sfpqy/7gs1qBfYz2vEJH6vXcBKpO3+6Yu1telwd0t9HpyOoLEQQbIQ==}
|
||||
|
||||
once@1.4.0:
|
||||
resolution: {integrity: sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==}
|
||||
|
||||
ow@0.28.2:
|
||||
resolution: {integrity: sha512-dD4UpyBh/9m4X2NVjA+73/ZPBRF+uF4zIMFvvQsabMiEK8x41L3rQ8EENOi35kyyoaJwNxEeJcP6Fj1H4U409Q==}
|
||||
engines: {node: '>=12'}
|
||||
|
||||
parse5@7.3.0:
|
||||
resolution: {integrity: sha512-IInvU7fabl34qmi9gY8XOVxhYyMyuH2xUNpb2q8/Y+7552KlejkRvqvD19nMoUW/uQGGbqNpA6Tufu5FL5BZgw==}
|
||||
|
||||
path-scurry@2.0.1:
|
||||
resolution: {integrity: sha512-oWyT4gICAu+kaA7QWk/jvCHWarMKNs6pXOGWKDTr7cw4IGcUbW+PeTfbaQiLGheFRpjo6O9J0PmyMfQPjH71oA==}
|
||||
engines: {node: 20 || >=22}
|
||||
|
||||
picocolors@1.1.1:
|
||||
resolution: {integrity: sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==}
|
||||
|
||||
playwright-core@1.57.0:
|
||||
resolution: {integrity: sha512-agTcKlMw/mjBWOnD6kFZttAAGHgi/Nw0CZ2o6JqWSbMlI219lAFLZZCyqByTsvVAJq5XA5H8cA6PrvBRpBWEuQ==}
|
||||
engines: {node: '>=18'}
|
||||
hasBin: true
|
||||
|
||||
playwright-core@1.58.2:
|
||||
resolution: {integrity: sha512-yZkEtftgwS8CsfYo7nm0KE8jsvm6i/PTgVtB8DL726wNf6H2IMsDuxCpJj59KDaxCtSnrWan2AeDqM7JBaultg==}
|
||||
engines: {node: '>=18'}
|
||||
hasBin: true
|
||||
|
||||
playwright@1.58.2:
|
||||
resolution: {integrity: sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==}
|
||||
engines: {node: '>=18'}
|
||||
hasBin: true
|
||||
|
||||
prebuild-install@7.1.3:
|
||||
resolution: {integrity: sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug==}
|
||||
engines: {node: '>=10'}
|
||||
hasBin: true
|
||||
|
||||
progress@2.0.3:
|
||||
resolution: {integrity: sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==}
|
||||
engines: {node: '>=0.4.0'}
|
||||
|
||||
psl@1.15.0:
|
||||
resolution: {integrity: sha512-JZd3gMVBAVQkSs6HdNZo9Sdo0LNcQeMNP3CozBJb3JYC/QUYZTnKxP+f8oWRX4rHP5EurWxqAHTSwUCjlNKa1w==}
|
||||
|
||||
@@ -745,10 +583,6 @@ packages:
|
||||
safer-buffer@2.1.2:
|
||||
resolution: {integrity: sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==}
|
||||
|
||||
sax@1.4.4:
|
||||
resolution: {integrity: sha512-1n3r/tGXO6b6VXMdFT54SHzT9ytu9yr7TaELowdYpMqY/Ao7EnlQGmAQ1+RatX7Tkkdm6hONI2owqNx2aZj5Sw==}
|
||||
engines: {node: '>=11.0.0'}
|
||||
|
||||
saxes@6.0.0:
|
||||
resolution: {integrity: sha512-xAg7SOnEhrm5zI3puOOKyy1OMcMlIJZYNJY7xLBwSze0UjhPLnWfj2GF2EpT0jmzaJKIWKHLsaSSajf35bcYnA==}
|
||||
engines: {node: '>=v12.22.7'}
|
||||
@@ -781,9 +615,9 @@ packages:
|
||||
resolution: {integrity: sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ==}
|
||||
engines: {node: '>=6'}
|
||||
|
||||
tiny-lru@11.4.5:
|
||||
resolution: {integrity: sha512-hkcz3FjNJfKXjV4mjQ1OrXSLAehg8Hw+cEZclOVT+5c/cWQWImQ9wolzTjth+dmmDe++p3bme3fTxz6Q4Etsqw==}
|
||||
engines: {node: '>=12'}
|
||||
tar@7.5.11:
|
||||
resolution: {integrity: sha512-ChjMH33/KetonMTAtpYdgUFr0tbz69Fp2v7zWxQfYZX4g5ZN2nOBXm1R2xyA+lMIKrLKIoKAwFj93jE/avX9cQ==}
|
||||
engines: {node: '>=18'}
|
||||
|
||||
tough-cookie@4.1.4:
|
||||
resolution: {integrity: sha512-Loo5UUvLD9ScZ6jh8beX1T6sO1w2/MpCRpEP7V280GKMVUQ0Jzar2U3UJPsrdbziLEMMhu3Ujnq//rhiFuIeag==}
|
||||
@@ -793,9 +627,6 @@ packages:
|
||||
resolution: {integrity: sha512-hdF5ZgjTqgAntKkklYw0R03MG2x/bSzTtkxmIRw/sTNV8YXsCJ1tfLAX23lhxhHJlEf3CRCOCGGWw3vI3GaSPw==}
|
||||
engines: {node: '>=18'}
|
||||
|
||||
tslib@2.8.1:
|
||||
resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==}
|
||||
|
||||
tsx@4.21.0:
|
||||
resolution: {integrity: sha512-5C1sg4USs1lfG0GFb2RLXsdpXqBSEhAaA/0kPL01wxzpMqLILNxIxIOKiILz+cdg/pLnOUxFYOR5yhHU666wbw==}
|
||||
engines: {node: '>=18.0.0'}
|
||||
@@ -815,13 +646,6 @@ packages:
|
||||
engines: {node: '>=14.17'}
|
||||
hasBin: true
|
||||
|
||||
ua-is-frozen@0.1.2:
|
||||
resolution: {integrity: sha512-RwKDW2p3iyWn4UbaxpP2+VxwqXh0jpvdxsYpZ5j/MLLiQOfbsV5shpgQiw93+KMYQPcteeMQ289MaAFzs3G9pw==}
|
||||
|
||||
ua-parser-js@2.0.7:
|
||||
resolution: {integrity: sha512-CFdHVHr+6YfbktNZegH3qbYvYgC7nRNEUm2tk7nSFXSODUu4tDBpaFpP1jdXBUOKKwapVlWRfTtS8bCPzsQ47w==}
|
||||
hasBin: true
|
||||
|
||||
undici-types@7.16.0:
|
||||
resolution: {integrity: sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==}
|
||||
|
||||
@@ -829,22 +653,12 @@ packages:
|
||||
resolution: {integrity: sha512-CJ1QgKmNg3CwvAv/kOFmtnEN05f0D/cn9QntgNOQlQF9dgvVTHj3t+8JPdjqawCHk7V/KA+fbUqzZ9XWhcqPUg==}
|
||||
engines: {node: '>= 4.0.0'}
|
||||
|
||||
update-browserslist-db@1.2.3:
|
||||
resolution: {integrity: sha512-Js0m9cx+qOgDxo0eMiFGEueWztz+d4+M3rGlmKPT+T4IS/jP4ylw3Nwpu6cpTTP8R1MAC1kF4VbdLt3ARf209w==}
|
||||
hasBin: true
|
||||
peerDependencies:
|
||||
browserslist: '>= 4.21.0'
|
||||
|
||||
url-parse@1.5.10:
|
||||
resolution: {integrity: sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==}
|
||||
|
||||
util-deprecate@1.0.2:
|
||||
resolution: {integrity: sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==}
|
||||
|
||||
vali-date@1.0.0:
|
||||
resolution: {integrity: sha512-sgECfZthyaCKW10N0fm27cg8HYTFK5qMWgypqkXMQ4Wbl/zZKx7xZICgcoxIIE+WFAP/MBL2EFwC/YvLxw3Zeg==}
|
||||
engines: {node: '>=0.10.0'}
|
||||
|
||||
w3c-xmlserializer@5.0.0:
|
||||
resolution: {integrity: sha512-o8qghlI8NZHU1lLPrpi2+Uq7abh4GGPpYANlalzWxyWteJOCsr/P+oPBA49TOLu5FTZO4d3F9MnWJfiMo4BkmA==}
|
||||
engines: {node: '>=18'}
|
||||
@@ -885,17 +699,13 @@ packages:
|
||||
resolution: {integrity: sha512-EvGK8EJ3DhaHfbRlETOWAS5pO9MZITeauHKJyb8wyajUfQUenkIg2MvLDTZ4T/TgIcm3HU0TFBgWWboAZ30UHg==}
|
||||
engines: {node: '>=18'}
|
||||
|
||||
xml2js@0.6.2:
|
||||
resolution: {integrity: sha512-T4rieHaC1EXcES0Kxxj4JWgaUQHDk+qwHcYOCFHfiwKz7tOVPLq7Hjq9dM1WCMhylqMEfP7hMcOIChvotiZegA==}
|
||||
engines: {node: '>=4.0.0'}
|
||||
|
||||
xmlbuilder@11.0.1:
|
||||
resolution: {integrity: sha512-fDlsI/kFEx7gLvbecc0/ohLG50fugQp8ryHzMTuW9vSa1GJ0XYWKnhsUx7oie3G98+r56aTQIUB4kht42R3JvA==}
|
||||
engines: {node: '>=4.0'}
|
||||
|
||||
xmlchars@2.2.0:
|
||||
resolution: {integrity: sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw==}
|
||||
|
||||
yallist@5.0.0:
|
||||
resolution: {integrity: sha512-YgvUTfwqyc7UXVMrB+SImsVYSmTS8X/tSrtdNZMImM+n7+QTriRXyXim0mBrTXNeqzVF0KWGgHPeiyViFFrNDw==}
|
||||
engines: {node: '>=18'}
|
||||

snapshots:

'@asamuzakjp/css-color@3.2.0':
@@ -1004,18 +814,14 @@ snapshots:
'@esbuild/win32-x64@0.27.0':
optional: true

'@isaacs/balanced-match@4.0.1': {}

'@isaacs/brace-expansion@5.0.0':
'@isaacs/fs-minipass@4.0.1':
dependencies:
'@isaacs/balanced-match': 4.0.1
minipass: 7.1.2

'@mixmark-io/domino@2.2.0': {}

'@mozilla/readability@0.5.0': {}

'@sindresorhus/is@4.6.0': {}

'@types/jsdom@21.1.7':
dependencies:
'@types/node': 25.0.6
@@ -1032,16 +838,12 @@ snapshots:

'@types/turndown@5.0.6': {}

adm-zip@0.5.16: {}

agent-base@7.1.4: {}

asynckit@0.4.0: {}

base64-js@1.5.1: {}

baseline-browser-mapping@2.9.14: {}

better-sqlite3@12.6.2:
dependencies:
bindings: 1.5.0
@@ -1057,14 +859,6 @@ snapshots:
inherits: 2.0.4
readable-stream: 3.6.2

browserslist@4.28.1:
dependencies:
baseline-browser-mapping: 2.9.14
caniuse-lite: 1.0.30001764
electron-to-chromium: 1.5.267
node-releases: 2.0.27
update-browserslist-db: 1.2.3(browserslist@4.28.1)

buffer@5.7.1:
dependencies:
base64-js: 1.5.1
@@ -1075,33 +869,21 @@ snapshots:
es-errors: 1.3.0
function-bind: 1.1.2

callsites@3.1.0: {}

camoufox-js@0.8.5(playwright-core@1.57.0):
dependencies:
adm-zip: 0.5.16
better-sqlite3: 12.6.2
commander: 14.0.2
fingerprint-generator: 2.1.79
glob: 13.0.0
impit: 0.7.6
language-tags: 2.1.0
maxmind: 5.0.3
playwright-core: 1.57.0
progress: 2.0.3
ua-parser-js: 2.0.7
xml2js: 0.6.2

caniuse-lite@1.0.30001764: {}

chownr@1.1.4: {}

chownr@3.0.0: {}

cloakbrowser@0.3.14(mmdb-lib@3.0.1)(playwright-core@1.58.2):
dependencies:
tar: 7.5.11
optionalDependencies:
mmdb-lib: 3.0.1
playwright-core: 1.58.2

combined-stream@1.0.8:
dependencies:
delayed-stream: 1.0.0

commander@14.0.2: {}

cssstyle@4.6.0:
dependencies:
'@asamuzakjp/css-color': 3.2.0
@@ -1126,22 +908,14 @@ snapshots:

delayed-stream@1.0.0: {}

detect-europe-js@0.1.2: {}

detect-libc@2.1.2: {}

dot-prop@6.0.1:
dependencies:
is-obj: 2.0.0

dunder-proto@1.0.1:
dependencies:
call-bind-apply-helpers: 1.0.2
es-errors: 1.3.0
gopd: 1.2.0

electron-to-chromium@1.5.267: {}

end-of-stream@1.4.5:
dependencies:
once: 1.4.0
@@ -1192,18 +966,10 @@ snapshots:
'@esbuild/win32-ia32': 0.27.0
'@esbuild/win32-x64': 0.27.0

escalade@3.2.0: {}

expand-template@2.0.3: {}

file-uri-to-path@1.0.0: {}

fingerprint-generator@2.1.79:
dependencies:
generative-bayesian-network: 2.1.79
header-generator: 2.1.79
tslib: 2.8.1

form-data@4.0.5:
dependencies:
asynckit: 0.4.0
@@ -1214,19 +980,11 @@ snapshots:

fs-constants@1.0.0: {}

fsevents@2.3.2:
optional: true

fsevents@2.3.3:
optional: true

function-bind@1.1.2: {}

generative-bayesian-network@2.1.79:
dependencies:
adm-zip: 0.5.16
tslib: 2.8.1

get-intrinsic@1.3.0:
dependencies:
call-bind-apply-helpers: 1.0.2
@@ -1251,12 +1009,6 @@ snapshots:

github-from-package@0.0.0: {}

glob@13.0.0:
dependencies:
minimatch: 10.1.1
minipass: 7.1.2
path-scurry: 2.0.1

gopd@1.2.0: {}

has-symbols@1.1.0: {}
@@ -1269,13 +1021,6 @@ snapshots:
dependencies:
function-bind: 1.1.2

header-generator@2.1.79:
dependencies:
browserslist: 4.28.1
generative-bayesian-network: 2.1.79
ow: 0.28.2
tslib: 2.8.1

html-encoding-sniffer@4.0.0:
dependencies:
whatwg-encoding: 3.1.1
@@ -1300,51 +1045,12 @@ snapshots:

ieee754@1.2.1: {}

impit-darwin-arm64@0.7.6:
optional: true

impit-darwin-x64@0.7.6:
optional: true

impit-linux-arm64-gnu@0.7.6:
optional: true

impit-linux-arm64-musl@0.7.6:
optional: true

impit-linux-x64-gnu@0.7.6:
optional: true

impit-linux-x64-musl@0.7.6:
optional: true

impit-win32-arm64-msvc@0.7.6:
optional: true

impit-win32-x64-msvc@0.7.6:
optional: true

impit@0.7.6:
optionalDependencies:
impit-darwin-arm64: 0.7.6
impit-darwin-x64: 0.7.6
impit-linux-arm64-gnu: 0.7.6
impit-linux-arm64-musl: 0.7.6
impit-linux-x64-gnu: 0.7.6
impit-linux-x64-musl: 0.7.6
impit-win32-arm64-msvc: 0.7.6
impit-win32-x64-msvc: 0.7.6

inherits@2.0.4: {}

ini@1.3.8: {}

is-obj@2.0.0: {}

is-potential-custom-element-name@1.0.1: {}

is-standalone-pwa@0.1.1: {}

jsdom@24.1.3:
dependencies:
cssstyle: 4.6.0
@@ -1373,25 +1079,10 @@ snapshots:
- supports-color
- utf-8-validate

language-subtag-registry@0.3.23: {}

language-tags@2.1.0:
dependencies:
language-subtag-registry: 0.3.23

lodash.isequal@4.5.0: {}

lru-cache@10.4.3: {}

lru-cache@11.2.4: {}

math-intrinsics@1.1.0: {}

maxmind@5.0.3:
dependencies:
mmdb-lib: 3.0.1
tiny-lru: 11.4.5

mime-db@1.52.0: {}

mime-types@2.1.35:
@@ -1400,17 +1091,18 @@ snapshots:

mimic-response@3.1.0: {}

minimatch@10.1.1:
dependencies:
'@isaacs/brace-expansion': 5.0.0

minimist@1.2.8: {}

minipass@7.1.2: {}

minizlib@3.1.0:
dependencies:
minipass: 7.1.2

mkdirp-classic@0.5.3: {}

mmdb-lib@3.0.1: {}
mmdb-lib@3.0.1:
optional: true

ms@2.1.3: {}

@@ -1420,43 +1112,18 @@ snapshots:
dependencies:
semver: 7.7.3

node-releases@2.0.27: {}

nwsapi@2.2.23: {}

once@1.4.0:
dependencies:
wrappy: 1.0.2

ow@0.28.2:
dependencies:
'@sindresorhus/is': 4.6.0
callsites: 3.1.0
dot-prop: 6.0.1
lodash.isequal: 4.5.0
vali-date: 1.0.0

parse5@7.3.0:
dependencies:
entities: 6.0.1

path-scurry@2.0.1:
dependencies:
lru-cache: 11.2.4
minipass: 7.1.2

picocolors@1.1.1: {}

playwright-core@1.57.0: {}

playwright-core@1.58.2: {}

playwright@1.58.2:
dependencies:
playwright-core: 1.58.2
optionalDependencies:
fsevents: 2.3.2

prebuild-install@7.1.3:
dependencies:
detect-libc: 2.1.2
@@ -1472,8 +1139,6 @@ snapshots:
tar-fs: 2.1.4
tunnel-agent: 0.6.0

progress@2.0.3: {}

psl@1.15.0:
dependencies:
punycode: 2.3.1
@@ -1512,8 +1177,6 @@ snapshots:

safer-buffer@2.1.2: {}

sax@1.4.4: {}

saxes@6.0.0:
dependencies:
xmlchars: 2.2.0
@@ -1551,7 +1214,13 @@ snapshots:
inherits: 2.0.4
readable-stream: 3.6.2

tiny-lru@11.4.5: {}
tar@7.5.11:
dependencies:
'@isaacs/fs-minipass': 4.0.1
chownr: 3.0.0
minipass: 7.1.2
minizlib: 3.1.0
yallist: 5.0.0

tough-cookie@4.1.4:
dependencies:
@@ -1564,8 +1233,6 @@ snapshots:
dependencies:
punycode: 2.3.1

tslib@2.8.1: {}

tsx@4.21.0:
dependencies:
esbuild: 0.27.0
@@ -1585,24 +1252,10 @@ snapshots:

typescript@5.9.3: {}

ua-is-frozen@0.1.2: {}

ua-parser-js@2.0.7:
dependencies:
detect-europe-js: 0.1.2
is-standalone-pwa: 0.1.1
ua-is-frozen: 0.1.2

undici-types@7.16.0: {}

universalify@0.2.0: {}

update-browserslist-db@1.2.3(browserslist@4.28.1):
dependencies:
browserslist: 4.28.1
escalade: 3.2.0
picocolors: 1.1.1

url-parse@1.5.10:
dependencies:
querystringify: 2.2.0
@@ -1610,8 +1263,6 @@ snapshots:

util-deprecate@1.0.2: {}

vali-date@1.0.0: {}

w3c-xmlserializer@5.0.0:
dependencies:
xml-name-validator: 5.0.0
@@ -1635,11 +1286,6 @@ snapshots:

xml-name-validator@5.0.0: {}

xml2js@0.6.2:
dependencies:
sax: 1.4.4
xmlbuilder: 11.0.1

xmlbuilder@11.0.1: {}

xmlchars@2.2.0: {}

yallist@5.0.0: {}

@@ -3,7 +3,7 @@ import { getPage } from './browse.js';
|
||||
|
||||
const baseUrl = 'http://localhost:3000';
|
||||
const username = 'analyst@fhb.local';
|
||||
const password = process.env.CAMOUFOX_PASSWORD ?? '';
|
||||
const password = process.env.CLOAKBROWSER_PASSWORD ?? '';
|
||||
|
||||
const reportPath = '/Users/stefano.fiorini/Documents/projects/fhb-loan-spreading-pilot-a/docs/plans/2026-01-24-financials-analysis-redesign/web-automation-scan.md';
|
||||
|
||||
|
||||
@@ -1,28 +1,25 @@
-import { Camoufox } from 'camoufox-js';
+import { launchPersistentContext } from 'cloakbrowser';
 import { homedir } from 'os';
 import { join } from 'path';
 import { mkdirSync, existsSync } from 'fs';
 
 async function test() {
-  const profilePath = join(homedir(), '.camoufox-profile');
+  const profilePath = join(homedir(), '.cloakbrowser-profile');
   if (!existsSync(profilePath)) {
     mkdirSync(profilePath, { recursive: true });
   }
 
   console.log('Profile path:', profilePath);
-  console.log('Launching with full options...');
+  console.log('Launching CloakBrowser with full options...');
 
-  const browser = await Camoufox({
+  const browser = await launchPersistentContext({
     headless: true,
-    user_data_dir: profilePath,
-    // humanize: 1.5, // Test without this first
-    // geoip: true, // Test without this first
-    // enable_cache: true,
-    // block_webrtc: false,
+    userDataDir: profilePath,
+    humanize: true,
   });
 
   console.log('Browser launched');
-  const page = await browser.newPage();
+  const page = browser.pages()[0] || await browser.newPage();
   console.log('Page created');
 
   await page.goto('https://github.com', { timeout: 30000 });
@@ -1,10 +1,11 @@
-import { Camoufox } from 'camoufox-js';
+import { launch } from 'cloakbrowser';
 
 async function test() {
-  console.log('Launching Camoufox with minimal config...');
+  console.log('Launching CloakBrowser with minimal config...');
 
-  const browser = await Camoufox({
+  const browser = await launch({
     headless: true,
+    humanize: true,
   });
 
   console.log('Browser launched');
@@ -1,24 +1,25 @@
-import { Camoufox } from 'camoufox-js';
+import { launchPersistentContext } from 'cloakbrowser';
 import { homedir } from 'os';
 import { join } from 'path';
 import { mkdirSync, existsSync } from 'fs';
 
 async function test() {
-  const profilePath = join(homedir(), '.camoufox-profile');
+  const profilePath = join(homedir(), '.cloakbrowser-profile');
   if (!existsSync(profilePath)) {
     mkdirSync(profilePath, { recursive: true });
   }
 
   console.log('Profile path:', profilePath);
-  console.log('Launching with user_data_dir...');
+  console.log('Launching with persistent userDataDir...');
 
-  const browser = await Camoufox({
+  const browser = await launchPersistentContext({
     headless: true,
-    user_data_dir: profilePath,
+    userDataDir: profilePath,
+    humanize: true,
   });
 
   console.log('Browser launched');
-  const page = await browser.newPage();
+  const page = browser.pages()[0] || await browser.newPage();
   console.log('Page created');
 
   await page.goto('https://example.com', { timeout: 30000 });
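The script diffs above repeat the same mechanical changes by hand: snake_case camoufox-js launch options become camelCase for cloakbrowser, and options with no counterpart in the new scripts (`geoip`, `enable_cache`, `block_webrtc`) are simply dropped. A hypothetical helper (`migrateLaunchOptions` is not part of this repo, and it assumes only the renames these diffs actually show) could centralize that mapping if more scripts need migrating:

```typescript
type LaunchOpts = Record<string, unknown>;

// Sketch of the per-script edits seen in the diffs: rename user_data_dir ->
// userDataDir, pass everything else through unchanged, and drop the options
// the migrated scripts no longer set rather than guessing new names for them.
function migrateLaunchOptions(old: LaunchOpts): LaunchOpts {
  const { user_data_dir, geoip, enable_cache, block_webrtc, ...rest } = old;
  const migrated: LaunchOpts = { ...rest };
  if (user_data_dir !== undefined) {
    migrated.userDataDir = user_data_dir; // snake_case -> camelCase
  }
  // geoip / enable_cache / block_webrtc: intentionally discarded, matching
  // how the hand-migrated scripts above removed them.
  void geoip; void enable_cache; void block_webrtc;
  return migrated;
}
```

This keeps the actual `launchPersistentContext({ ... })` call sites unchanged while the renaming logic lives in one place.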