3 Commits

Author            SHA1        Message                                                  Date
Stefano Fiorini   6e2fd17734  refactor: consolidate web scraping into web-automation   2026-03-10 19:24:17 -05:00
Stefano Fiorini   4b505e4421  feat: add safe Playwright scraper skill                   2026-03-10 19:07:30 -05:00
Stefano Fiorini   60363f9f0c  chore: ignore local worktrees                             2026-03-10 18:59:11 -05:00
8 changed files with 328 additions and 8 deletions

.gitignore (vendored, new file, 1 addition)

@@ -0,0 +1 @@
.worktrees/


@@ -20,7 +20,7 @@ This repository contains practical OpenClaw skills and companion integrations. I
| `gitea-api` | Interact with Gitea via REST API (repos, issues, PRs, releases, branches, user info). | `skills/gitea-api` |
| `portainer` | Manage Portainer stacks via API (list, start/stop/restart, update, prune images). | `skills/portainer` |
| `searxng` | Search through a local or self-hosted SearXNG instance for web, news, images, and more. | `skills/searxng` |
| `web-automation` | Automate browsing/scraping with Playwright + Camoufox (auth flows, extraction, bot-protected sites). | `skills/web-automation` |
| `web-automation` | One-shot extraction plus broader browsing/scraping with Playwright + Camoufox (auth flows, extraction, bot-protected sites). | `skills/web-automation` |
## Integrations


@@ -8,8 +8,7 @@ This folder contains detailed docs for each skill in this repository.
- [`gitea-api`](gitea-api.md) — REST-based Gitea automation (no `tea` CLI required)
- [`portainer`](portainer.md) — Portainer stack management (list, lifecycle, updates, image pruning)
- [`searxng`](searxng.md) — Privacy-respecting metasearch via a local or self-hosted SearXNG instance
- [`web-automation`](web-automation.md) — Playwright + Camoufox browser automation and scraping
- [`web-automation`](web-automation.md) — One-shot extraction plus Playwright + Camoufox browser automation and scraping
## Integrations


@@ -1,14 +1,21 @@
# web-automation
Automated web browsing and scraping using Playwright with Camoufox anti-detection browser.
Automated web browsing and scraping using Playwright, with one-shot extraction and broader Camoufox-based automation under a single skill.
## What this skill is for
- One-shot extraction from one URL with JSON output
- Automating web workflows
- Authenticated session flows (logins/cookies)
- Extracting page content to markdown
- Working with bot-protected or dynamic pages
## Command selection
- Use `node skills/web-automation/scripts/extract.js "<URL>"` for one-shot extraction from a single URL
- Use `npx tsx scrape.ts ...` for markdown scraping modes
- Use `npx tsx browse.ts ...`, `auth.ts`, or `flow.ts` for interactive or authenticated flows
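The selection rules above can be sketched as a small decision helper. This is a hypothetical illustration only (the function name and task shape are not part of the skill); the skill itself expects you to pick the command directly.

```javascript
// Hypothetical helper mirroring the command-selection rules above.
// task: { urls: number, wantsMarkdown: boolean, needsSession: boolean }
function pickCommand(task) {
  // Interactive navigation, logins, or multi-step flows need the stateful scripts.
  if (task.needsSession) return "npx tsx browse.ts / auth.ts / flow.ts";
  // Markdown output and Readability-style extraction belong to scrape.ts.
  if (task.wantsMarkdown) return "npx tsx scrape.ts";
  // A single URL with JSON output is the extract.js one-shot path.
  if (task.urls === 1) return "node skills/web-automation/scripts/extract.js";
  return "npx tsx scrape.ts";
}

console.log(pickCommand({ urls: 1, wantsMarkdown: false, needsSession: false }));
```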
## Requirements
- Node.js 20+
@@ -20,6 +27,7 @@ Automated web browsing and scraping using Playwright with Camoufox anti-detectio
```bash
cd ~/.openclaw/workspace/skills/web-automation/scripts
pnpm install
npx playwright install chromium
npx camoufox-js fetch
pnpm approve-builds
pnpm rebuild better-sqlite3 esbuild
@@ -45,6 +53,9 @@ Without this, `browse.ts` and `scrape.ts` may fail before launch because the nat
## Common commands
```bash
# One-shot JSON extraction
node skills/web-automation/scripts/extract.js "https://example.com"
# Browse a page
npx tsx browse.ts --url "https://example.com"
@@ -58,6 +69,41 @@ npx tsx auth.ts --url "https://example.com/login"
npx tsx flow.ts --instruction 'go to https://search.fiorinis.com then type "pippo" then press enter then wait 2s'
```
## One-shot extraction (`extract.js`)
Use `extract.js` when the task is just: open one URL, render it, and return structured content.
### Features
- JavaScript rendering
- Lightweight stealth and bounded anti-bot shaping
- JSON-only output
- Optional screenshot and saved HTML
- Browser sandbox left enabled
### Options
```bash
WAIT_TIME=5000 node skills/web-automation/scripts/extract.js "https://example.com"
SCREENSHOT_PATH=/tmp/page.png node skills/web-automation/scripts/extract.js "https://example.com"
SAVE_HTML=true node skills/web-automation/scripts/extract.js "https://example.com"
HEADLESS=false node skills/web-automation/scripts/extract.js "https://example.com"
USER_AGENT="Mozilla/5.0 ..." node skills/web-automation/scripts/extract.js "https://example.com"
```
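Note that `WAIT_TIME` is bounded: per the parsing logic in `extract.js`, invalid or negative values fall back to the 5000 ms default and anything above 20000 ms is capped. The clamping, reproduced standalone:

```javascript
// Same clamping as extract.js: bad input -> 5000 ms default, hard cap at 20000 ms.
const DEFAULT_WAIT_MS = 5000;
const MAX_WAIT_MS = 20000;

function parseWaitTime(raw) {
  const value = Number.parseInt(raw || `${DEFAULT_WAIT_MS}`, 10);
  if (!Number.isFinite(value) || value < 0) return DEFAULT_WAIT_MS;
  return Math.min(value, MAX_WAIT_MS);
}

console.log(parseWaitTime("5000"));   // → 5000
console.log(parseWaitTime("999999")); // → 20000 (capped)
console.log(parseWaitTime("nope"));   // → 5000 (fallback)
```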
### Output fields
- `requestedUrl`
- `finalUrl`
- `title`
- `content`
- `metaDescription`
- `status`
- `elapsedSeconds`
- `challengeDetected`
- optional `screenshot`
- optional `htmlFile`
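A caller can parse this JSON and use `status` and `challengeDetected` to decide whether the fetch is trustworthy. A minimal sketch, with an illustrative sample payload (the field values here are made up; only the shape matches the output above):

```javascript
// Sample result shaped like extract.js output; values are illustrative only.
const raw = JSON.stringify({
  requestedUrl: "https://example.com",
  finalUrl: "https://example.com/",
  title: "Example Domain",
  content: "Example Domain ...",
  metaDescription: "",
  status: 200,
  elapsedSeconds: "1.42",
  challengeDetected: false
});

const result = JSON.parse(raw);
// Treat a detected challenge or a non-2xx/3xx status as a soft failure worth retrying.
const ok = !result.challengeDetected && result.status !== null && result.status < 400;
console.log(ok ? `ok: ${result.title}` : "retry, or fall back to browse.ts");
```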
## Natural-language flow runner (`flow.ts`)
Use `flow.ts` when you want a general command style like:


@@ -1,11 +1,20 @@
---
name: web-automation
description: Browse and scrape web pages using Playwright with Camoufox anti-detection browser. Use when automating web workflows, extracting page content to markdown, handling authenticated sessions, or scraping websites with bot protection.
description: Browse and scrape web pages using Playwright with Camoufox anti-detection browser. Use when automating web workflows, extracting rendered page content, handling authenticated sessions, or scraping websites with bot protection.
---
# Web Automation with Camoufox (Codex)
Automated web browsing and scraping using Playwright with Camoufox anti-detection browser.
Automated web browsing and scraping using Playwright with two execution paths under one skill:
- one-shot extraction via `extract.js`
- broader stateful automation via Camoufox and the existing `auth.ts`, `browse.ts`, `flow.ts`, and `scrape.ts`
## When To Use Which Command
- Use `node scripts/extract.js "<URL>"` for one-shot extraction from a single URL when you need rendered content, bounded stealth behavior, and JSON output.
- Use `npx tsx scrape.ts ...` when you need markdown output, Readability extraction, full-page cleanup, or selector-based scraping.
- Use `npx tsx browse.ts ...`, `auth.ts`, or `flow.ts` when the task needs interactive navigation, persistent sessions, login handling, click/type actions, or multi-step workflows.
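`extract.js` also guards its input: only absolute `http:`/`https:` URLs are accepted, and anything else fails with a JSON error before a browser launches. The guard, reproduced as a standalone predicate:

```javascript
// Mirrors the URL guard in extract.js: only absolute http/https URLs are allowed.
function isAllowedTarget(rawUrl) {
  let parsed;
  try {
    parsed = new URL(rawUrl); // throws on relative or malformed input
  } catch {
    return false;
  }
  return ["http:", "https:"].includes(parsed.protocol);
}

console.log(isAllowedTarget("https://example.com")); // → true
console.log(isAllowedTarget("file:///etc/passwd"));  // → false
console.log(isAllowedTarget("not a url"));           // → false
```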
## Requirements
@@ -18,6 +27,7 @@ Automated web browsing and scraping using Playwright with Camoufox anti-detectio
```bash
cd ~/.openclaw/workspace/skills/web-automation/scripts
pnpm install
npx playwright install chromium
npx camoufox-js fetch
pnpm approve-builds
pnpm rebuild better-sqlite3 esbuild
@@ -29,13 +39,13 @@ Before running any automation, verify Playwright + Camoufox dependencies are ins
```bash
cd ~/.openclaw/workspace/skills/web-automation/scripts
node -e "require.resolve('playwright-core/package.json');require.resolve('camoufox-js/package.json');console.log('OK: playwright-core + camoufox-js installed')"
node -e "require.resolve('playwright/package.json');require.resolve('playwright-core/package.json');require.resolve('camoufox-js/package.json');console.log('OK: playwright + playwright-core + camoufox-js installed')"
node -e "const fs=require('fs');const t=fs.readFileSync('browse.ts','utf8');if(!/camoufox-js/.test(t)){throw new Error('browse.ts is not configured for Camoufox')}console.log('OK: Camoufox integration detected in browse.ts')"
```
If any check fails, stop and return:
"Missing dependency/config: web-automation requires `playwright-core` + `camoufox-js` and Camoufox-based scripts. Run setup in this skill, then retry."
"Missing dependency/config: web-automation requires `playwright`, `playwright-core`, and `camoufox-js` with Camoufox-based scripts. Run setup in this skill, then retry."
If runtime fails with missing native bindings for `better-sqlite3` or `esbuild`, run:
@@ -47,11 +57,35 @@ pnpm rebuild better-sqlite3 esbuild
## Quick Reference
- One-shot JSON extract: `node scripts/extract.js "https://example.com"`
- Browse page: `npx tsx browse.ts --url "https://example.com"`
- Scrape markdown: `npx tsx scrape.ts --url "https://example.com" --mode main --output page.md`
- Authenticate: `npx tsx auth.ts --url "https://example.com/login"`
- Natural-language flow: `npx tsx flow.ts --instruction 'go to https://example.com then click on "Login" then type "user@example.com" in #email then press enter'`
## One-shot extraction
Use `extract.js` when you need a single-page fetch with JavaScript rendering and lightweight anti-bot shaping, but not a full automation session.
```bash
node scripts/extract.js "https://example.com"
WAIT_TIME=5000 node scripts/extract.js "https://example.com"
SCREENSHOT_PATH=/tmp/page.png SAVE_HTML=true node scripts/extract.js "https://example.com"
```
Output is JSON only and includes fields such as:
- `requestedUrl`
- `finalUrl`
- `title`
- `content`
- `metaDescription`
- `status`
- `elapsedSeconds`
- `challengeDetected`
- optional `screenshot`
- optional `htmlFile`
## General flow runner
Use `flow.ts` for multi-step commands in plain language (go/click/type/press/wait/screenshot).
@@ -67,3 +101,4 @@ npx tsx flow.ts --instruction 'go to https://search.fiorinis.com then type "pipp
- Sessions persist in Camoufox profile storage.
- Use `--wait` for dynamic pages.
- Use `--mode selector --selector "..."` for targeted extraction.
- `extract.js` keeps stealth and bounded anti-bot shaping while keeping the Chromium sandbox enabled.
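The `challengeDetected` flag comes from a heuristic in `extract.js` that scans page text for common interstitial phrases (plus challenge/Cloudflare iframes, which need a live DOM and are omitted here). The text-marker half, standalone:

```javascript
// Text-marker half of extract.js's challenge heuristic (iframe checks omitted).
const CHALLENGE_MARKERS = [
  "checking your browser",
  "just a moment",
  "verify you are human",
  "press and hold"
];

function looksLikeChallenge(bodyText) {
  const text = (bodyText || "").toLowerCase();
  return CHALLENGE_MARKERS.some((marker) => text.includes(marker));
}

console.log(looksLikeChallenge("Just a moment...")); // → true
console.log(looksLikeChallenge("Example Domain"));   // → false
```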


@@ -0,0 +1,208 @@
#!/usr/bin/env node
import fs from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
// Bounds that keep a one-shot run fast and deterministic.
const DEFAULT_WAIT_MS = 5000;
const MAX_WAIT_MS = 20000;
const NAV_TIMEOUT_MS = 30000;
const EXTRA_CHALLENGE_WAIT_MS = 8000;
const CONTENT_LIMIT = 12000;
const DEFAULT_USER_AGENT =
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
function fail(message, details) {
const payload = { error: message };
if (details) payload.details = details;
process.stderr.write(`${JSON.stringify(payload)}\n`);
process.exit(1);
}
function parseWaitTime(raw) {
const value = Number.parseInt(raw || `${DEFAULT_WAIT_MS}`, 10);
if (!Number.isFinite(value) || value < 0) return DEFAULT_WAIT_MS;
return Math.min(value, MAX_WAIT_MS);
}
function parseTarget(rawUrl) {
if (!rawUrl) {
fail("Missing URL. Usage: node skills/web-automation/scripts/extract.js <URL>");
}
let parsed;
try {
parsed = new URL(rawUrl);
} catch (error) {
fail("Invalid URL.", error.message);
}
if (!["http:", "https:"].includes(parsed.protocol)) {
fail("Only http and https URLs are allowed.");
}
return parsed.toString();
}
function ensureParentDir(filePath) {
if (!filePath) return;
fs.mkdirSync(path.dirname(filePath), { recursive: true });
}
// Heuristically detect bot-challenge interstitials via common phrases and challenge iframes.
async function detectChallenge(page) {
try {
return await page.evaluate(() => {
const text = (document.body?.innerText || "").toLowerCase();
return (
text.includes("checking your browser") ||
text.includes("just a moment") ||
text.includes("verify you are human") ||
text.includes("press and hold") ||
document.querySelector('iframe[src*="challenge"]') !== null ||
document.querySelector('iframe[src*="cloudflare"]') !== null
);
});
} catch {
return false;
}
}
async function loadPlaywright() {
try {
return await import("playwright");
} catch (error) {
fail(
"Playwright is not installed for this skill. Run pnpm install and npx playwright install chromium in skills/web-automation/scripts first.",
error.message
);
}
}
async function main() {
const requestedUrl = parseTarget(process.argv[2]);
const waitTime = parseWaitTime(process.env.WAIT_TIME);
const screenshotPath = process.env.SCREENSHOT_PATH || "";
const saveHtml = process.env.SAVE_HTML === "true";
const headless = process.env.HEADLESS !== "false";
const userAgent = process.env.USER_AGENT || DEFAULT_USER_AGENT;
const startedAt = Date.now();
const { chromium } = await loadPlaywright();
let browser;
try {
browser = await chromium.launch({
headless,
ignoreDefaultArgs: ["--enable-automation"],
args: [
"--disable-blink-features=AutomationControlled",
"--disable-features=IsolateOrigins,site-per-process"
]
});
const context = await browser.newContext({
userAgent,
locale: "en-US",
viewport: { width: 1440, height: 900 },
extraHTTPHeaders: {
"Accept-Language": "en-US,en;q=0.9",
Accept: "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
}
});
// Light stealth shim: mask common automation fingerprints before page scripts run.
await context.addInitScript(() => {
Object.defineProperty(navigator, "webdriver", {
get: () => false
});
Object.defineProperty(navigator, "languages", {
get: () => ["en-US", "en"]
});
Object.defineProperty(navigator, "plugins", {
get: () => [1, 2, 3, 4, 5]
});
window.chrome = window.chrome || { runtime: {} };
const originalQuery = window.navigator.permissions?.query?.bind(window.navigator.permissions);
if (originalQuery) {
window.navigator.permissions.query = (parameters) => {
if (parameters?.name === "notifications") {
return Promise.resolve({ state: Notification.permission });
}
return originalQuery(parameters);
};
}
});
const page = await context.newPage();
const response = await page.goto(requestedUrl, {
waitUntil: "domcontentloaded",
timeout: NAV_TIMEOUT_MS
});
await page.waitForTimeout(waitTime);
let challengeDetected = await detectChallenge(page);
if (challengeDetected) {
await page.waitForTimeout(EXTRA_CHALLENGE_WAIT_MS);
challengeDetected = await detectChallenge(page);
}
const extracted = await page.evaluate((contentLimit) => {
const bodyText = document.body?.innerText || "";
return {
finalUrl: window.location.href,
title: document.title || "",
content: bodyText.slice(0, contentLimit),
metaDescription:
document.querySelector('meta[name="description"]')?.content ||
document.querySelector('meta[property="og:description"]')?.content ||
""
};
}, CONTENT_LIMIT);
const result = {
requestedUrl,
finalUrl: extracted.finalUrl,
title: extracted.title,
content: extracted.content,
metaDescription: extracted.metaDescription,
status: response ? response.status() : null,
challengeDetected,
elapsedSeconds: ((Date.now() - startedAt) / 1000).toFixed(2)
};
if (screenshotPath) {
ensureParentDir(screenshotPath);
await page.screenshot({ path: screenshotPath, fullPage: false, timeout: 10000 });
result.screenshot = screenshotPath;
}
if (saveHtml) {
const htmlTarget = screenshotPath
? screenshotPath.replace(/\.[^.]+$/, ".html")
: path.resolve(__dirname, `page-${Date.now()}.html`);
ensureParentDir(htmlTarget);
fs.writeFileSync(htmlTarget, await page.content());
result.htmlFile = htmlTarget;
}
process.stdout.write(`${JSON.stringify(result, null, 2)}\n`);
await browser.close();
} catch (error) {
if (browser) {
try {
await browser.close();
} catch {
// Ignore close errors after the primary failure.
}
}
fail("Scrape failed.", error.message);
}
}
main();


@@ -4,6 +4,7 @@
"description": "Web browsing and scraping scripts using Camoufox",
"type": "module",
"scripts": {
"extract": "node extract.js",
"browse": "tsx browse.ts",
"scrape": "tsx scrape.ts",
"fetch-browser": "npx camoufox-js fetch"
@@ -14,6 +15,7 @@
"camoufox-js": "^0.8.5",
"jsdom": "^24.0.0",
"minimist": "^1.2.8",
"playwright": "^1.58.2",
"playwright-core": "^1.40.0",
"turndown": "^7.1.2",
"turndown-plugin-gfm": "^1.0.2"


@@ -23,6 +23,9 @@ importers:
minimist:
specifier: ^1.2.8
version: 1.2.8
playwright:
specifier: ^1.58.2
version: 1.58.2
playwright-core:
specifier: ^1.40.0
version: 1.57.0
@@ -440,6 +443,11 @@ packages:
fs-constants@1.0.0:
resolution: {integrity: sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow==}
fsevents@2.3.2:
resolution: {integrity: sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==}
engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0}
os: [darwin]
fsevents@2.3.3:
resolution: {integrity: sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==}
engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0}
@@ -679,6 +687,16 @@ packages:
engines: {node: '>=18'}
hasBin: true
playwright-core@1.58.2:
resolution: {integrity: sha512-yZkEtftgwS8CsfYo7nm0KE8jsvm6i/PTgVtB8DL726wNf6H2IMsDuxCpJj59KDaxCtSnrWan2AeDqM7JBaultg==}
engines: {node: '>=18'}
hasBin: true
playwright@1.58.2:
resolution: {integrity: sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==}
engines: {node: '>=18'}
hasBin: true
prebuild-install@7.1.3:
resolution: {integrity: sha512-8Mf2cbV7x1cXPUILADGI3wuhfqWvtiLA1iclTDbFRZkgRQS0NqsPZphna9V+HyTEadheuPmjaJMsbzKQFOzLug==}
engines: {node: '>=10'}
@@ -1196,6 +1214,9 @@ snapshots:
fs-constants@1.0.0: {}
fsevents@2.3.2:
optional: true
fsevents@2.3.3:
optional: true
@@ -1428,6 +1449,14 @@ snapshots:
playwright-core@1.57.0: {}
playwright-core@1.58.2: {}
playwright@1.58.2:
dependencies:
playwright-core: 1.58.2
optionalDependencies:
fsevents: 2.3.2
prebuild-install@7.1.3:
dependencies:
detect-libc: 2.1.2