feat: add safe Playwright scraper skill

Stefano Fiorini
2026-03-10 19:07:30 -05:00
parent 60363f9f0c
commit 4b505e4421
10 changed files with 430 additions and 0 deletions


@@ -18,6 +18,7 @@ This repository contains practical OpenClaw skills and companion integrations. I
|---|---|---|
| `elevenlabs-stt` | Transcribe local audio files with ElevenLabs Speech-to-Text, with diarization, language hints, event tags, and JSON output. | `skills/elevenlabs-stt` |
| `gitea-api` | Interact with Gitea via REST API (repos, issues, PRs, releases, branches, user info). | `skills/gitea-api` |
| `playwright-safe` | Single-entry Playwright scraper for one-shot extraction with JS rendering and moderate anti-bot handling. | `skills/playwright-safe` |
| `portainer` | Manage Portainer stacks via API (list, start/stop/restart, update, prune images). | `skills/portainer` |
| `searxng` | Search through a local or self-hosted SearXNG instance for web, news, images, and more. | `skills/searxng` |
| `web-automation` | Automate browsing/scraping with Playwright + Camoufox (auth flows, extraction, bot-protected sites). | `skills/web-automation` |


@@ -6,6 +6,7 @@ This folder contains detailed docs for each skill in this repository.
- [`elevenlabs-stt`](elevenlabs-stt.md) — Local audio transcription through ElevenLabs Speech-to-Text
- [`gitea-api`](gitea-api.md) — REST-based Gitea automation (no `tea` CLI required)
- [`playwright-safe`](playwright-safe.md) — Single-entry Playwright scraper for one-shot extraction with JS rendering and moderate anti-bot handling
- [`portainer`](portainer.md) — Portainer stack management (list, lifecycle, updates, image pruning)
- [`searxng`](searxng.md) — Privacy-respecting metasearch via a local or self-hosted SearXNG instance
- [`web-automation`](web-automation.md) — Playwright + Camoufox browser automation and scraping

docs/playwright-safe.md Normal file

@@ -0,0 +1,72 @@
# playwright-safe
Single-entry Playwright scraper for one-shot page extraction with JavaScript rendering and moderate anti-bot handling.
## What this skill is for
- Extracting title, visible text, and metadata from one URL
- Pages that need client-side rendering
- Moderate anti-bot shaping without a full browser automation workflow
- Structured JSON output that agents can consume directly
## What this skill is not for
- Multi-step browser workflows
- Authenticated login flows
- Interactive click/type sequences across multiple pages
Use `web-automation` for those broader browser tasks.
## Runtime requirements
- Node.js 18+
- Local Playwright install under the skill directory
## First-time setup
```bash
cd ~/.openclaw/workspace/skills/playwright-safe
npm install
npx playwright install chromium
```
## Entry point
```bash
node skills/playwright-safe/scripts/playwright-safe.js "<URL>"
```
Only pass a user-provided `http` or `https` URL.
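The script enforces this with a protocol allow-list before navigating (`parseTarget` in `scripts/playwright-safe.js`). A minimal standalone sketch of the same check, with `isAllowedTarget` as an illustrative name:

```javascript
// Sketch of the protocol allow-list applied to the target URL.
// Mirrors the parseTarget guard in scripts/playwright-safe.js.
function isAllowedTarget(rawUrl) {
  let parsed;
  try {
    parsed = new URL(rawUrl);
  } catch {
    return false; // not a parseable URL at all
  }
  // Only plain web URLs pass; file:, javascript:, data:, etc. are rejected.
  return ["http:", "https:"].includes(parsed.protocol);
}

console.log(isAllowedTarget("https://example.com")); // true
console.log(isAllowedTarget("file:///etc/passwd")); // false
```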
## Options
```bash
WAIT_TIME=5000 node skills/playwright-safe/scripts/playwright-safe.js "<URL>"
SCREENSHOT_PATH=/tmp/page.png node skills/playwright-safe/scripts/playwright-safe.js "<URL>"
SAVE_HTML=true node skills/playwright-safe/scripts/playwright-safe.js "<URL>"
HEADLESS=false node skills/playwright-safe/scripts/playwright-safe.js "<URL>"
USER_AGENT="Mozilla/5.0 ..." node skills/playwright-safe/scripts/playwright-safe.js "<URL>"
```
## Output
The script prints JSON only. It includes:
- `requestedUrl`
- `finalUrl`
- `title`
- `content`
- `metaDescription`
- `status`
- `elapsedSeconds`
- `challengeDetected`
- optional `screenshot`
- optional `htmlFile`
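A hypothetical result payload (values are made up; keys follow the field list above, and `elapsedSeconds` is emitted as a string), together with one way an agent might sanity-check it before trusting `content`:

```javascript
// Illustrative result payload; values are invented, keys match the
// fields documented above.
const result = {
  requestedUrl: "https://example.com/",
  finalUrl: "https://example.com/",
  title: "Example Domain",
  content: "Example Domain. This domain is for use in illustrative examples...",
  metaDescription: "",
  status: 200,
  elapsedSeconds: "3.42",
  challengeDetected: false
};

// Treat a detected challenge or a non-2xx/null status as a soft failure.
// isUsable is an illustrative helper, not part of the skill.
function isUsable(r) {
  return !r.challengeDetected && r.status !== null && r.status >= 200 && r.status < 300;
}

console.log(isUsable(result)); // true
```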
## Security posture
- Keeps lightweight stealth and anti-bot shaping
- Keeps the browser sandbox enabled
- Does not use `--no-sandbox`
- Does not use `--disable-setuid-sandbox`
- Avoids site-specific extractors and cross-skill dependencies


@@ -9,6 +9,11 @@ Automated web browsing and scraping using Playwright with Camoufox anti-detectio
- Extracting page content to markdown
- Working with bot-protected or dynamic pages
## Routing rule
- For one-shot page extraction from a single URL, prefer `playwright-safe`
- Use `web-automation` only when the task needs interactive browser control, multi-step navigation, or authenticated flows
## Requirements
- Node.js 20+

skills/playwright-safe/.gitignore vendored Normal file

@@ -0,0 +1,3 @@
node_modules/
*.png
*.html


@@ -0,0 +1,68 @@
---
name: playwright-safe
description: Use when a page needs JavaScript rendering or moderate anti-bot handling and the agent should use a single local Playwright scraper instead of generic web fetch tooling.
---
# Playwright Safe
Single-entry Playwright scraper for dynamic or moderately bot-protected pages.
## When To Use
- Page content depends on client-side rendering
- Generic `scrape` or `webfetch` is likely to miss rendered content
- The task needs one direct page extraction with lightweight stealth behavior
## Do Not Use
- For multi-step browser workflows with login/stateful interaction
- For site-specific automation flows
- When the page can be handled by a simpler built-in fetch path
## Setup
```bash
cd ~/.openclaw/workspace/skills/playwright-safe
npm install
npx playwright install chromium
```
## Command
```bash
node scripts/playwright-safe.js "<URL>"
```
Only pass a user-provided `http` or `https` URL.
## Options
```bash
WAIT_TIME=5000 node scripts/playwright-safe.js "<URL>"
SCREENSHOT_PATH=/tmp/page.png node scripts/playwright-safe.js "<URL>"
SAVE_HTML=true node scripts/playwright-safe.js "<URL>"
HEADLESS=false node scripts/playwright-safe.js "<URL>"
USER_AGENT="Mozilla/5.0 ..." node scripts/playwright-safe.js "<URL>"
```
## Output
The script prints JSON only, suitable for direct agent consumption. Fields include:
- `requestedUrl`
- `finalUrl`
- `title`
- `content`
- `metaDescription`
- `status`
- `elapsedSeconds`
- `challengeDetected`
- optional `screenshot`
- optional `htmlFile`
## Safety Notes
- Stealth and anti-bot shaping are retained
- Chromium sandbox remains enabled
- No sandbox-disabling flags are used
- No site-specific extractors or foreign tool dependencies are used

skills/playwright-safe/package-lock.json generated Normal file

@@ -0,0 +1,59 @@
{
  "name": "playwright-safe",
  "version": "0.1.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "playwright-safe",
      "version": "0.1.0",
      "dependencies": {
        "playwright": "^1.52.0"
      }
    },
    "node_modules/fsevents": {
      "version": "2.3.2",
      "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz",
      "integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==",
      "hasInstallScript": true,
      "license": "MIT",
      "optional": true,
      "os": [
        "darwin"
      ],
      "engines": {
        "node": "^8.16.0 || ^10.6.0 || >=11.0.0"
      }
    },
    "node_modules/playwright": {
      "version": "1.58.2",
      "resolved": "https://registry.npmjs.org/playwright/-/playwright-1.58.2.tgz",
      "integrity": "sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==",
      "license": "Apache-2.0",
      "dependencies": {
        "playwright-core": "1.58.2"
      },
      "bin": {
        "playwright": "cli.js"
      },
      "engines": {
        "node": ">=18"
      },
      "optionalDependencies": {
        "fsevents": "2.3.2"
      }
    },
    "node_modules/playwright-core": {
      "version": "1.58.2",
      "resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.58.2.tgz",
      "integrity": "sha512-yZkEtftgwS8CsfYo7nm0KE8jsvm6i/PTgVtB8DL726wNf6H2IMsDuxCpJj59KDaxCtSnrWan2AeDqM7JBaultg==",
      "license": "Apache-2.0",
      "bin": {
        "playwright-core": "cli.js"
      },
      "engines": {
        "node": ">=18"
      }
    }
  }
}


@@ -0,0 +1,12 @@
{
  "name": "playwright-safe",
  "version": "0.1.0",
  "private": true,
  "description": "Single-entry Playwright scraper skill with bounded stealth behavior",
  "scripts": {
    "smoke": "node scripts/playwright-safe.js"
  },
  "dependencies": {
    "playwright": "^1.52.0"
  }
}


@@ -0,0 +1,200 @@
#!/usr/bin/env node
const fs = require("fs");
const path = require("path");

const DEFAULT_WAIT_MS = 5000;
const MAX_WAIT_MS = 20000;
const NAV_TIMEOUT_MS = 30000;
const EXTRA_CHALLENGE_WAIT_MS = 8000;
const CONTENT_LIMIT = 12000;
const DEFAULT_USER_AGENT =
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36";

function fail(message, details) {
  const payload = { error: message };
  if (details) payload.details = details;
  process.stderr.write(`${JSON.stringify(payload)}\n`);
  process.exit(1);
}

function parseWaitTime(raw) {
  const value = Number.parseInt(raw || `${DEFAULT_WAIT_MS}`, 10);
  if (!Number.isFinite(value) || value < 0) return DEFAULT_WAIT_MS;
  return Math.min(value, MAX_WAIT_MS);
}

function parseTarget(rawUrl) {
  if (!rawUrl) {
    fail("Missing URL. Usage: node scripts/playwright-safe.js <URL>");
  }
  let parsed;
  try {
    parsed = new URL(rawUrl);
  } catch (error) {
    fail("Invalid URL.", error.message);
  }
  if (!["http:", "https:"].includes(parsed.protocol)) {
    fail("Only http and https URLs are allowed.");
  }
  return parsed.toString();
}

function ensureParentDir(filePath) {
  if (!filePath) return;
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
}

async function detectChallenge(page) {
  try {
    return await page.evaluate(() => {
      const text = (document.body?.innerText || "").toLowerCase();
      return (
        text.includes("checking your browser") ||
        text.includes("just a moment") ||
        text.includes("verify you are human") ||
        text.includes("press and hold") ||
        document.querySelector('iframe[src*="challenge"]') !== null ||
        document.querySelector('iframe[src*="cloudflare"]') !== null
      );
    });
  } catch {
    return false;
  }
}

async function main() {
  const requestedUrl = parseTarget(process.argv[2]);
  const waitTime = parseWaitTime(process.env.WAIT_TIME);
  const screenshotPath = process.env.SCREENSHOT_PATH || "";
  const saveHtml = process.env.SAVE_HTML === "true";
  const headless = process.env.HEADLESS !== "false";
  const userAgent = process.env.USER_AGENT || DEFAULT_USER_AGENT;
  const startedAt = Date.now();

  let chromium;
  try {
    ({ chromium } = require("playwright"));
  } catch (error) {
    fail(
      "Playwright is not installed for this skill. Run npm install and npx playwright install chromium first.",
      error.message
    );
  }

  let browser;
  try {
    browser = await chromium.launch({
      headless,
      ignoreDefaultArgs: ["--enable-automation"],
      args: [
        "--disable-blink-features=AutomationControlled",
        "--disable-features=IsolateOrigins,site-per-process"
      ]
    });
    const context = await browser.newContext({
      userAgent,
      locale: "en-US",
      viewport: { width: 1440, height: 900 },
      extraHTTPHeaders: {
        "Accept-Language": "en-US,en;q=0.9",
        Accept: "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
      }
    });
    await context.addInitScript(() => {
      Object.defineProperty(navigator, "webdriver", {
        get: () => false
      });
      Object.defineProperty(navigator, "languages", {
        get: () => ["en-US", "en"]
      });
      Object.defineProperty(navigator, "plugins", {
        get: () => [1, 2, 3, 4, 5]
      });
      window.chrome = window.chrome || { runtime: {} };
      const originalQuery = window.navigator.permissions?.query?.bind(window.navigator.permissions);
      if (originalQuery) {
        window.navigator.permissions.query = (parameters) => {
          if (parameters?.name === "notifications") {
            return Promise.resolve({ state: Notification.permission });
          }
          return originalQuery(parameters);
        };
      }
    });

    const page = await context.newPage();
    const response = await page.goto(requestedUrl, {
      waitUntil: "domcontentloaded",
      timeout: NAV_TIMEOUT_MS
    });
    await page.waitForTimeout(waitTime);

    let challengeDetected = await detectChallenge(page);
    if (challengeDetected) {
      await page.waitForTimeout(EXTRA_CHALLENGE_WAIT_MS);
      challengeDetected = await detectChallenge(page);
    }

    const extracted = await page.evaluate((contentLimit) => {
      const bodyText = document.body?.innerText || "";
      return {
        finalUrl: window.location.href,
        title: document.title || "",
        content: bodyText.slice(0, contentLimit),
        metaDescription:
          document.querySelector('meta[name="description"]')?.content ||
          document.querySelector('meta[property="og:description"]')?.content ||
          ""
      };
    }, CONTENT_LIMIT);

    const result = {
      requestedUrl,
      finalUrl: extracted.finalUrl,
      title: extracted.title,
      content: extracted.content,
      metaDescription: extracted.metaDescription,
      status: response ? response.status() : null,
      challengeDetected,
      elapsedSeconds: ((Date.now() - startedAt) / 1000).toFixed(2)
    };

    if (screenshotPath) {
      ensureParentDir(screenshotPath);
      await page.screenshot({ path: screenshotPath, fullPage: false, timeout: 10000 });
      result.screenshot = screenshotPath;
    }
    if (saveHtml) {
      const htmlTarget =
        screenshotPath ? screenshotPath.replace(/\.[^.]+$/, ".html") : path.resolve(`page-${Date.now()}.html`);
      ensureParentDir(htmlTarget);
      fs.writeFileSync(htmlTarget, await page.content());
      result.htmlFile = htmlTarget;
    }

    process.stdout.write(`${JSON.stringify(result, null, 2)}\n`);
    await browser.close();
  } catch (error) {
    if (browser) {
      try {
        await browser.close();
      } catch {
        // Ignore close errors after the primary failure.
      }
    }
    fail("Scrape failed.", error.message);
  }
}

main();


@@ -7,6 +7,15 @@ description: Browse and scrape web pages using Playwright with Camoufox anti-det
Automated web browsing and scraping using Playwright with Camoufox anti-detection browser.
## Routing Rule
Before using this skill, classify the task:
- If the task is one-shot page extraction for title/content from a single URL, use `~/.openclaw/workspace/skills/playwright-safe/SKILL.md` instead.
- If the task needs a multi-step browser flow, authenticated session handling, or interactive navigation/click/type behavior, use this `web-automation` skill.
Do not use `web-automation` for simple single-page extraction when `playwright-safe` is available.
## Requirements
- Node.js 20+