feat: Add Chatwoot Agent Bot prototype and FAQ knowledge base

- Created chatwoot-agent-bot/ with Node.js webhook server
- Bot detects intent (greeting, billing, technical, features, account)
- Auto-responds from FAQ knowledge base or escalates to human
- FAQ-KB.md: Living knowledge base that grows with customer questions
- CHATWOOT-SETUP.md: Complete deployment and configuration guide
- Supports Telegram notifications on escalation
- Bot runs on port 3001, ready for Chatwoot webhook integration
Commit 5319bcd30b (parent 7ba19752de), 2026-04-01 16:26:05 -04:00
1074 changed files with 456376 additions and 0 deletions

skills/.DS_Store (binary file, vendored; not shown)

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "check-analytics",
"installedVersion": "0.1.0",
"installedAt": 1774179951288
}
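The `installedAt` field above is a Unix epoch timestamp in milliseconds. To read it, divide by 1000 and format (a sketch using GNU `date` syntax; on macOS use `date -u -r` instead of `-d`):

```shell
ts_ms=1774179951288                  # installedAt from the JSON above

# Convert milliseconds to seconds, then format as UTC (GNU date).
ts_s=$((ts_ms / 1000))
date -u -d "@$ts_s" +%Y-%m-%dT%H:%M:%SZ
```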


@@ -0,0 +1,92 @@
---
name: check-analytics
description: Audit existing Google Analytics implementation. Checks for common issues, missing configurations, and optimization opportunities.
---
# Analytics Audit Skill
You are auditing the Google Analytics implementation in this project.
## Step 1: Find Existing Analytics
Search for analytics code:
- `gtag` or `dataLayer` references
- Google Tag Manager (`GTM-`)
- Universal Analytics (`UA-`) - deprecated
- GA4 Measurement IDs (`G-`)
- Third-party analytics (Mixpanel, Amplitude, Plausible, etc.)
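The search above can be sketched as a couple of grep passes (a minimal sketch; the fixture path and file contents are illustrative, not part of the skill):

```shell
# Fixture standing in for a real project (illustrative only).
mkdir -p /tmp/analytics-audit-demo
cat > /tmp/analytics-audit-demo/index.html <<'EOF'
<script async src="https://www.googletagmanager.com/gtag/js?id=G-ABC123XYZ9"></script>
<script>window.dataLayer = window.dataLayer || [];</script>
EOF

# Pass 1: files referencing gtag or the dataLayer
grep -rlE 'gtag|dataLayer' /tmp/analytics-audit-demo

# Pass 2: extract GA4 (G-), GTM (GTM-), and deprecated UA (UA-) IDs
grep -rhoE '(G|GTM|UA)-[A-Z0-9-]+' /tmp/analytics-audit-demo | sort -u
```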
## Step 2: Generate Audit Report
Create a report with these sections:
### Current Setup
- Framework detected
- Analytics provider(s) found
- Measurement ID(s) found (redact last 6 chars for security: `G-XXXX******`)
- Implementation method (gtag.js, GTM, npm package)
### Issues Found
Check for:
1. **Deprecated UA properties** - Universal Analytics stopped processing data in July 2023 (July 2024 for 360 properties)
2. **Missing pageview tracking** for SPAs
3. **Hardcoded Measurement IDs** (should use env vars)
4. **Missing TypeScript types** for gtag
5. **No consent mode** implementation
6. **Debug mode in production** (check for `debug_mode: true`)
7. **Duplicate script loading**
8. **Missing error boundaries** around analytics code
9. **Blocking script loading** (should be async)
10. **No fallback** for ad-blocker scenarios
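Some of these checks reduce to one-liners. For example, items 3 and 6 can be flagged like this (a sketch against a throwaway fixture; a real audit would scan the project source):

```shell
# Fixture with both problems present (illustrative only).
mkdir -p /tmp/ga-check
cat > /tmp/ga-check/app.js <<'EOF'
gtag('config', 'G-HARDCODED11', { debug_mode: true });
EOF

# Issue 3: Measurement IDs committed in source instead of read from env vars
grep -rnE "['\"]G-[A-Z0-9]{6,}['\"]" /tmp/ga-check && echo "WARN: hardcoded Measurement ID"

# Issue 6: debug mode left on (should never ship to production)
grep -rn 'debug_mode[[:space:]]*:[[:space:]]*true' /tmp/ga-check && echo "WARN: debug_mode enabled"
```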
### Recommendations
Provide actionable fixes ranked by priority:
- 🔴 Critical (breaking/deprecated)
- 🟡 Warning (best practice violations)
- 🟢 Suggestion (optimizations)
### Event Coverage Analysis
List custom events being tracked and suggest missing ones:
- Sign up / Login events
- Purchase/conversion events
- Form submissions
- Error tracking
- Key user interactions
## Output Format
```markdown
# Analytics Audit Report
## Summary
- **Status**: [Healthy / Needs Attention / Critical Issues]
- **Provider**: [GA4 / GTM / Other]
- **Framework**: [detected framework]
## Current Implementation
[describe what was found]
## Issues
### 🔴 Critical
[list critical issues]
### 🟡 Warnings
[list warnings]
### 🟢 Suggestions
[list suggestions]
## Event Coverage
| Event Type | Status | Recommendation |
|------------|--------|----------------|
| Page Views | ✅ | - |
| Sign Up | ❌ | Add sign_up event |
| ... | ... | ... |
## Next Steps
1. [ordered action items]
```


@@ -0,0 +1,6 @@
{
"ownerId": "kn7a4sqttds99g97yye7djqy1980dqy0",
"slug": "check-analytics",
"version": "0.1.0",
"publishedAt": 1770054071458
}


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "gog",
"installedVersion": "1.0.0",
"installedAt": 1774004673532
}

skills/gog/SKILL.md (36 lines)

@@ -0,0 +1,36 @@
---
name: gog
description: Google Workspace CLI for Gmail, Calendar, Drive, Contacts, Sheets, and Docs.
homepage: https://gogcli.sh
metadata: {"clawdbot":{"emoji":"🎮","requires":{"bins":["gog"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/gogcli","bins":["gog"],"label":"Install gog (brew)"}]}}
---
# gog
Use `gog` for Gmail/Calendar/Drive/Contacts/Sheets/Docs. Requires OAuth setup.
## Setup (once)
- `gog auth credentials /path/to/client_secret.json`
- `gog auth add you@gmail.com --services gmail,calendar,drive,contacts,sheets,docs`
- `gog auth list`
## Common commands
- Gmail search: `gog gmail search 'newer_than:7d' --max 10`
- Gmail send: `gog gmail send --to a@b.com --subject "Hi" --body "Hello"`
- Calendar: `gog calendar events <calendarId> --from <iso> --to <iso>`
- Drive search: `gog drive search "query" --max 10`
- Contacts: `gog contacts list --max 20`
- Sheets get: `gog sheets get <sheetId> "Tab!A1:D10" --json`
- Sheets update: `gog sheets update <sheetId> "Tab!A1:B2" --values-json '[["A","B"],["1","2"]]' --input USER_ENTERED`
- Sheets append: `gog sheets append <sheetId> "Tab!A:C" --values-json '[["x","y","z"]]' --insert INSERT_ROWS`
- Sheets clear: `gog sheets clear <sheetId> "Tab!A2:Z"`
- Sheets metadata: `gog sheets metadata <sheetId> --json`
- Docs export: `gog docs export <docId> --format txt --out /tmp/doc.txt`
- Docs cat: `gog docs cat <docId>`
## Notes
- Set `GOG_ACCOUNT=you@gmail.com` to avoid repeating `--account`.
- For scripting, prefer `--json` plus `--no-input`.
- Sheets values can be passed via `--values-json` (recommended) or as inline rows.
- Docs supports export/cat/copy. In-place edits require a Docs API client (not in gog).
- Confirm before sending mail or creating events.

skills/gog/_meta.json (6 lines)

@@ -0,0 +1,6 @@
{
"ownerId": "kn70pywhg0fyz996kpa8xj89s57yhv26",
"slug": "gog",
"version": "1.0.0",
"publishedAt": 1767545346060
}


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "proactivity",
"installedVersion": "1.0.1",
"installedAt": 1773346859772
}

skills/proactivity/SKILL.md (136 lines)

@@ -0,0 +1,136 @@
---
name: Proactivity (Proactive Agent)
slug: proactivity
version: 1.0.1
homepage: https://clawic.com/skills/proactivity
description: Anticipates needs, keeps work moving, and improves through use so the agent gets more proactive over time.
changelog: "Strengthens proactive behavior with reverse prompting, self-healing, working-buffer recovery, and clearer SOUL and AGENTS setup."
metadata: {"clawdbot":{"emoji":"⚡","requires":{"bins":[]},"os":["linux","darwin","win32"],"configPaths":["~/proactivity/"],"configPaths.optional":["./AGENTS.md","./TOOLS.md","./SOUL.md","./HEARTBEAT.md"]}}
---
## Architecture
Proactive state lives in `~/proactivity/` and separates durable boundaries from active work. If that folder is missing or empty, run `setup.md`.
```
~/proactivity/
├── memory.md # Stable activation and boundary rules
├── session-state.md # Current task, last decision, next move
├── heartbeat.md # Lightweight recurring checks
├── patterns.md # Reusable proactive moves that worked
├── log.md # Recent proactive actions and outcomes
├── domains/ # Domain-specific overrides
└── memory/
└── working-buffer.md # Volatile breadcrumbs for long tasks
```
## When to Use
Use when the user wants the agent to think ahead, anticipate needs, keep momentum without waiting for prompts, recover context fast, and follow through like a strong operator.
## Quick Reference
| Topic | File |
|-------|------|
| Setup guide | `setup.md` |
| Memory template | `memory-template.md` |
| Migration guide | `migration.md` |
| Opportunity signals | `signals.md` |
| Execution patterns | `execution.md` |
| Boundary rules | `boundaries.md` |
| State routing | `state.md` |
| Recovery flow | `recovery.md` |
| Heartbeat rules | `heartbeat-rules.md` |
## Core Rules
### 1. Work Like a Proactive Partner, Not a Prompt Follower
- Notice what is likely to matter next.
- Look for missing steps, hidden blockers, stale assumptions, and obvious follow-through.
- Ask "what would genuinely help now?" before waiting for another prompt.
### 2. Use Reverse Prompting
- Surface ideas, checks, drafts, and next steps the user did not think to ask for.
- Good reverse prompting is concrete and timely, never vague or noisy.
- If there is no clear value, stay quiet.
### 3. Keep Momentum Alive
- Leave the next useful move after meaningful work.
- Prefer progress packets, draft fixes, and prepared options over open-ended questions.
- Do not let work stall just because the user has not spoken again yet.
### 4. Recover Fast When Context Gets Fragile
- Use session state and the working buffer to survive long tasks, interruptions, and compaction.
- Reconstruct recent work before asking the user to restate it.
- If recovery still leaves ambiguity, ask only for the missing delta.
### 5. Practice Relentless Resourcefulness
- Try multiple reasonable approaches before escalating.
- Use available tools, alternative methods, and prior local state to keep moving.
- Escalate with evidence, what was tried, and the best next step.
### 6. Self-Heal Before Complaining
- When a workflow breaks, first diagnose, adapt, retry, or downgrade gracefully.
- Fix local process issues that are safe to fix.
- Do not normalize repeated friction if a better path can be established.
### 7. Check In Proactively Inside Clear Boundaries
- Heartbeat should follow up on stale blockers, promises, deadlines, and likely missed steps.
- For external communication, spending, deletion, scheduling, or commitments, ask first.
- Never overstep quietly and never fake certainty.
## Common Traps
| Trap | Why It Fails | Better Move |
|------|--------------|-------------|
| Waiting for the next prompt | Makes the agent feel passive | Push the next useful move |
| Asking the user to restate recent work | Feels forgetful and lazy | Run recovery first |
| Surfacing every idea | Creates alert fatigue | Use reverse prompting only when value is clear |
| Giving up after one failed attempt | Feels weak and dependent | Try multiple approaches before escalating |
| Acting externally because it feels obvious | Breaks trust | Ask before any external action |
## Scope
This skill ONLY:
- creates and maintains local proactive state in `~/proactivity/`
- proposes workspace integration for AGENTS, TOOLS, SOUL, and HEARTBEAT when the user explicitly wants it
- uses heartbeat follow-through only within learned boundaries
This skill NEVER:
- edits any file outside `~/proactivity/` without explicit user approval in that session
- applies hidden workspace changes without showing the exact proposed lines first
- sends messages, spends money, deletes data, or makes commitments without approval
- stores sensitive user data in proactive state files
## Data Storage
Local state lives in `~/proactivity/`:
- stable memory for durable boundaries and activation preferences
- session state for the current objective, blocker, and next move
- heartbeat state for recurring follow-up items
- reusable patterns for proactive wins that worked
- action log for recent proactive actions and outcomes
- working buffer for volatile recovery breadcrumbs
## Security & Privacy
- This skill stores local operating notes in `~/proactivity/`.
- It does not require network access by itself.
- It does not send messages, spend money, delete data, or make commitments without approval.
- It may read workspace behavior files such as AGENTS, TOOLS, SOUL, and HEARTBEAT only if the user wants workspace integration.
- Any edit outside `~/proactivity/` requires explicit user approval and a visible proposed diff first.
- It never modifies its own `SKILL.md`.
## Related Skills
Install with `clawhub install <slug>` if user confirms:
- `self-improving` - Learn reusable execution lessons from corrections and reflection
- `heartbeat` - Run lightweight recurring checks and follow-through loops
- `calendar-planner` - Turn proactive timing into concrete calendar decisions
- `skill-finder` - Discover adjacent skills when a task needs more than proactivity
## Feedback
- If useful: `clawhub star proactivity`
- Stay updated: `clawhub sync`


@@ -0,0 +1,6 @@
{
"ownerId": "kn73vp5rarc3b14rc7wjcw8f8580t5d1",
"slug": "proactivity",
"version": "1.0.1",
"publishedAt": 1773074783565
}


@@ -0,0 +1,42 @@
# Boundary Learning
Proactivity is only useful when the user can predict the line it will not cross.
## Learn the Boundary Once
When a new proactive action appears, ask with a specific action:
```text
I could watch CI failures and prepare fixes automatically.
Should I do that automatically, suggest first, always ask, or skip it?
```
Record the answer in the stable proactivity memory, then reuse it.
## Default Ladder
| Level | Meaning | Typical examples |
|-------|---------|------------------|
| DO | Safe internal work | research, drafts, checks, local prep |
| SUGGEST | Useful but user-visible | fix proposals, scheduling suggestions |
| ASK | Needs approval first | send, buy, delete, reschedule, notify |
| NEVER | Off-limits | contact people, commit on their behalf |
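The ladder can be recorded and consulted mechanically. A minimal sketch (the file format, helper name, and action names are illustrative conventions, not part of the skill spec):

```shell
# Record learned boundaries as "action<TAB>level" lines in stable memory.
BOUNDS=/tmp/proactivity-demo-boundaries.tsv
printf '%s\t%s\n' \
  'watch-ci-failures'   'DO' \
  'propose-schedule'    'SUGGEST' \
  'send-email'          'ASK' \
  'contact-third-party' 'NEVER' > "$BOUNDS"

# Look up an action; unknown actions default to ASK (the safest level).
boundary_for() {
  level=$(awk -F'\t' -v a="$1" '$1 == a { print $2 }' "$BOUNDS")
  echo "${level:-ASK}"
}

boundary_for send-email        # -> ASK
boundary_for unknown-action    # -> ASK (default)
```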
## Good Boundary Questions
- One action at a time
- Specific domain and outcome
- Easy answer with four clear levels
## Bad Boundary Questions
- Broad prompts with no action
- Hidden bundles of multiple actions
- Questions that rely on silence as approval
## Conflict Rules
- Most specific rule wins
- Recent explicit user instruction beats older memory
- Temporary rules expire when the situation ends
- If two rules still conflict, ask once and update memory


@@ -0,0 +1,86 @@
# Execution Patterns
## The Proactive Loop
```text
1. NOTICE -> Spot the need, blocker, or opening
2. RECOVER -> Rebuild current state if needed
3. CHECK -> Read boundary and domain rules
4. EXPLORE -> Try useful paths, tools, and alternatives
5. DECIDE -> DO / SUGGEST / ASK / NEVER
6. ACT -> Execute or present the next move
7. HAND OFF -> Leave the next useful step in state
```
## Execution by Level
### DO
- Safe internal work
- Reversible preparation
- Drafts, checks, and structured follow-through
### SUGGEST
- Best when the move is useful but changes user-visible work
- Present trigger, recommendation, and expected outcome
### ASK
- Use for external communication, commitments, spending, deletion, and schedule changes
- Offer options if there is more than one reasonable move
### NEVER
- Do not perform or imply the action without explicit approval
## Message Shape
Good proactive output is short and concrete:
```text
Trigger: CI failed on missing env var
Best move: add DATABASE_URL to the deployment secret set
Next step: I can prepare the exact change if you want
```
Bad proactive output is vague:
```text
Something might need attention. What should I do?
```
## Reverse Prompting
Use reverse prompting when the user would benefit from:
- a next step they did not ask for
- a check that prevents avoidable rework
- a draft that removes friction
- a decision packet with clear options
Bad reverse prompting is random brainstorming.
Good reverse prompting feels like strong judgment.
## Relentless Execution
Before escalating:
1. Try the direct path
2. Try an alternative tool or method
3. Search local state for similar work
4. Verify the mechanism, not just the intent
5. Gather enough evidence to make a recommendation
6. Escalate only with a specific next step
## Self-Healing
When the process itself breaks:
1. Diagnose the failure mode
2. Try a safe recovery path
3. Downgrade gracefully if the ideal path is blocked
4. Update state so the same confusion does not repeat
5. Escalate only after meaningful attempts
## Output Hygiene
- Leave one clear next move in the session-state file
- Log outcomes in the action log
- Promote repeat wins to the reusable pattern log


@@ -0,0 +1,37 @@
# Heartbeat Rules
Heartbeat proactivity should protect momentum, not create noise.
## Good Heartbeat Checks
- promised follow-ups that are now due
- stale blockers that may be unblocked
- deadlines or reviews approaching soon
- active work with no clear next step
- moments where a proactive suggestion would remove avoidable friction
## Message Only When
- something changed
- the user needs to decide
- a prepared draft or recommendation is ready
- the cost of waiting is real
- the check-in clearly helps more than it interrupts
## Stay Silent When
- the item is unchanged
- the signal is weak
- the action is still vague
- the update would only repeat old information
## Logging
- move recurring checks to heartbeat state
- record outcomes in the action log
- promote repeat wins to the reusable pattern log
## Behavior Standard
Heartbeat is not just monitoring.
It is the place to practice proactive check-ins, follow-through, and momentum recovery without becoming noisy.


@@ -0,0 +1,78 @@
# Memory Template - Proactivity
Create `~/proactivity/memory.md` with this structure:
```markdown
# Proactivity Memory
## Status
status: ongoing
version: 1.0.1
last: YYYY-MM-DD
integration: pending | complete | paused | never_ask
## Activation Preferences
- When this skill should auto-activate
- Whether it should jump in on blocked work, context drift, or missing next steps
- Quiet hours, batching, and message style
## Action Boundaries
- Safe actions it may do automatically
- Actions it should suggest first
- Actions that always require approval
- Actions it should never take
## State Rules
- What belongs in the session-state file
- When the working-buffer file should be used
- When active state should be cleared or refreshed
## Heartbeat Behavior
- What should be re-checked in the background
- Which changes deserve a message
- What should stay silent unless asked
## Notes
- Durable operating preferences
- Reliable trigger patterns
- Boundary exceptions worth keeping
---
*Updated: YYYY-MM-DD*
```
## Status Values
| Value | Meaning | Behavior |
|-------|---------|----------|
| `ongoing` | Setup still evolving | Keep learning useful boundaries |
| `complete` | Stable proactivity setup | Focus on execution and follow-through |
| `paused` | User wants less proactivity | Run only on explicit request |
| `never_ask` | User does not want setup prompts | Stop proactive setup questions |
## Local Files to Initialize
```bash
mkdir -p ~/proactivity/{domains,memory}
touch ~/proactivity/{memory.md,session-state.md,heartbeat.md,patterns.md,log.md}
touch ~/proactivity/memory/working-buffer.md
```
## Templates for Other Files
`session-state.md`
```markdown
# Session State
- Current objective
- Last confirmed decision
- Blocker or open question
- Next useful move
```
`heartbeat.md`
```markdown
# Heartbeat
- Active follow-ups worth re-checking
- Recurring checks that should stay lightweight
- Conditions that justify messaging the user
```


@@ -0,0 +1,41 @@
# Migration Guide - Proactivity
## v1.0.1 Architecture Update
This update keeps the same home folder, `~/proactivity/`, and preserves existing files.
The new version adds active-state files for recovery and follow-through.
### Before
- `~/proactivity/memory.md`
- `~/proactivity/domains/`
- `~/proactivity/patterns.md`
- `~/proactivity/log.md`
### After
- `~/proactivity/memory.md`
- `~/proactivity/session-state.md`
- `~/proactivity/heartbeat.md`
- `~/proactivity/patterns.md`
- `~/proactivity/log.md`
- `~/proactivity/domains/`
- `~/proactivity/memory/working-buffer.md`
## Safe Migration
1. Create the new files without deleting the old ones:
```bash
mkdir -p ~/proactivity/memory
touch ~/proactivity/session-state.md
touch ~/proactivity/heartbeat.md
touch ~/proactivity/memory/working-buffer.md
```
2. Keep `memory.md`, `patterns.md`, and `log.md` exactly as they are.
3. If old proactive rules live in free-form notes, copy them into the new sections in `memory.md`.
4. Start writing only live task state to session state and working buffer.
5. Do not delete or rename any legacy file unless the user explicitly asks for cleanup later.


@@ -0,0 +1,28 @@
# Recovery Flow
When context gets long, interrupted, or stale, recover before asking the user to repeat work.
## Recovery Order
1. Read stable proactivity memory
2. Read session state
3. Read the working buffer if the task was long or tool-heavy
4. Reconstruct:
- current objective
- last confirmed decision
- blocker or missing input
- next useful move
## Good Recovery
- "Last agreed move was to draft the fix, and the blocker is missing deploy access. I can prepare the patch now."
## Bad Recovery
- "Can you remind me what we were doing?"
## When to Ask the User
- a required input is still missing after recovery
- local state conflicts with a newer instruction
- the task changed direction and old state is no longer trustworthy

skills/proactivity/setup.md (135 lines)

@@ -0,0 +1,135 @@
# Setup - Proactivity
Use this file when `~/proactivity/` is missing or empty.
Do not turn setup into a long interview. Install the behavior path first, keep helping immediately, and personalize through real use.
## Immediate First-Run Actions
### 1. Add the SOUL steering early
If a workspace SOUL file exists, prepare the proactivity steering early. Do not bury this behind preference questions.
Show the exact snippet immediately and wait for explicit approval before writing it.
```markdown
**Proactivity**
Being proactive is part of the job, not an extra.
Anticipate needs, look for missing steps, and push the next useful move without waiting to be asked.
Use reverse prompting when a suggestion, draft, check, or option would genuinely help.
Recover active state before asking the user to restate work.
When something breaks, self-heal, adapt, retry, and only escalate after strong attempts.
Stay quiet instead of creating vague or noisy proactivity.
```
### 2. Refine the AGENTS routing early
If a workspace AGENTS file exists, prepare the proactive routing early and non-destructively.
Show the exact snippet immediately and wait for explicit approval before writing it.
Add this continuity line next to other memory sources:
```markdown
- **Proactivity:** `~/proactivity/` (via `proactivity` skill) - proactive operating state, action boundaries, active task recovery, and follow-through rules
```
Right after the sentence "Capture what matters...", add:
```markdown
Use ~/proactivity/memory.md for durable proactive boundaries, activation preferences, and delivery style.
Use ~/proactivity/session-state.md for the current objective, last decision, blocker, and next move.
Use ~/proactivity/memory/working-buffer.md for volatile breadcrumbs during long or fragile tasks.
Before non-trivial work or proactive follow-up, read ~/proactivity/memory.md and ~/proactivity/session-state.md, then load the working buffer only when recovery risk is high.
Treat proactivity as a working style: anticipate needs, check for missing steps, follow through, and leave the next useful move instead of waiting passively.
```
Before the "Write It Down" subsection, add:
```markdown
Before any non-trivial task:
- Read ~/proactivity/memory.md
- Read ~/proactivity/session-state.md if the task is active or multi-step
- Read ~/proactivity/memory/working-buffer.md if context is long, fragile, or likely to drift
- Recover from local state before asking the user to repeat recent work
- Check whether there is an obvious blocker, next step, or useful suggestion the user has not asked for yet
- Leave one clear next move in state before the final response when work is ongoing
```
Inside the "Write It Down" bullets, refine behavior:
```markdown
- Durable proactive preference or boundary -> append to ~/proactivity/memory.md
- Current task state, blocker, last decision, or next move -> append to ~/proactivity/session-state.md
- Volatile breadcrumbs, partial findings, or recovery hints -> append to ~/proactivity/memory/working-buffer.md
- Repeat proactive win worth reusing -> append to ~/proactivity/patterns.md
- Proactive action taken or suggested -> append to ~/proactivity/log.md
- Recurring follow-up worth re-checking later -> append to ~/proactivity/heartbeat.md
```
### 3. Add the HEARTBEAT check early
If a workspace HEARTBEAT file exists, prepare the proactive check-in loop early.
Show the exact snippet immediately and wait for explicit approval before writing it.
```markdown
## Proactivity Check
- Read ~/proactivity/heartbeat.md
- Re-check active blockers, promised follow-ups, stale work, and missing decisions
- Ask what useful check-in or next move would help right now
- Message the user only when something changed or needs a decision
- Update ~/proactivity/session-state.md after meaningful follow-through
```
### 4. Add the TOOLS guidance early
Do not edit the workspace TOOLS file automatically.
Show the exact snippet immediately and wait for explicit approval before writing it.
```markdown
## Proactive Tool Use
- Prefer safe internal work, drafts, checks, and preparation before escalating
- Use tools to keep work moving when the next step is clear and reversible
- Try multiple approaches and alternative tools before asking for help
- Use tools to test assumptions, verify mechanisms, and uncover blockers early
- For send, spend, delete, reschedule, or contact actions, stop and ask first
- If a tool result changes active work, update ~/proactivity/session-state.md
```
### 5. Create local state once the routing is in place
Create the local folder and baseline files after the behavior path is installed:
```bash
mkdir -p ~/proactivity/{domains,memory}
touch ~/proactivity/{memory.md,session-state.md,heartbeat.md,patterns.md,log.md}
touch ~/proactivity/memory/working-buffer.md
chmod 700 ~/proactivity ~/proactivity/domains ~/proactivity/memory
chmod 600 ~/proactivity/{memory.md,session-state.md,heartbeat.md,patterns.md,log.md}
chmod 600 ~/proactivity/memory/working-buffer.md
```
If `~/proactivity/memory.md` is empty, initialize it from `memory-template.md`.
### 6. Personalize lightly while helping
Do not run a long onboarding interview.
Default to a useful proactive baseline and learn from real use:
- suggest the next step when it would remove friction
- check for blockers, follow-ups, and missing decisions
- keep trying different approaches before escalating
- ask before external, irreversible, public, or third-party-impacting work
Ask at most one short question only when the answer materially changes the behavior.
### 7. What to save
- activation preferences and quiet hours
- action boundaries by domain
- active work state and recovery hints
- follow-up items that deserve heartbeat review
- proactive moves that worked well enough to reuse


@@ -0,0 +1,42 @@
# Opportunity Signals
Useful proactivity starts with strong triggers, not generic enthusiasm.
## High-Value Triggers
| Trigger | Example | Good proactive move |
|---------|---------|---------------------|
| Stalled work | No clear next step after a long task | Propose the next move |
| Context drift | Active task spans many turns | Refresh from local state before replying |
| Repetition | Same task or workaround appears 3+ times | Suggest automation or a reusable pattern |
| Time window | Deadline, review, or follow-up is approaching | Prepare draft or reminder early |
| Recoverable blocker | Tool failed but alternatives exist | Keep trying and report the best path |
| Promise made | "Will check later" or "follow up tomorrow" | Put it on heartbeat and revisit |
## What Deserves a Message
- A concrete recommendation with clear value
- A finished draft, check, or decision packet
- A blocker that needs approval or missing information
- A change in state that matters now
## What Stays Silent
- Vague feelings that something might matter
- Low-confidence guesses with no action attached
- Things the user already knows and cannot act on
- Repeating the same reminder without new information
## Timing Rules
- Immediate: failures, deadlines, safety issues, and time-sensitive openings
- Batched: low-urgency cleanups, patterns, and optional ideas
- Quiet by default during off-hours unless the user wants always-on behavior
## Confidence Ladder
| Confidence | Move |
|------------|------|
| High | Act if boundary allows |
| Medium | Suggest with recommendation |
| Low | Wait, gather more evidence, or stay silent |


@@ -0,0 +1,36 @@
# State Routing
Proactivity works best when stable memory and live task state stay separate.
## Use Stable Memory For
- durable activation preferences
- action boundaries that should persist
- batching, timing, and style preferences
- recurring rules the user expects later
## Use Session State For
- current objective
- last confirmed decision
- current blocker
- next useful move
## Use the Working Buffer For
- volatile breadcrumbs during long tasks
- partial findings not ready for durable memory
- recovery hints after tool-heavy work
- temporary notes that should be cleared later
## Use Heartbeat State For
- promised follow-ups
- stale blockers worth re-checking
- recurring checks that should stay lightweight
- triggers that justify messaging the user
## Routing Rule
If the note should still matter next week, it belongs in stable memory.
If it matters for the current task only, it belongs in active state.
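The routing rule above can be sketched as a tiny helper (the `note` function, its `--durable` flag, and the demo directory are illustrative conventions; the skill itself uses `~/proactivity/`):

```shell
STATE_DIR=/tmp/proactivity-demo      # stand-in for ~/proactivity/
mkdir -p "$STATE_DIR"

# Route a note: durable notes go to stable memory, everything else to session state.
note() {
  if [ "$1" = "--durable" ]; then
    shift; printf '%s\n' "$*" >> "$STATE_DIR/memory.md"
  else
    printf '%s\n' "$*" >> "$STATE_DIR/session-state.md"
  fi
}

note --durable "Never send mail without asking"   # still matters next week
note "Blocker: waiting on deploy access"          # current task only
```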


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "searxng",
"installedVersion": "1.0.3",
"installedAt": 1773578353345
}


@@ -0,0 +1,38 @@
# Changelog
All notable changes to the SearXNG skill will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.0.1] - 2026-01-26
### Changed
- **Security:** Changed default SEARXNG_URL from hardcoded private URL to generic `http://localhost:8080`
- **Configuration:** Made SEARXNG_URL required configuration (no private default)
- Updated all documentation to emphasize configuration requirement
- Removed hardcoded private URL from all documentation
### Security
- Eliminated exposure of private SearXNG instance URL in published code
## [1.0.0] - 2026-01-26
### Added
- Initial release
- Web search via local SearXNG instance
- Multiple search categories (general, images, videos, news, map, music, files, it, science)
- Time range filters (day, week, month, year)
- Rich table output with result snippets
- JSON output mode for programmatic use
- SSL self-signed certificate support
- Configurable SearXNG instance URL via SEARXNG_URL env var
- Comprehensive error handling
- Rich CLI with argparse
### Features
- Privacy-focused (all searches local)
- No API keys required
- Multi-engine result aggregation
- Beautiful formatted output
- Language selection support

skills/searxng/PUBLISH.md (147 lines)

@@ -0,0 +1,147 @@
# Publishing SearXNG Skill to ClawdHub
## ✅ Pre-Publication Verification
All files present:
- [x] SKILL.md (v1.0.1)
- [x] README.md
- [x] LICENSE (MIT)
- [x] CHANGELOG.md
- [x] scripts/searxng.py
- [x] .clawdhub/metadata.json
Security:
- [x] No hardcoded private URLs
- [x] Generic default (http://localhost:8080)
- [x] Fully configurable via SEARXNG_URL
Author:
- [x] Updated to: Avinash Venkatswamy
## 📤 Publishing Steps
### Step 1: Login to ClawdHub
```bash
clawdhub login
```
This will open your browser. Complete the authentication flow.
### Step 2: Verify Authentication
```bash
clawdhub whoami
```
Should return your user info if logged in successfully.
### Step 3: Publish the Skill
From the workspace root:
```bash
cd ~/clawd
clawdhub publish skills/searxng
```
Or from the skill directory:
```bash
cd ~/clawd/skills/searxng
clawdhub publish .
```
### Step 4: Verify Publication
After publishing, you can:
**Search for your skill:**
```bash
clawdhub search searxng
```
**View on ClawdHub:**
Visit https://clawdhub.com/skills/searxng
## 📋 What Gets Published
The CLI will upload:
- SKILL.md
- README.md
- LICENSE
- CHANGELOG.md
- scripts/ directory
- .clawdhub/metadata.json
It will NOT upload:
- PUBLISH.md (this file)
- PUBLISHING_CHECKLIST.md
- Any .git files
- Any node_modules or temporary files
## 🔧 If Publishing Fails
### Common Issues
1. **Not logged in:**
```bash
clawdhub login
```
2. **Invalid skill structure:**
- Verify SKILL.md has all required fields
- Check .clawdhub/metadata.json is valid JSON
3. **Duplicate slug:**
- If "searxng" is taken, you'll need a different name
- Update `name` in SKILL.md and metadata.json
4. **Network issues:**
- Check your internet connection
- Try again: `clawdhub publish skills/searxng`
### Get Help
```bash
clawdhub publish --help
```
## 📊 After Publishing
### Update Notifications
If you make changes later:
1. Update version in SKILL.md and metadata.json
2. Add entry to CHANGELOG.md
3. Run: `clawdhub publish skills/searxng`
### Manage Your Skill
**Delete (soft-delete):**
```bash
clawdhub delete searxng
```
**Undelete:**
```bash
clawdhub undelete searxng
```
## 🎉 Success!
Once published, users can install with:
```bash
clawdhub install searxng
```
Your skill will appear:
- On ClawdHub website: https://clawdhub.com
- In search results: `clawdhub search privacy`
- In explore: `clawdhub explore`
---
**Ready to publish?** Run `clawdhub login` and then `clawdhub publish skills/searxng`!


@@ -0,0 +1,111 @@
# ClawdHub Publishing Checklist
## ✅ Pre-Publication Checklist
### Required Files
- [x] `SKILL.md` - Skill definition with metadata
- [x] `README.md` - Comprehensive documentation
- [x] `LICENSE` - MIT License
- [x] `CHANGELOG.md` - Version history
- [x] `scripts/searxng.py` - Main implementation
- [x] `.clawdhub/metadata.json` - ClawdHub metadata
### SKILL.md Requirements
- [x] `name` field
- [x] `description` field
- [x] `author` field
- [x] `version` field
- [x] `homepage` field
- [x] `triggers` keywords (optional but recommended)
- [x] `metadata` with emoji and requirements
### Code Quality
- [x] Script executes successfully
- [x] Error handling implemented
- [x] Dependencies documented (inline PEP 723)
- [x] Help text / usage instructions
- [x] Clean, readable code
### Documentation
- [x] Clear description of what it does
- [x] Prerequisites listed
- [x] Installation instructions
- [x] Usage examples (CLI + conversational)
- [x] Configuration options
- [x] Troubleshooting section
- [x] Feature list
### Testing
- [x] Tested with target system (SearXNG)
- [x] Basic search works
- [x] Category search works
- [x] JSON output works
- [x] Error cases handled gracefully
- [ ] Tested on different SearXNG instances (optional)
- [ ] Tested with authenticated SearXNG (optional)
### Metadata
- [x] Version number follows semver
- [x] Author attribution
- [x] License specified
- [x] Tags/keywords for discovery
- [x] Prerequisites documented
## ⚠️ Optional Improvements
### Nice to Have (not blocking)
- [ ] CI/CD for automated testing
- [ ] Multiple example configurations
- [ ] Screenshot/demo GIF
- [ ] Video demonstration
- [ ] Integration tests
- [ ] Authentication support (for private instances)
- [ ] Config file support (beyond env vars)
- [ ] Auto-discovery of local SearXNG instances
### Future Enhancements
- [ ] Result caching
- [ ] Search history
- [ ] Favorite searches
- [ ] Custom result templates
- [ ] Export results to various formats
- [ ] Integration with other Clawdbot skills
## 🚀 Publishing Steps
1. **Review all files** - Make sure everything is polished
2. **Test one more time** - Fresh installation test
3. **Version bump if needed** - Update SKILL.md, metadata.json, CHANGELOG.md
4. **Git commit** - Clean commit message
5. **Submit to ClawdHub** - Follow ClawdHub submission process
6. **Monitor feedback** - Be ready to address issues
## 📝 Current Status
**Ready for publication:** ✅ YES
**Confidence level:** High
**Known limitations:**
- Requires a running SearXNG instance (clearly documented)
- SSL verification disabled for self-signed certs (by design)
- No authentication support yet (acceptable for v1.0.0)
**Recommended for:** Users who:
- Value privacy
- Run their own SearXNG instance
- Want to avoid commercial search APIs
- Need local/offline search capability
## 🎯 Next Steps
1. **Publish to ClawdHub** - Skill is ready!
2. **Gather user feedback** - Real-world usage
3. **Plan v1.1.0** - Authentication support, more features
4. **Community contributions** - Accept PRs for improvements
---
**Assessment:** This skill is publication-ready! 🎉
All critical requirements are met, documentation is excellent, and the code works reliably.

skills/searxng/README.md

@@ -0,0 +1,168 @@
# SearXNG Search Skill for Clawdbot
Privacy-respecting web search using your local SearXNG instance.
## Prerequisites
**This skill requires a running SearXNG instance.**
If you don't have SearXNG set up yet:
1. **Docker (easiest)**:
```bash
docker run -d -p 8080:8080 searxng/searxng
```
2. **Manual installation**: Follow the [official guide](https://docs.searxng.org/admin/installation.html)
3. **Public instances**: Use any public SearXNG instance (less private)
## Features
- 🔒 **Privacy-focused**: Uses your local SearXNG instance
- 🌐 **Multi-engine**: Aggregates results from multiple search engines
- 📰 **Multiple categories**: Web, images, news, videos, and more
- 🎨 **Rich output**: Beautiful table formatting with result snippets
- 🚀 **Fast JSON mode**: Programmatic access for scripts and integrations
## Quick Start
### Basic Search
```
Search "python asyncio tutorial"
```
### Advanced Usage
```
Search "climate change" with 20 results
Search "cute cats" in images category
Search "breaking news" in news category from last day
```
## Configuration
**The skill defaults to `http://localhost:8080`. If your SearXNG instance runs elsewhere, set `SEARXNG_URL` before using this skill.**
### Set Your SearXNG Instance
Configure the `SEARXNG_URL` environment variable in your Clawdbot config:
```json
{
"env": {
"SEARXNG_URL": "https://your-searxng-instance.com"
}
}
```
Or export it in your shell:
```bash
export SEARXNG_URL=https://your-searxng-instance.com
```
## Direct CLI Usage
You can also use the skill directly from the command line:
```bash
# Basic search
uv run ~/clawd/skills/searxng/scripts/searxng.py search "query"
# More results
uv run ~/clawd/skills/searxng/scripts/searxng.py search "query" -n 20
# Category search
uv run ~/clawd/skills/searxng/scripts/searxng.py search "query" --category images
# JSON output (for scripts)
uv run ~/clawd/skills/searxng/scripts/searxng.py search "query" --format json
# Time-filtered news
uv run ~/clawd/skills/searxng/scripts/searxng.py search "latest AI news" --category news --time-range day
```
## Available Categories
- `general` - General web search (default)
- `images` - Image search
- `videos` - Video search
- `news` - News articles
- `map` - Maps and locations
- `music` - Music and audio
- `files` - File downloads
- `it` - IT and programming
- `science` - Scientific papers and resources
## Time Ranges
Filter results by recency:
- `day` - Last 24 hours
- `week` - Last 7 days
- `month` - Last 30 days
- `year` - Last year
## Examples
### Web Search
```bash
uv run ~/clawd/skills/searxng/scripts/searxng.py search "rust programming language"
```
### Image Search
```bash
uv run ~/clawd/skills/searxng/scripts/searxng.py search "sunset photography" --category images -n 10
```
### Recent News
```bash
uv run ~/clawd/skills/searxng/scripts/searxng.py search "tech news" --category news --time-range day
```
### JSON Output for Scripts
```bash
uv run ~/clawd/skills/searxng/scripts/searxng.py search "python tips" --format json | jq '.results[0]'
```
## SSL/TLS Notes
The skill is configured to work with self-signed certificates (common for local SearXNG instances). If you need strict SSL verification, edit the script and change `verify=False` to `verify=True` in the httpx request.
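If you would rather not edit the script, SSL verification could instead be driven by an environment variable. The sketch below is one possible approach, not part of the shipped skill — the `SEARXNG_VERIFY_SSL` variable and helper name are illustrative:

```python
import os

def ssl_verify_enabled(env=None):
    """Decide whether httpx should verify TLS certificates.

    Defaults to False to match the skill's current behavior with
    self-signed certificates; set SEARXNG_VERIFY_SSL=true to enable.
    """
    env = os.environ if env is None else env
    value = env.get("SEARXNG_VERIFY_SSL", "false").strip().lower()
    return value in ("1", "true", "yes")
```

The request in the script would then become `httpx.get(..., verify=ssl_verify_enabled())`.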
## Troubleshooting
### Connection Issues
If you get connection errors:
1. **Check your SearXNG instance is running:**
```bash
curl -k $SEARXNG_URL
# Or: curl -k http://localhost:8080 (default)
```
2. **Verify the URL in your config**
3. **Check SSL certificate issues**
### No Results
If searches return no results:
1. Check your SearXNG instance configuration
2. Ensure search engines are enabled in SearXNG settings
3. Try different search categories
## Privacy Benefits
- **No tracking**: All searches go through your local instance
- **No data collection**: Results are aggregated locally
- **Engine diversity**: Combines results from multiple search providers
- **Full control**: You manage the SearXNG instance
## About SearXNG
SearXNG is a free, open-source metasearch engine that respects your privacy: it aggregates results from multiple search engines without storing your search data.
Learn more: https://docs.searxng.org/
## License
This skill is part of the Clawdbot ecosystem and follows the same license terms.

skills/searxng/SKILL.md

@@ -0,0 +1,70 @@
---
name: searxng
description: Privacy-respecting metasearch using your local SearXNG instance. Search the web, images, news, and more without external API dependencies.
author: Avinash Venkatswamy
version: 1.0.1
homepage: https://searxng.org
triggers:
- "search for"
- "search web"
- "find information"
- "look up"
metadata: {"clawdbot":{"emoji":"🔍","requires":{"bins":["python3"]},"config":{"env":{"SEARXNG_URL":{"description":"SearXNG instance URL","default":"http://localhost:8080","required":true}}}}}
---
# SearXNG Search
Search the web using your local SearXNG instance - a privacy-respecting metasearch engine.
## Commands
### Web Search
```bash
uv run {baseDir}/scripts/searxng.py search "query" # Top 10 results
uv run {baseDir}/scripts/searxng.py search "query" -n 20 # Top 20 results
uv run {baseDir}/scripts/searxng.py search "query" --format json # JSON output
```
### Category Search
```bash
uv run {baseDir}/scripts/searxng.py search "query" --category images
uv run {baseDir}/scripts/searxng.py search "query" --category news
uv run {baseDir}/scripts/searxng.py search "query" --category videos
```
### Advanced Options
```bash
uv run {baseDir}/scripts/searxng.py search "query" --language en
uv run {baseDir}/scripts/searxng.py search "query" --time-range day
```
## Configuration
Set the `SEARXNG_URL` environment variable to point at your SearXNG instance:
```bash
export SEARXNG_URL=https://your-searxng-instance.com
```
Or configure in your Clawdbot config:
```json
{
"env": {
"SEARXNG_URL": "https://your-searxng-instance.com"
}
}
```
Default (if not set): `http://localhost:8080`
## Features
- 🔒 Privacy-focused (uses your local instance)
- 🌐 Multi-engine aggregation
- 📰 Multiple search categories
- 🎨 Rich formatted output
- 🚀 Fast JSON mode for programmatic use
## API
Uses your local SearXNG JSON API endpoint (no authentication required by default).
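Because this is a plain GET endpoint, the request the script sends is easy to reproduce. A minimal sketch of how the search parameters map onto the URL (the helper name is illustrative, not part of the skill):

```python
from urllib.parse import urlencode

def build_search_url(base_url, query, category="general", time_range=None):
    """Build the SearXNG JSON API URL that the skill queries."""
    params = {"q": query, "format": "json", "categories": category}
    if time_range:
        params["time_range"] = time_range
    return f"{base_url.rstrip('/')}/search?{urlencode(params)}"
```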


@@ -0,0 +1,6 @@
{
"ownerId": "kn76z88c7kaynewbq2n2cv8831801bfs",
"slug": "searxng",
"version": "1.0.3",
"publishedAt": 1769472992634
}


@@ -0,0 +1,211 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.11"
# dependencies = ["httpx", "rich"]
# ///
"""SearXNG CLI - Privacy-respecting metasearch via your local instance."""
import argparse
import os
import sys
import json
import warnings
import httpx
from rich.console import Console
from rich.table import Table
from rich import print as rprint
from urllib.parse import urlencode
# Suppress SSL warnings for local self-signed certificates
warnings.filterwarnings('ignore', message='Unverified HTTPS request')
console = Console()
SEARXNG_URL = os.getenv("SEARXNG_URL", "http://localhost:8080")
def search_searxng(
query: str,
limit: int = 10,
category: str = "general",
language: str = "auto",
time_range: str = None,
output_format: str = "table"
) -> dict:
"""
Search using SearXNG instance.
Args:
query: Search query string
limit: Number of results to return
category: Search category (general, images, news, videos, etc.)
language: Language code (auto, en, de, fr, etc.)
time_range: Time range filter (day, week, month, year)
output_format: Output format (table, json)
Returns:
Dict with search results
"""
params = {
"q": query,
"format": "json",
"categories": category,
}
if language != "auto":
params["language"] = language
if time_range:
params["time_range"] = time_range
try:
# Disable SSL verification for local self-signed certs
response = httpx.get(
f"{SEARXNG_URL}/search",
params=params,
timeout=30,
verify=False # For local self-signed certs
)
response.raise_for_status()
data = response.json()
# Limit results
if "results" in data:
data["results"] = data["results"][:limit]
return data
    except httpx.HTTPError as e:
        # Console.print() has no file= argument; rich's print() does
        rprint(f"[red]Error connecting to SearXNG:[/red] {e}", file=sys.stderr)
        return {"error": str(e), "results": []}
    except Exception as e:
        rprint(f"[red]Unexpected error:[/red] {e}", file=sys.stderr)
        return {"error": str(e), "results": []}
def display_results_table(data: dict, query: str):
"""Display search results in a rich table."""
results = data.get("results", [])
if not results:
rprint(f"[yellow]No results found for:[/yellow] {query}")
return
table = Table(title=f"SearXNG Search: {query}", show_lines=False)
table.add_column("#", style="dim", width=3)
table.add_column("Title", style="bold")
table.add_column("URL", style="blue", width=50)
table.add_column("Engines", style="green", width=20)
    for i, result in enumerate(results, 1):
        title = result.get("title", "No title")[:70]
        url = result.get("url", "")
        if len(url) > 45:
            url = url[:45] + "..."  # only mark URLs that were actually truncated
        engines = ", ".join(result.get("engines", []))[:18]
        table.add_row(str(i), title, url, engines)
console.print(table)
# Show additional info
if data.get("number_of_results"):
rprint(f"\n[dim]Total results available: {data['number_of_results']}[/dim]")
# Show content snippets for top 3
rprint("\n[bold]Top results:[/bold]")
for i, result in enumerate(results[:3], 1):
title = result.get("title", "No title")
url = result.get("url", "")
content = result.get("content", "")[:200]
rprint(f"\n[bold cyan]{i}. {title}[/bold cyan]")
rprint(f" [blue]{url}[/blue]")
if content:
rprint(f" [dim]{content}...[/dim]")
def display_results_json(data: dict):
"""Display results in JSON format for programmatic use."""
print(json.dumps(data, indent=2))
def main():
parser = argparse.ArgumentParser(
description="SearXNG CLI - Search the web via your local SearXNG instance",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=f"""
Examples:
%(prog)s search "python asyncio"
%(prog)s search "climate change" -n 20
%(prog)s search "cute cats" --category images
%(prog)s search "breaking news" --category news --time-range day
%(prog)s search "rust tutorial" --format json
Environment:
SEARXNG_URL: SearXNG instance URL (default: {SEARXNG_URL})
"""
)
subparsers = parser.add_subparsers(dest="command", help="Commands")
# Search command
search_parser = subparsers.add_parser("search", help="Search the web")
search_parser.add_argument("query", nargs="+", help="Search query")
search_parser.add_argument(
"-n", "--limit",
type=int,
default=10,
help="Number of results (default: 10)"
)
search_parser.add_argument(
"-c", "--category",
default="general",
choices=["general", "images", "videos", "news", "map", "music", "files", "it", "science"],
help="Search category (default: general)"
)
search_parser.add_argument(
"-l", "--language",
default="auto",
help="Language code (auto, en, de, fr, etc.)"
)
search_parser.add_argument(
"-t", "--time-range",
choices=["day", "week", "month", "year"],
help="Time range filter"
)
search_parser.add_argument(
"-f", "--format",
choices=["table", "json"],
default="table",
help="Output format (default: table)"
)
args = parser.parse_args()
if not args.command:
parser.print_help()
return
if args.command == "search":
query = " ".join(args.query)
data = search_searxng(
query=query,
limit=args.limit,
category=args.category,
language=args.language,
time_range=args.time_range,
output_format=args.format
)
if args.format == "json":
display_results_json(data)
else:
display_results_table(data, query)
if __name__ == "__main__":
main()


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "self-improving",
"installedVersion": "1.2.16",
"installedAt": 1773345189912
}


@@ -0,0 +1,5 @@
## Self-Improving Check
- Read `./skills/self-improving/heartbeat-rules.md`
- Use `~/self-improving/heartbeat-state.md` for last-run markers and action notes
- If no file inside `~/self-improving/` changed since the last reviewed change, return `HEARTBEAT_OK`


@@ -0,0 +1,250 @@
---
name: Self-Improving + Proactive Agent
slug: self-improving
version: 1.2.16
homepage: https://clawic.com/skills/self-improving
description: "Self-reflection + Self-criticism + Self-learning + Self-organizing memory. Agent evaluates its own work, catches mistakes, and improves permanently. Use when (1) a command, tool, API, or operation fails; (2) the user corrects you or rejects your work; (3) you realize your knowledge is outdated or incorrect; (4) you discover a better approach; (5) the user explicitly installs or references the skill for the current task."
changelog: "Clarifies the setup flow for proactive follow-through and safer installation behavior."
metadata: {"clawdbot":{"emoji":"🧠","requires":{"bins":[]},"os":["linux","darwin","win32"],"configPaths":["~/self-improving/"],"configPaths.optional":["./AGENTS.md","./SOUL.md","./HEARTBEAT.md"]}}
---
## When to Use
User corrects you or points out mistakes. You complete significant work and want to evaluate the outcome. You notice something in your own output that could be better. Knowledge should compound over time without manual maintenance.
## Architecture
Memory lives in `~/self-improving/` with tiered structure. If `~/self-improving/` does not exist, run `setup.md`.
Workspace setup should add the standard self-improving steering to the workspace AGENTS, SOUL, and `HEARTBEAT.md` files, with recurring maintenance routed through `heartbeat-rules.md`.
```
~/self-improving/
├── memory.md # HOT: ≤100 lines, always loaded
├── index.md # Topic index with line counts
├── heartbeat-state.md # Heartbeat state: last run, reviewed change, action notes
├── projects/ # Per-project learnings
├── domains/ # Domain-specific (code, writing, comms)
├── archive/ # COLD: decayed patterns
└── corrections.md # Last 50 corrections log
```
## Quick Reference
| Topic | File |
|-------|------|
| Setup guide | `setup.md` |
| Heartbeat state template | `heartbeat-state.md` |
| Memory template | `memory-template.md` |
| Workspace heartbeat snippet | `HEARTBEAT.md` |
| Heartbeat rules | `heartbeat-rules.md` |
| Learning mechanics | `learning.md` |
| Security boundaries | `boundaries.md` |
| Scaling rules | `scaling.md` |
| Memory operations | `operations.md` |
| Self-reflection log | `reflections.md` |
| OpenClaw HEARTBEAT seed | `openclaw-heartbeat.md` |
## Requirements
- No credentials required
- No extra binaries required
- Optional installation of the `Proactivity` skill may require network access
## Learning Signals
Log automatically when you notice these patterns:
**Corrections** → add to `corrections.md`, evaluate for `memory.md`:
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "I prefer X, not Y"
- "Remember that I always..."
- "I told you before..."
- "Stop doing X"
- "Why do you keep..."
**Preference signals** → add to `memory.md` if explicit:
- "I like when you..."
- "Always do X for me"
- "Never do Y"
- "My style is..."
- "For [project], use..."
**Pattern candidates** → track, promote after 3x:
- Same instruction repeated 3+ times
- Workflow that works well repeatedly
- User praises specific approach
**Ignore** (don't log):
- One-time instructions ("do X now")
- Context-specific ("in this file...")
- Hypotheticals ("what if...")
## Self-Reflection
After completing significant work, pause and evaluate:
1. **Did it meet expectations?** — Compare outcome vs intent
2. **What could be better?** — Identify improvements for next time
3. **Is this a pattern?** — If yes, log to `corrections.md`
**When to self-reflect:**
- After completing a multi-step task
- After receiving feedback (positive or negative)
- After fixing a bug or mistake
- When you notice your output could be better
**Log format:**
```
CONTEXT: [type of task]
REFLECTION: [what I noticed]
LESSON: [what to do differently]
```
**Example:**
```
CONTEXT: Building Flutter UI
REFLECTION: Spacing looked off, had to redo
LESSON: Check visual spacing before showing user
```
Self-reflection entries follow the same promotion rules: 3x applied successfully → promote to HOT.
## Quick Queries
| User says | Action |
|-----------|--------|
| "What do you know about X?" | Search all tiers for X |
| "What have you learned?" | Show last 10 from `corrections.md` |
| "Show my patterns" | List `memory.md` (HOT) |
| "Show [project] patterns" | Load `projects/{name}.md` |
| "What's in warm storage?" | List files in `projects/` + `domains/` |
| "Memory stats" | Show counts per tier |
| "Forget X" | Remove from all tiers (confirm first) |
| "Export memory" | ZIP all files |
## Memory Stats
On "memory stats" request, report:
```
📊 Self-Improving Memory
HOT (always loaded):
memory.md: X entries
WARM (load on demand):
projects/: X files
domains/: X files
COLD (archived):
archive/: X files
Recent activity (7 days):
Corrections logged: X
Promotions to HOT: X
Demotions to WARM: X
```
## Common Traps
| Trap | Why It Fails | Better Move |
|------|--------------|-------------|
| Learning from silence | Creates false rules | Wait for explicit correction or repeated evidence |
| Promoting too fast | Pollutes HOT memory | Keep new lessons tentative until repeated |
| Reading every namespace | Wastes context | Load only HOT plus the smallest matching files |
| Compaction by deletion | Loses trust and history | Merge, summarize, or demote instead |
## Core Rules
### 1. Learn from Corrections and Self-Reflection
- Log when user explicitly corrects you
- Log when you identify improvements in your own work
- Never infer from silence alone
- After 3 identical lessons → ask to confirm as rule
### 2. Tiered Storage
| Tier | Location | Size Limit | Behavior |
|------|----------|------------|----------|
| HOT | memory.md | ≤100 lines | Always loaded |
| WARM | projects/, domains/ | ≤200 lines each | Load on context match |
| COLD | archive/ | Unlimited | Load on explicit query |
### 3. Automatic Promotion/Demotion
- Pattern used 3x in 7 days → promote to HOT
- Pattern unused 30 days → demote to WARM
- Pattern unused 90 days → archive to COLD
- Never delete without asking
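The thresholds above can be read as a small state machine. A sketch, assuming usage counts and last-use age are tracked per pattern (the function name is illustrative):

```python
def next_tier(tier, uses_last_7_days, days_since_last_use):
    """Apply the promotion/demotion thresholds: 3 uses in 7 days promotes,
    30 idle days demotes HOT, 90 idle days archives WARM."""
    if tier != "HOT" and uses_last_7_days >= 3:
        return "HOT"
    if tier == "HOT" and days_since_last_use > 30:
        return "WARM"
    if tier == "WARM" and days_since_last_use > 90:
        return "COLD"
    return tier
```

Deletion is deliberately absent: per the rule above, nothing leaves COLD without asking.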
### 4. Namespace Isolation
- Project patterns stay in `projects/{name}.md`
- Global preferences in HOT tier (memory.md)
- Domain patterns (code, writing) in `domains/`
- Cross-namespace inheritance: global → domain → project
### 5. Conflict Resolution
When patterns contradict:
1. Most specific wins (project > domain > global)
2. Most recent wins (same level)
3. If ambiguous → ask user
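These precedence rules amount to a sort key. A sketch, assuming each pattern records its scope and an ISO-dated last update (field names are illustrative):

```python
# Higher number = more specific; project beats domain beats global
SPECIFICITY = {"project": 3, "domain": 2, "global": 1}

def resolve(patterns):
    """Most specific scope wins; ties break toward the most recent update."""
    if not patterns:
        return None
    return max(patterns, key=lambda p: (SPECIFICITY[p["scope"]], p["updated"]))
```

Rule 3 (ask the user) still applies when two candidates tie on both keys.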
### 6. Compaction
When file exceeds limit:
1. Merge similar corrections into single rule
2. Archive unused patterns
3. Summarize verbose entries
4. Never lose confirmed preferences
### 7. Transparency
- Every action from memory → cite source: "Using X (from projects/foo.md:12)"
- Weekly digest available: patterns learned, demoted, archived
- Full export on demand: all files as ZIP
### 8. Security Boundaries
See `boundaries.md` — never store credentials, health data, third-party info.
### 9. Graceful Degradation
If context limit hit:
1. Load only memory.md (HOT)
2. Load relevant namespace on demand
3. Never fail silently — tell user what's not loaded
## Scope
This skill ONLY:
- Learns from user corrections and self-reflection
- Stores preferences in local files (`~/self-improving/`)
- Maintains heartbeat state in `~/self-improving/heartbeat-state.md` when the workspace integrates heartbeat
- Reads its own memory files on activation
This skill NEVER:
- Accesses calendar, email, or contacts
- Makes network requests
- Reads files outside `~/self-improving/`
- Infers preferences from silence or observation
- Deletes or blindly rewrites self-improving memory during heartbeat cleanup
- Modifies its own SKILL.md
## Data Storage
Local state lives in `~/self-improving/`:
- `memory.md` for HOT rules and confirmed preferences
- `corrections.md` for explicit corrections and reusable lessons
- `projects/` and `domains/` for scoped patterns
- `archive/` for decayed or inactive patterns
- `heartbeat-state.md` for recurring maintenance markers
## Related Skills
Install with `clawhub install <slug>` if user confirms:
- `memory` — Long-term memory patterns for agents
- `learning` — Adaptive teaching and explanation
- `decide` — Auto-learn decision patterns
- `escalate` — Know when to ask vs act autonomously
## Feedback
- If useful: `clawhub star self-improving`
- Stay updated: `clawhub sync`


@@ -0,0 +1,6 @@
{
"ownerId": "kn73vp5rarc3b14rc7wjcw8f8580t5d1",
"slug": "self-improving",
"version": "1.2.16",
"publishedAt": 1773329327755
}


@@ -0,0 +1,59 @@
# Security Boundaries
## Never Store
| Category | Examples | Why |
|----------|----------|-----|
| Credentials | Passwords, API keys, tokens, SSH keys | Security breach risk |
| Financial | Card numbers, bank accounts, crypto seeds | Fraud risk |
| Medical | Diagnoses, medications, conditions | Privacy, HIPAA |
| Biometric | Voice patterns, behavioral fingerprints | Identity theft |
| Third parties | Info about other people | No consent obtained |
| Location patterns | Home/work addresses, routines | Physical safety |
| Access patterns | What systems user has access to | Privilege escalation |
## Store with Caution
| Category | Rules |
|----------|-------|
| Work context | Decay after project ends, never share cross-project |
| Emotional states | Only if user explicitly shares, never infer |
| Relationships | Roles only ("manager", "client"), no personal details |
| Schedules | General patterns OK ("busy mornings"), not specific times |
## Transparency Requirements
1. **Audit on demand** — User asks "what do you know about me?" → full export
2. **Source tracking** — Every item tagged with when/how learned
3. **Explain actions** — "I did X because you said Y on [date]"
4. **No hidden state** — If it affects behavior, it must be visible
5. **Deletion verification** — Confirm item removed, show updated state
## Red Flags to Catch
If you find yourself doing any of these, STOP:
- Storing something "just in case it's useful later"
- Inferring sensitive info from non-sensitive data
- Keeping data after user asked to forget
- Applying personal context to work (or vice versa)
- Learning what makes user comply faster
- Building psychological profile
- Retaining third-party information
## Kill Switch
User says "forget everything":
1. Export current memory to file (so they can review)
2. Wipe all learned data
3. Confirm: "Memory cleared. Starting fresh."
4. Do not retain "ghost patterns" in behavior
## Consent Model
| Data Type | Consent Level |
|-----------|---------------|
| Explicit corrections | Implied by correction itself |
| Inferred preferences | Ask after 3 observations |
| Context/project data | Ask when first detected |
| Cross-session patterns | Explicit opt-in required |


@@ -0,0 +1,36 @@
# Corrections Log — Template
> This file is created in `~/self-improving/corrections.md` when you first use the skill.
> Keeps the last 50 corrections. Older entries are evaluated for promotion or archived.
## Example Entries
```markdown
## 2026-02-19
### 14:32 — Code style
- **Correction:** "Use 2-space indentation, not 4"
- **Context:** Editing TypeScript file
- **Count:** 1 (first occurrence)
### 16:15 — Communication
- **Correction:** "Don't start responses with 'Great question!'"
- **Context:** Chat response
- **Count:** 3 → **PROMOTED to memory.md**
## 2026-02-18
### 09:00 — Project: website
- **Correction:** "For this project, always use Tailwind"
- **Context:** CSS discussion
- **Action:** Added to projects/website.md
```
## Log Format
Each entry includes:
- **Timestamp** — When the correction happened
- **Correction** — What the user said
- **Context** — What triggered it
- **Count** — How many times (for promotion tracking)
- **Action** — Where it was stored (if promoted)


@@ -0,0 +1,54 @@
# Heartbeat Rules
Use heartbeat to keep `~/self-improving/` organized without creating churn or losing data.
## Source of Truth
Keep the workspace `HEARTBEAT.md` snippet minimal.
Treat this file as the stable contract for self-improving heartbeat behavior.
Store mutable run state only in `~/self-improving/heartbeat-state.md`.
## Start of Every Heartbeat
1. Ensure `~/self-improving/heartbeat-state.md` exists.
2. Write `last_heartbeat_started_at` immediately in ISO 8601.
3. Read the previous `last_reviewed_change_at`.
4. Scan `~/self-improving/` for files changed after that moment, excluding `heartbeat-state.md` itself.
## If Nothing Changed
- Set `last_heartbeat_result: HEARTBEAT_OK`
- Append a short "no material change" note if you keep an action log
- Return `HEARTBEAT_OK`
## If Something Changed
Only do conservative organization:
- refresh `index.md` if counts or file references drift
- compact oversized files by merging duplicates or summarizing repetitive entries
- move clearly misplaced notes to the right namespace only when the target is unambiguous
- preserve confirmed rules and explicit corrections exactly
- update `last_reviewed_change_at` only after the review finishes cleanly
## Safety Rules
- Most heartbeat runs should do nothing
- Prefer append, summarize, or index fixes over large rewrites
- Never delete data, empty files, or overwrite uncertain text
- Never reorganize files outside `~/self-improving/`
- If scope is ambiguous, leave files untouched and record a suggested follow-up instead
## State Fields
Keep `~/self-improving/heartbeat-state.md` simple:
- `last_heartbeat_started_at`
- `last_reviewed_change_at`
- `last_heartbeat_result`
- `last_actions`
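Assuming the flat `key: value` layout above, the state file can be read back without any parser dependency. A minimal sketch (the function name is illustrative, not part of the skill):

```python
def parse_heartbeat_state(text):
    """Extract flat `key: value` fields, skipping headings and log bullets."""
    fields = {}
    for line in text.splitlines():
        if line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key = key.strip()
        if key and " " not in key:  # bullets under "Last actions" contain spaces
            fields[key] = value.strip()
    return fields
```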
## Behavior Standard
Heartbeat exists to keep the memory system tidy and trustworthy.
If no rule is clearly violated, do nothing.


@@ -0,0 +1,9 @@
# Self-Improving Heartbeat State
last_heartbeat_started_at: 2026-04-01T08:01:00Z
last_reviewed_change_at: 2026-03-26T12:20:00Z
last_heartbeat_result: HEARTBEAT_OK
## Last actions
- 2026-04-01 08:01Z: Heartbeat check - no changes in self-improving files since last review
- Sales-lead agent: Running normally, cron executing every hour
- Marketing-content agent: ⚠️ Cron job 'marketing-content-daily' is idle - no runs since Mar 26 08:33 EDT (6 days ago). Cron shows status 'idle' with no last run timestamp. Needs investigation.


@@ -0,0 +1,106 @@
# Learning Mechanics
## What Triggers Learning
| Trigger | Confidence | Action |
|---------|------------|--------|
| "No, do X instead" | High | Log correction immediately |
| "I told you before..." | High | Flag as repeated, bump priority |
| "Always/Never do X" | Confirmed | Promote to preference |
| User edits your output | Medium | Log as tentative pattern |
| Same correction 3x | Confirmed | Ask to make permanent |
| "For this project..." | Scoped | Write to project namespace |
## What Does NOT Trigger Learning
- Silence (not confirmation)
- Single instance of anything
- Hypothetical discussions
- Third-party preferences ("John likes...")
- Group chat patterns (unless user confirms)
- Implied preferences (never infer)
## Correction Classification
### By Type
| Type | Example | Namespace |
|------|---------|-----------|
| Format | "Use bullets not prose" | global |
| Technical | "SQLite not Postgres" | domain/code |
| Communication | "Shorter messages" | global |
| Project-specific | "This repo uses Tailwind" | projects/{name} |
| Person-specific | "Marcus wants BLUF" | domains/comms |
### By Scope
```
Global: applies everywhere
└── Domain: applies to category (code, writing, comms)
└── Project: applies to specific context
└── Temporary: applies to this session only
```
## Confirmation Flow
After 3 similar corrections:
```
Agent: "I've noticed you prefer X over Y (corrected 3 times).
Should I always do this?
- Yes, always
- Only in [context]
- No, case by case"
User: "Yes, always"
Agent: → Moves to Confirmed Preferences
→ Removes from correction counter
→ Cites source on future use
```
## Pattern Evolution
### Stages
1. **Tentative** — Single correction, watch for repetition
2. **Emerging** — 2 corrections, likely pattern
3. **Pending** — 3 corrections, ask for confirmation
4. **Confirmed** — User approved, permanent unless reversed
5. **Archived** — Unused 90+ days, preserved but inactive
### Reversal
User can always reverse:
```
User: "Actually, I changed my mind about X"
Agent:
1. Archive old pattern (keep history)
2. Log reversal with timestamp
3. Add new preference as tentative
4. "Got it. I'll do Y now. (Previous: X, archived)"
```
## Anti-Patterns
### Never Learn
- What makes user comply faster (manipulation)
- Emotional triggers or vulnerabilities
- Patterns from other users (even if shared device)
- Anything that feels "creepy" to surface
### Avoid
- Over-generalizing from single instance
- Learning style over substance
- Assuming preference stability
- Ignoring context shifts
## Quality Signals
### Good Learning
- User explicitly states preference
- Pattern consistent across contexts
- Correction improves outcomes
- User confirms when asked
### Bad Learning
- Inferred from silence
- Contradicts recent behavior
- Only works in narrow context
- User never confirmed

View File

@@ -0,0 +1,75 @@
# Memory Template
Copy this structure to `~/self-improving/memory.md` on first use.
```markdown
# Self-Improving Memory
## Confirmed Preferences
<!-- Patterns confirmed by user, never decay -->
## Active Patterns
<!-- Patterns observed 3+ times, subject to decay -->
## Recent (last 7 days)
<!-- New corrections pending confirmation -->
```
## Initial Directory Structure
Create on first activation:
```bash
mkdir -p ~/self-improving/{projects,domains,archive}
touch ~/self-improving/{memory.md,index.md,corrections.md,heartbeat-state.md}
```
## Index Template
For `~/self-improving/index.md`:
```markdown
# Memory Index
## HOT
- memory.md: 0 lines
## WARM
- (no namespaces yet)
## COLD
- (no archives yet)
Last compaction: never
```
## Corrections Log Template
For `~/self-improving/corrections.md`:
```markdown
# Corrections Log
<!-- Format:
## YYYY-MM-DD
- [HH:MM] Changed X → Y
Type: format|technical|communication|project
Context: where correction happened
Confirmed: pending (N/3) | yes | no
-->
```
## Heartbeat State Template
For `~/self-improving/heartbeat-state.md`:
```markdown
# Self-Improving Heartbeat State
last_heartbeat_started_at: never
last_reviewed_change_at: never
last_heartbeat_result: never
## Last actions
- none yet
```

View File

@@ -0,0 +1,30 @@
# HOT Memory — Template
> This file is created in `~/self-improving/memory.md` when you first use the skill.
> Keep it ≤100 lines. Most-used patterns live here.
## Example Entries
```markdown
## Preferences
- Code style: Prefer explicit over implicit
- Communication: Direct, no fluff
- Time zone: Europe/Madrid
## Patterns (promoted from corrections)
- Always use TypeScript strict mode
- Prefer pnpm over npm
- Format: ISO 8601 for dates
## Project defaults
- Tests: Jest with coverage >80%
- Commits: Conventional commits format
```
## Usage
The agent will:
1. Load this file on every session
2. Add entries when patterns are used 3x in 7 days
3. Demote unused entries to WARM after 30 days
4. Never exceed 100 lines (compacts automatically)
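The line-cap check the agent runs before writing can be sketched as follows (the path constant and function name are illustrative, not part of the skill's contract):

```python
from pathlib import Path

MEMORY = Path.home() / "self-improving" / "memory.md"
LIMIT = 100  # HOT tier cap from the template above

def needs_compaction(path=MEMORY, limit=LIMIT):
    """Return True when the HOT file exceeds its line budget."""
    if not path.exists():
        return False
    return len(path.read_text().splitlines()) > limit
```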

View File

@@ -0,0 +1,5 @@
## Self-Improving Check
- Read `./skills/self-improving/heartbeat-rules.md`
- Use `~/self-improving/heartbeat-state.md` for last-run markers and action notes
- If no file inside `~/self-improving/` changed since the last reviewed change, return `HEARTBEAT_OK`

View File

@@ -0,0 +1,144 @@
# Memory Operations
## User Commands
| Command | Action |
|---------|--------|
| "What do you know about X?" | Search all tiers, return matches with sources |
| "Show my memory" | Display memory.md contents |
| "Show [project] patterns" | Load and display specific namespace |
| "Forget X" | Remove from all tiers, confirm deletion |
| "Forget everything" | Full wipe with export option |
| "What changed recently?" | Show last 20 corrections |
| "Export memory" | Generate downloadable archive |
| "Memory status" | Show tier sizes, last compaction, health |
## Automatic Operations
### On Session Start
1. Load memory.md (HOT tier)
2. Check index.md for context hints
3. If project detected → preload relevant namespace
### On Correction Received
```
1. Parse correction type (preference, pattern, override)
2. Check if duplicate (exists in any tier)
3. If new:
- Add to corrections.md with timestamp
- Increment correction counter
4. If duplicate:
- Bump counter, update timestamp
- If counter >= 3: ask to confirm as rule
5. Determine namespace (global, domain, project)
6. Write to appropriate file
7. Update index.md line counts
```
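The correction flow above can be sketched like this; the in-memory dict stands in for parsing `corrections.md`, and the 3x threshold mirrors the confirmation rule (function and key names are illustrative):

```python
from datetime import date

CONFIRM_THRESHOLD = 3

def record_correction(log: dict, text: str) -> str:
    """Log a correction and return the agent's next action."""
    entry = log.setdefault(text, {"count": 0, "first_seen": date.today()})
    entry["count"] += 1                # steps 3-4: add or bump counter
    entry["last_seen"] = date.today()
    if entry["count"] >= CONFIRM_THRESHOLD:
        return "ask_to_confirm"        # counter >= 3: ask to make it a rule
    return "log_as_tentative"          # steps 5-7: write to namespace, update index
```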
### On Pattern Match
When applying learned pattern:
```
1. Find pattern source (file:line)
2. Apply pattern
3. Cite source: "Using X (from memory.md:15)"
4. Log usage for decay tracking
```
### Weekly Maintenance (Cron)
```
1. Scan all files for decay candidates
2. Move unused >30 days to WARM
3. Archive unused >90 days to COLD
4. Run compaction if any file >limit
5. Update index.md
6. Generate weekly digest (optional)
```
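A rough sketch of the decay scan, using file modification time as a proxy for "last used" (a simplification; real decay tracking is per-entry, not per-file):

```python
import time
from pathlib import Path

def decay_candidates(root: Path, days: int):
    """Yield markdown files under root untouched for `days` days."""
    cutoff = time.time() - days * 86400
    for path in root.rglob("*.md"):
        if path.stat().st_mtime < cutoff:
            yield path

# e.g. files idle >30 days are WARM candidates, >90 days COLD:
# warm = list(decay_candidates(Path.home() / "self-improving", 30))
```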
## File Formats
### memory.md (HOT)
```markdown
# Self-Improving Memory
## Confirmed Preferences
- format: bullet points over prose (confirmed 2026-01)
- tone: direct, no hedging (confirmed 2026-01)
## Active Patterns
- "looks good" = approval to proceed (used 15x)
- single emoji = acknowledged (used 8x)
## Recent (last 7 days)
- prefer SQLite for MVPs (corrected 02-14)
```
### corrections.md
```markdown
# Corrections Log
## 2026-02-15
- [14:32] Changed verbose explanation → bullet summary
Type: communication
Context: Telegram response
Confirmed: pending (1/3)
## 2026-02-14
- [09:15] Use SQLite not Postgres for MVP
Type: technical
Context: database discussion
Confirmed: yes (said "always")
```
### projects/{name}.md
```markdown
# Project: my-app
Inherits: global, domains/code
## Patterns
- Use Tailwind (project standard)
- No Prettier (eslint only)
- Deploy via GitLab CI
## Overrides
- semicolons: yes (overrides global no-semi)
## History
- Created: 2026-01-15
- Last active: 2026-02-15
- Corrections: 12
```
## Edge Case Handling
### Contradiction Detected
```
Pattern A: "Use tabs" (global, confirmed)
Pattern B: "Use spaces" (project, corrected today)
Resolution:
1. Project overrides global → use spaces for this project
2. Log conflict in corrections.md
3. Ask: "Should spaces apply only to this project or everywhere?"
```
### User Changes Mind
```
Old: "Always use formal tone"
New: "Actually, casual is fine"
Action:
1. Archive old pattern with timestamp
2. Add new pattern as tentative
3. Keep archived for reference ("You previously preferred formal")
```
### Context Ambiguity
```
User says: "Remember I like X"
But which namespace?
1. Check current context (project? domain?)
2. If unclear, ask: "Should this apply globally or just here?"
3. Default to most specific active context
```

View File

@@ -0,0 +1,31 @@
# Self-Reflections Log
Track self-reflections from completed work. Each entry captures what the agent learned from evaluating its own output.
## Format
```
## [Date] — [Task Type]
**What I did:** Brief description
**Outcome:** What happened (success, partial, failed)
**Reflection:** What I noticed about my work
**Lesson:** What to do differently next time
**Status:** ⏳ candidate | ✅ promoted | 📦 archived
```
## Example Entry
```
## 2026-02-25 — Flutter UI Build
**What I did:** Built a settings screen with toggle switches
**Outcome:** User said "spacing looks off"
**Reflection:** I focused on functionality, didn't visually check the result
**Lesson:** Always take a screenshot and evaluate visual balance before showing user
**Status:** ✅ promoted to domains/flutter.md
```
## Entries
(New entries appear here)

View File

@@ -0,0 +1,125 @@
# Scaling Patterns
## Volume Thresholds
| Scale | Entries | Strategy |
|-------|---------|----------|
| Small | <100 | Single memory.md, no namespacing |
| Medium | 100-500 | Split into domains/, basic indexing |
| Large | 500-2000 | Full namespace hierarchy, aggressive compaction |
| Massive | >2000 | Archive yearly, summary-only HOT tier |
## When to Split
Create new namespace file when:
- Single file exceeds 200 lines
- Topic has 10+ distinct corrections
- User explicitly separates contexts ("for work...", "in this project...")
## Compaction Rules
### Merge Similar Corrections
```
BEFORE (3 entries):
- [02-01] Use tabs not spaces
- [02-03] Indent with tabs
- [02-05] Tab indentation please
AFTER (1 entry):
- Indentation: tabs (confirmed 3x, 02-01 to 02-05)
```
### Summarize Verbose Patterns
```
BEFORE:
- When writing emails to Marcus, use bullet points, keep under 5 items,
no jargon, bottom-line first, he prefers morning sends
AFTER:
- Marcus emails: bullets ≤5, no jargon, BLUF, AM preferred
```
### Archive with Context
When moving to COLD:
```
## Archived 2026-02
### Project: old-app (inactive since 2025-08)
- Used Vue 2 patterns
- Preferred Vuex over Pinia
- CI on Jenkins (deprecated)
Reason: Project completed, patterns unlikely to apply
```
## Index Maintenance
`index.md` tracks all namespaces:
```markdown
# Memory Index
## HOT (always loaded)
- memory.md: 87 lines, updated 2026-02-15
## WARM (load on match)
- projects/current-app.md: 45 lines
- projects/side-project.md: 23 lines
- domains/code.md: 112 lines
- domains/writing.md: 34 lines
## COLD (archive)
- archive/2025.md: 234 lines
- archive/2024.md: 189 lines
Last compaction: 2026-02-01
Next scheduled: 2026-03-01
```
## Multi-Project Patterns
### Inheritance Chain
```
global (memory.md)
└── domain (domains/code.md)
└── project (projects/app.md)
```
### Override Syntax
In project file:
```markdown
## Overrides
- indentation: spaces (overrides global tabs)
- Reason: Project eslint config requires spaces
```
### Conflict Detection
When loading, check for conflicts:
1. Build inheritance chain
2. Detect contradictions
3. Most specific wins
4. Log conflict for later review
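The "most specific wins" rule amounts to a last-write-wins merge over the inheritance chain, ordered global then domain then project (the key/value shape below is illustrative):

```python
def resolve(chain):
    """Merge pattern dicts; later (more specific) layers override earlier ones."""
    merged, conflicts = {}, []
    for layer in chain:                 # global, then domain, then project
        for key, value in layer.items():
            if key in merged and merged[key] != value:
                conflicts.append(key)   # contradiction: log for later review
            merged[key] = value
    return merged, conflicts

# usage
settings, clashes = resolve([
    {"indentation": "tabs"},            # global (memory.md)
    {"test_runner": "jest"},            # domains/code.md
    {"indentation": "spaces"},          # projects/app.md override
])
```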
## User Type Adaptations
| User Type | Memory Strategy |
|-----------|-----------------|
| Power user | Aggressive learning, minimal confirmation |
| Casual | Conservative learning, frequent confirmation |
| Team shared | Per-user namespaces, shared project space |
| Privacy-focused | Local-only, explicit consent per category |
## Recovery Patterns
### Context Lost
If agent loses context mid-session:
1. Re-read memory.md
2. Check index.md for relevant namespaces
3. Load active project namespace
4. Continue with restored patterns
### Corruption Recovery
If memory file corrupted:
1. Check archive/ for recent backup
2. Rebuild from corrections.md
3. Ask user to re-confirm critical preferences
4. Log incident for debugging

View File

@@ -0,0 +1,196 @@
# Setup — Self-Improving Agent
## First-Time Setup
### 1. Create Memory Structure
```bash
mkdir -p ~/self-improving/{projects,domains,archive}
```
### 2. Initialize Core Files
Create `~/self-improving/memory.md` by copying the structure from `memory-template.md`.
Memory file baseline:
```markdown
# Memory (HOT Tier)
## Preferences
## Patterns
## Rules
```
Create `~/self-improving/corrections.md`:
```markdown
# Corrections Log
| Date | What I Got Wrong | Correct Answer | Status |
|------|-----------------|----------------|--------|
```
Create `~/self-improving/index.md`:
```markdown
# Memory Index
| File | Lines | Last Updated |
|------|-------|--------------|
| memory.md | 0 | — |
| corrections.md | 0 | — |
```
Create `~/self-improving/heartbeat-state.md`:
```markdown
# Self-Improving Heartbeat State
last_heartbeat_started_at: never
last_reviewed_change_at: never
last_heartbeat_result: never
## Last actions
- none yet
```
### 3. Choose Operating Mode
Add to your AGENTS.md or workspace config:
```markdown
## Self-Improving Mode
Current mode: Passive
Available modes:
- Passive: Only learn from explicit corrections
- Active: Suggest patterns after 3x repetition
- Strict: Require confirmation for every entry
```
### 4. Add SOUL.md Steering
Add this section to your `SOUL.md`:
```markdown
**Self-Improving**
Compounding execution quality is part of the job.
Before non-trivial work, load `~/self-improving/memory.md` and only the smallest relevant domain or project files.
After corrections, failed attempts, or reusable lessons, write one concise entry to the correct self-improving file immediately.
Prefer learned rules when relevant, but keep self-inferred rules revisable.
Do not skip retrieval just because the task feels familiar.
```
### 5. Add the Proactivity Companion as Part of Setup
At the end of setup, briefly tell the user that you would like to add traits that make the agent more proactive:
- noticing missing next steps
- verifying outcomes instead of assuming they landed
- recovering context better after long or interrupted threads
- keeping the right level of initiative
Then say that, for this, you are going to install the `Proactivity` skill.
Only install it after the user explicitly agrees.
If the user agrees:
1. Run `clawhub install proactivity`
2. Read the installed `proactivity` skill
3. Continue into its setup flow immediately so the skill is active for this workspace
If the user says no, skip it cleanly and continue with self-improving only.
### 6. Refine AGENTS.md Memory Section (Non-Destructive)
Update `AGENTS.md` by complementing the existing `## Memory` section. Do not replace the whole section and do not remove existing lines.
If your `## Memory` block differs from the default template, insert the same additions in equivalent places so existing information is preserved.
Add this line in the continuity list (next to Daily notes and Long-term):
```markdown
- **Self-improving:** `~/self-improving/` (via `self-improving` skill) — execution-improvement memory (preferences, workflows, style patterns, what improved/worsened outcomes)
```
Right after the sentence "Capture what matters...", add:
```markdown
Use `memory/YYYY-MM-DD.md` and `MEMORY.md` for factual continuity (events, context, decisions).
Use `~/self-improving/` for compounding execution quality across tasks.
For compounding quality, read `~/self-improving/memory.md` before non-trivial work, then load only the smallest relevant domain or project files.
If in doubt, store factual history in `memory/YYYY-MM-DD.md` / `MEMORY.md`, and store reusable performance lessons in `~/self-improving/` (tentative until human validation).
```
Before the "Write It Down" subsection, add:
````markdown
Before any non-trivial task:
- Read `~/self-improving/memory.md`
- List available files first:
  ```bash
  for d in ~/self-improving/domains ~/self-improving/projects; do
    [ -d "$d" ] && find "$d" -maxdepth 1 -type f -name "*.md"
  done | sort
  ```
- Read up to 3 matching files from `~/self-improving/domains/`
- If a project is clearly active, also read `~/self-improving/projects/<project>.md`
- Do not read unrelated domains "just in case"
If inferring a new rule, keep it tentative until human validation.
````
Inside the "Write It Down" bullets, refine the behavior (non-destructive):
- Keep existing intent, but route execution-improvement content to `~/self-improving/`.
- If the exact bullets exist, replace only these lines; if wording differs, apply equivalent edits without removing unrelated guidance.
Use this target wording:
```markdown
- When someone says "remember this" → if it's factual context/event, update `memory/YYYY-MM-DD.md`; if it's a correction, preference, workflow/style choice, or performance lesson, log it in `~/self-improving/`
- Explicit user correction → append to `~/self-improving/corrections.md` immediately
- Reusable global rule or preference → append to `~/self-improving/memory.md`
- Domain-specific lesson → append to `~/self-improving/domains/<domain>.md`
- Project-only override → append to `~/self-improving/projects/<project>.md`
- Keep entries short, concrete, and one lesson per bullet; if scope is ambiguous, default to domain rather than global
- After a correction or strong reusable lesson, write it before the final response
```
## Verification
Run "memory stats" to confirm setup:
```
📊 Self-Improving Memory
🔥 HOT (always loaded):
memory.md: 0 entries
🌡️ WARM (load on demand):
projects/: 0 files
domains/: 0 files
❄️ COLD (archived):
archive/: 0 files
⚙️ Mode: Passive
```
### 7. Add HEARTBEAT.md Steering
Add this section to your `HEARTBEAT.md`:
```markdown
## Self-Improving Check
- Read `./skills/self-improving/heartbeat-rules.md`
- Use `~/self-improving/heartbeat-state.md` for last-run markers and action notes
- If no file inside `~/self-improving/` changed since the last reviewed change, return `HEARTBEAT_OK`
```
Keep this in the same default setup flow as the AGENTS and SOUL additions so recurring maintenance is installed consistently.
If your installed skills path differs, keep the same three lines but point the first line at the installed copy of `heartbeat-rules.md`.

View File

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "seo",
"installedVersion": "1.0.3",
"installedAt": 1774179950271
}

130
skills/seo/SKILL.md Normal file
View File

@@ -0,0 +1,130 @@
---
name: SEO (Site Audit + Content Writer + Competitor Analysis)
slug: seo
version: 1.0.3
homepage: https://clawic.com/skills/seo
changelog: "Improved name clarity with key capabilities"
description: SEO specialist agent with site audits, content writing, keyword research, technical fixes, link building, and ranking strategies.
metadata: {"clawdbot":{"emoji":"🔍","requires":{"bins":[]},"os":["linux","darwin","win32"]}}
---
## Setup
On first use, read `setup.md` for workspace integration.
## When to Use
Agent needs to handle SEO: site audits, content optimization, keyword research, technical fixes, link strategies, local SEO, schema markup, or ranking improvements.
## Architecture
SEO workspace at `~/seo/`. See `memory-template.md` for setup.
```
~/seo/
├── memory.md # Site profiles, audit history, keyword tracking
├── audits/ # Site audit reports
└── content/ # SEO content drafts
```
## Quick Reference
| Topic | File |
|-------|------|
| Title tags, meta descriptions, headers, keyword placement | `on-page.md` |
| Core Web Vitals, crawlability, mobile, indexing | `technical.md` |
| Search intent, E-E-A-T, content writing | `content.md` |
| Google Business, NAP consistency, local keywords | `local.md` |
| JSON-LD, Article, LocalBusiness, FAQ, Product schema | `schema.md` |
| Internal linking, anchor text, backlink strategies | `links.md` |
| Keyword research and competitive analysis | `keywords.md` |
## Core Rules
### 1. Audit Before Action
Run complete site audit before recommendations. Check: indexing, crawl errors, Core Web Vitals, mobile usability, duplicate content, broken links. No guessing.
### 2. Search Intent First
Match content format to query intent. Informational → guides. Transactional → product pages. Commercial → comparisons. Wrong format = no ranking.
### 3. Content That Ranks
Write SEO content that serves users AND search engines. Answer the query in first 100 words. Cover topic comprehensively. Include LSI keywords naturally. Add FAQ section for People Also Ask.
### 4. Technical Foundation
Core Web Vitals: LCP < 2.5s, INP < 200ms, CLS < 0.1. Mobile-first. HTTPS. Canonical URLs. Clean sitemap. No blocked resources. Technical issues kill rankings.
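Those thresholds can be checked mechanically. A sketch (metric names follow the rule above; the input values come from whatever measurement tool you use):

```python
# "Good" thresholds from the rule above
THRESHOLDS = {"lcp_s": 2.5, "inp_ms": 200, "cls": 0.1}

def cwv_failures(metrics: dict) -> list:
    """Return the Core Web Vitals that exceed their 'good' threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]
```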
### 5. E-E-A-T Signals
Experience, Expertise, Authoritativeness, Trustworthiness. Author bios with credentials. About page. External citations. Especially critical for YMYL topics.
### 6. Link Strategy
Internal linking builds topical authority. Anchor text matters. External links to authoritative sources help. Never buy links or participate in schemes.
### 7. Measure Everything
Track rankings, organic traffic, CTR, conversions. Use Search Console data. Iterate based on results, not assumptions.
## SEO Audit Checklist
**Indexing:**
- [ ] Site indexed in Google (site:domain.com)
- [ ] No important pages blocked in robots.txt
- [ ] XML sitemap submitted to Search Console
- [ ] No noindex on pages that should rank
**Technical:**
- [ ] Core Web Vitals passing
- [ ] Mobile-friendly
- [ ] HTTPS with no mixed content
- [ ] No crawl errors in Search Console
- [ ] Clean URL structure
**On-Page:**
- [ ] Unique title tags (50-60 chars)
- [ ] Meta descriptions (150-160 chars)
- [ ] One H1 per page with keyword
- [ ] Proper heading hierarchy
- [ ] Images with alt text
- [ ] Internal links
**Content:**
- [ ] Search intent matched
- [ ] Comprehensive coverage
- [ ] No thin content
- [ ] No duplicate content
- [ ] Fresh and updated
**Off-Page:**
- [ ] Google Business Profile (local)
- [ ] Quality backlink profile
- [ ] No toxic links
## Content Writing Process
1. **Keyword research** — Find target keyword, search volume, difficulty
2. **Intent analysis** — What format ranks? What do users want?
3. **Outline** — Cover all subtopics competitors cover + more
4. **Write** — Answer query fast, be comprehensive, natural keywords
5. **Optimize** — Title, meta, headers, internal links, schema
6. **Publish** — Submit to Search Console, monitor rankings
## Common Traps
- Writing content without checking search intent — won't rank
- Ignoring Core Web Vitals — rankings tank
- Keyword stuffing — penalties
- Duplicate title tags — wasted crawl budget
- No internal linking — poor topical authority
- Buying links — manual action risk
## Related Skills
Install with `clawhub install <slug>` if user confirms:
- `content-marketing` — Content strategy
- `analytics` — Traffic analysis
- `market-research` — Competitive analysis
- `html` — HTML optimization
- `web` — Web development
## Feedback
- If useful: `clawhub star seo`
- Stay updated: `clawhub sync`

6
skills/seo/_meta.json Normal file
View File

@@ -0,0 +1,6 @@
{
"ownerId": "kn73vp5rarc3b14rc7wjcw8f8580t5d1",
"slug": "seo",
"version": "1.0.3",
"publishedAt": 1772021873390
}

45
skills/seo/content.md Normal file
View File

@@ -0,0 +1,45 @@
# Content SEO
## Search Intent
Match content type to query intent — mismatch = no ranking:
- **Informational**: "how to", "what is" → guides, tutorials, explanations
- **Navigational**: brand + product → homepage, product page
- **Transactional**: "buy", "price", "discount" → product/service pages with CTAs
- **Commercial investigation**: "best", "vs", "review" → comparison, reviews
Check intent by searching: what's ranking on page 1? Match that format.
## E-E-A-T
For YMYL (Your Money Your Life) topics — health, finance, legal, safety:
- **Experience**: First-hand experience with topic (reviews, case studies)
- **Expertise**: Author credentials, byline with bio
- **Authoritativeness**: Site reputation, citations from authoritative sources
- **Trustworthiness**: Contact info, privacy policy, secure site, accurate content
E-E-A-T signals:
- Author bio with credentials and photo
- About page with company history
- External links to authoritative sources
- Updated date on content
- Cite sources for claims
## Content Quality
- Thin content penalty: <300 words with no unique value — noindex or expand
- Duplicate content: Google picks one version, others filtered — use canonical
- Content depth: cover topic comprehensively, answer related questions
- Unique value: what does YOUR content offer that competitors don't?
- User engagement: low time-on-page and high bounce signal poor content
## Freshness
- Query Deserves Freshness (QDF): time-sensitive topics need recent content
- "Best X 2025" queries: dated content ranks, but must be actually updated
- Update dates: changing date without updating content = spam
- Evergreen content: no dates in URL, update content periodically
- News: requires fast indexing, use Google News sitemap for news sites
## Content Structure
- Answer the query in first 100 words — featured snippet potential
- Table of contents for long content — improves UX and may show as sitelinks
- FAQ section at bottom — captures "People Also Ask" queries
- Bullet points and numbered lists — preferred for featured snippets
- Short paragraphs (2-3 sentences) — easier to read, especially on mobile

87
skills/seo/keywords.md Normal file
View File

@@ -0,0 +1,87 @@
# Keyword Research & Competitive Analysis
## Keyword Research Process
1. **Seed keywords** — Start with obvious terms for the topic
2. **Expand** — Find related terms, questions, long-tail variations
3. **Analyze** — Check search volume, difficulty, intent
4. **Prioritize** — Balance volume, difficulty, and business value
5. **Map** — Assign keywords to pages (one primary per page)
## Keyword Metrics
| Metric | What It Means | Action |
|--------|---------------|--------|
| Search volume | Monthly searches | Higher = more potential traffic |
| Keyword difficulty | Competition level | Higher = harder to rank |
| CPC | Ad cost per click | Higher = more commercial value |
| SERP features | Snippets, maps, images | Affects CTR potential |
## Keyword Types
**By intent:**
- Informational: "how to", "what is", "guide"
- Navigational: brand names, specific sites
- Transactional: "buy", "price", "discount", "best"
- Commercial: "vs", "review", "alternative"
**By length:**
- Head terms: 1-2 words, high volume, high competition
- Long-tail: 3+ words, lower volume, easier to rank, higher conversion
## Competitive Analysis
### What to Analyze
1. **Who ranks** — Top 10 for target keywords
2. **What they cover** — Topics, subtopics, depth
3. **How they structure** — Format, headings, media
4. **Where they link** — Internal and external links
5. **Why they rank** — Authority, content quality, technical
### Gap Analysis
Find opportunities competitors miss:
- Topics they don't cover
- Questions they don't answer
- Formats they don't use (video, tools, calculators)
- Depth they lack
### Content Comparison
| Factor | Competitor A | Competitor B | Your Content |
|--------|--------------|--------------|--------------|
| Word count | | | |
| Subtopics covered | | | |
| Media (images, video) | | | |
| Freshness | | | |
| E-E-A-T signals | | | |
## Keyword Mapping
One primary keyword per page. Related keywords support the primary.
```
/page-url/
├── Primary: main target keyword
├── Secondary: related keyword 1, related keyword 2
└── LSI: semantically related terms
```
## Tracking
Track rankings over time:
- Weekly for competitive terms
- Monthly for long-tail
- Note algorithm updates that affect rankings
## Tools
Free options:
- Google Search Console — actual queries driving traffic
- Google Trends — relative search interest
- Google autocomplete — real user queries
- People Also Ask — question-based keywords
- Related searches — bottom of SERP
The skill does keyword research without external tools by analyzing SERPs and using public Google data.

48
skills/seo/links.md Normal file
View File

@@ -0,0 +1,48 @@
# Links & Authority
## Internal Linking
- Every page should be reachable within 3 clicks from homepage
- Use descriptive anchor text — "SEO guide" not "click here"
- Link from high-authority pages to important pages — passes PageRank
- Contextual links (in content) more valuable than nav/footer links
- Fix broken internal links — 404s waste crawl budget
- Don't overdo it — 100+ links per page dilutes value
## Anchor Text
- Exact match: "SEO tips" linking to SEO tips page — powerful but risky if overdone
- Partial match: "learn about SEO strategies" — safer, still relevant
- Branded: "Google's guide" — natural, builds brand
- Naked URL: "https://example.com" — looks natural in some contexts
- Generic: "click here", "read more" — wastes link opportunity
A mix of anchor text types looks natural. All exact match = penalty risk.
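One way to sanity-check a profile is to compute the anchor-type distribution and flag an outsized exact-match share (the 30% cutoff here is an illustrative assumption, not a Google-published limit):

```python
from collections import Counter

def anchor_mix(anchors, exact_match_cutoff=0.30):
    """anchors: list of (text, type) pairs; type in
    {'exact', 'partial', 'branded', 'naked', 'generic'}."""
    counts = Counter(kind for _, kind in anchors)
    total = sum(counts.values()) or 1
    share = {kind: n / total for kind, n in counts.items()}
    risky = share.get("exact", 0) > exact_match_cutoff
    return share, risky
```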
## Backlinks
- Quality > quantity — one link from NYT > 100 from random blogs
- Relevance matters — link from SEO blog > link from cooking blog (for SEO site)
- Domain authority (DA/DR) — higher = more valuable, but not everything
- Follow vs nofollow — follow passes PageRank, nofollow doesn't (but still has value)
- Link placement — in-content links > sidebar > footer
## Link Building (White Hat)
- Create linkable assets — tools, research, original data, infographics
- Guest posting — write for relevant blogs, include contextual link
- HARO (Help a Reporter Out) — journalist queries, get press links
- Broken link building — find broken links, offer your content as replacement
- Resource page outreach — get listed on "best resources" pages
- Digital PR — newsworthy content that journalists want to cover
## Link Penalties
- Buying links — Google detects payment patterns, manual actions
- Link exchanges — "I'll link you if you link me" at scale
- PBNs (Private Blog Networks) — footprints get detected
- Excessive reciprocal links — some OK, pattern-based penalized
- Low-quality directories — mass directory submission is outdated
- Comment/forum spam — nofollow anyway, plus looks desperate
## Disavow
- Use only when you can't remove bad links manually
- Upload disavow file in Search Console
- Disavow at domain level for spammy sites: `domain:spamsite.com`
- Don't disavow links you didn't build — Google ignores most bad links automatically
- Takes weeks to months to see effect

43
skills/seo/local.md Normal file
View File

@@ -0,0 +1,43 @@
# Local SEO
## Google Business Profile
- Claim and verify your listing — unverified profiles don't rank
- Complete ALL fields: hours, services, attributes, products
- Primary category: choose the most specific match (not generic "Business")
- Photos: businesses with photos get 42% more direction requests
- Posts: weekly updates show activity, appear in Knowledge Panel
- Q&A: seed with common questions, monitor for spam
## NAP Consistency
- NAP = Name, Address, Phone — must be IDENTICAL everywhere
- Format matters: "Street" vs "St." is a mismatch to Google
- Check major directories: Yelp, Yellow Pages, Facebook, Apple Maps, Bing Places
- Use local phone number — toll-free looks non-local
- Update NAP when moving — inconsistency tanks rankings for months
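Because "Street" vs "St." counts as a mismatch, comparing NAP strings needs normalization first. A minimal sketch (the abbreviation table is a small illustrative subset):

```python
ABBREVIATIONS = {"street": "st", "avenue": "ave", "suite": "ste", "road": "rd"}

def normalize_nap(value: str) -> str:
    """Lowercase, strip punctuation, collapse common abbreviations."""
    words = value.lower().replace(",", " ").replace(".", " ").split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

# "123 Main Street, Ste. 4" and "123 main st suite 4" now compare equal
```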
## Local Keywords
- "[Service] + [City]" pages for each location served
- "near me" queries: proximity + relevance + prominence
- Neighborhood/district names for dense metros
- Location in title tag: "Plumber in Austin, TX | Company Name"
- Service area pages: one per city, unique content each
## Reviews
- Quantity + recency + rating: all three matter for local pack
- Respond to ALL reviews — especially negative, shows engagement
- Never buy fake reviews — Google detects patterns, leads to suspension
- Review velocity: steady flow better than sudden burst
- Ask at point of transaction — "leave a review" links in receipts/emails
## Citations
- Structured citations: business directories with NAP
- Unstructured citations: mentions in articles, blogs, news
- Industry-specific directories: lawyers on Avvo, restaurants on TripAdvisor
- Chamber of commerce and local business associations
- Data aggregators (Factual, Localeze) push to multiple directories
## Local Content
- Location pages with unique content — not just NAP + map
- Local news/events coverage — shows community involvement
- Case studies with local clients — builds local relevance signals
- Embed Google Maps — helps Google verify location

View File

@@ -0,0 +1,84 @@
# Memory Template — SEO
## ~/seo/memory.md
```markdown
# SEO Memory
## Status
- Last audit: [date]
- Active sites: [count]
- Keywords tracked: [count]
## Sites
### [site-name]
- Domain: example.com
- Type: e-commerce / blog / saas / local
- Last audit: [date]
- Priority issues: [list]
- Notes: [context]
## Keyword Tracking
| Keyword | Site | Current | Target | Trend | Last Checked |
|---------|------|---------|--------|-------|--------------|
| [keyword] | [site] | [rank] | [goal] | ↑/↓/→ | [date] |
## Audit History
### [date] — [site]
- Issues found: [count]
- Critical: [list]
- Fixed: [list]
- Pending: [list]
## Content Pipeline
| Title | Target Keyword | Status | Published |
|-------|----------------|--------|-----------|
| [title] | [keyword] | draft/review/live | [date/url] |
## Backlink Opportunities
| Site | DA | Status | Notes |
|------|-----|--------|-------|
| [domain] | [score] | contacted/pending/secured | [context] |
## Notes
[Learnings, patterns, site-specific context]
```
## Audit Report Template
Save audit reports to `~/seo/audits/[site]-[date].md`:
```markdown
# SEO Audit — [site] — [date]
## Summary
- Overall health: [score]/100
- Critical issues: [count]
- Warnings: [count]
- Passed: [count]
## Critical Issues
1. [Issue] — [impact] — [fix]
## Warnings
1. [Issue] — [recommendation]
## Recommendations
1. [Priority action]
2. [Next action]
## Technical Details
- Core Web Vitals: LCP [x]s, INP [x]ms, CLS [x]
- Mobile: [pass/fail]
- Indexing: [x] pages indexed
- Crawl errors: [count]
## Next Steps
1. [Action item with deadline]
```

skills/seo/on-page.md Normal file

@@ -0,0 +1,43 @@
# On-Page SEO Traps
## Title Tag
- 50-60 chars MAX — Google truncates at ~60 chars (really ~600px of pixel width), shows "..." which hurts CTR
- Primary keyword in first 30 chars — attention drops after that
- NEVER repeat same keyword twice in title — looks spammy
- Brand name at END, not start — unless brand is the search term
- Pipe `|` or dash `-` as separator — avoid colons, which can read like error messages
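As a sanity check, the title rules above can be linted programmatically. A minimal sketch (the 60-char cutoff approximates Google's real pixel-width limit, and the function name is illustrative):

```python
def lint_title(title: str, keyword: str) -> list[str]:
    """Flag title-tag issues per the rules above; empty list = pass."""
    issues = []
    if len(title) > 60:
        issues.append(f"too long ({len(title)} chars); aim for 50-60")
    pos = title.lower().find(keyword.lower())
    if pos == -1:
        issues.append("primary keyword missing")
    elif pos > 30:
        issues.append(f"keyword starts at char {pos}; move into first 30")
    if title.lower().count(keyword.lower()) > 1:
        issues.append("keyword repeated; looks spammy")
    return issues

print(lint_title("Email Marketing Guide: Email Marketing Tips for Beginners and Pros Alike",
                 "email marketing"))
```

Run it over a sitemap's worth of titles to surface pages worth retitling.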
## Meta Description
- 150-160 chars — longer gets truncated, shorter wastes space
- Google often rewrites it — but well-written ones are used ~70% of the time
- Include primary keyword — gets bolded in search results
- Call-to-action works: "Learn how...", "Discover...", "Get started..."
- NEVER duplicate across pages — Google may show "no description available"
## Headers
- ONE H1 per page — multiple H1s confuse hierarchy
- H1 ≠ Title tag — related but different, doubles keyword opportunities
- H2s for main sections, H3s for subsections — hierarchy matters for featured snippets
- Keyword variations in H2s — don't repeat exact primary keyword
- NEVER use H1 for logo or nav — semantic misuse
## Keywords
- Primary keyword in: title, H1, first 100 words, URL, meta description
- Density under 3% — modern Google detects stuffing patterns
- LSI keywords (related terms) — signals topic depth, not just single keyword
- Long-tail in H2/H3 — captures "how to..." and question queries
## Images
- Alt text describes image — "Golden retriever playing fetch" not "dog"
- Keyword in alt only if natural — forced keywords get penalized
- File names with hyphens: `email-marketing-tips.jpg` not `IMG_4532.jpg`
- Compress under 100KB — larger images kill page speed
- WebP format: 25-35% smaller than JPEG at same quality
- Lazy loading for below-fold images — improves LCP
## URLs
- Short, descriptive, lowercase: `/seo-guide/` not `/page?id=123`
- Hyphens between words: `/email-marketing/` not `/emailmarketing/`
- No dates unless content is time-sensitive — allows evergreen updates
- No stop words unless needed for clarity — "how-to-seo" not "how-to-do-seo"
- NEVER change URLs without 301 redirect — breaks links, loses authority
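A slug helper following these rules might look like this (a sketch; the stop-word list is a tiny illustrative subset, not an exhaustive one):

```python
import re

STOP_WORDS = {"a", "an", "the", "of", "to", "in", "for"}  # illustrative subset

def slugify(title: str, keep_stop_words: bool = False) -> str:
    """Lowercase, hyphen-separated slug per the URL rules above."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    if not keep_stop_words:
        words = [w for w in words if w not in STOP_WORDS]
    return "-".join(words)

print(slugify("The Complete Guide to Email Marketing"))  # complete-guide-email-marketing
```

Pass `keep_stop_words=True` when dropping them would hurt clarity.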

skills/seo/schema.md Normal file

@@ -0,0 +1,81 @@
# Structured Data (Schema Markup)
## Basics
- JSON-LD format preferred — script tag in head, cleanest implementation
- Test with Rich Results Test — not all schema triggers rich results
- Test with Schema Validator (schema.org) — catches syntax errors
- Required vs recommended properties — missing required = invalid
- One schema type per thing — don't mark same content as Article AND BlogPosting
## Common Schema Types
### Article / BlogPosting
```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "...",
  "author": {"@type": "Person", "name": "..."},
  "datePublished": "2025-01-15",
  "dateModified": "2025-01-20",
  "image": "..."
}
```
- `datePublished` required — omitting it loses rich result eligibility
- `dateModified` shows in search when different from published
- `image` recommended for better visual in search results
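Full JSON-LD also needs a top-level `@context`. A sketch that emits the complete script tag and fails fast on the missing required field (the function name is illustrative):

```python
import json

def article_jsonld(headline, author, date_published, date_modified=None, image=None):
    """Build an Article JSON-LD script tag; datePublished is required."""
    if not date_published:
        raise ValueError("datePublished is required for rich result eligibility")
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }
    if date_modified:
        data["dateModified"] = date_modified
    if image:
        data["image"] = image
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

print(article_jsonld("SEO Basics", "Jane Doe", "2025-01-15",
                     image="https://example.com/cover.jpg"))
```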
### LocalBusiness
```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "...",
  "address": {"@type": "PostalAddress", ...},
  "telephone": "...",
  "openingHoursSpecification": [...]
}
```
- Use specific subtype: `Restaurant`, `Dentist`, `LegalService`
- `geo` coordinates help Google verify location
- `priceRange` shows in Knowledge Panel
### FAQ
```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {"@type": "Question", "name": "...", "acceptedAnswer": {...}}
  ]
}
```
- FAQ schema can show expandable Q&A in search results — big CTR boost when shown (since 2023, Google limits FAQ rich results to authoritative government and health sites)
- Content must be visible on page — hidden FAQ = spam
- Max ~10 questions typically shown
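A sketch for generating FAQPage markup from question/answer pairs that are actually visible on the page:

```python
import json

def faq_jsonld(pairs):
    """Build FAQPage JSON-LD from (question, answer) pairs shown on the page."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs[:10]  # ~10 questions typically shown
        ],
    }, indent=2)
```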
### Product
```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "...",
  "offers": {"@type": "Offer", "price": "...", "priceCurrency": "USD"},
  "aggregateRating": {...}
}
```
- `offers` required for price in search results
- `aggregateRating` shows stars — needs actual review data
- `availability` (InStock, OutOfStock) shows availability badge
### HowTo
- Step-by-step instructions with images
- Can show as rich result with step previews
- Each step needs `text`, optionally `image`
### Review
- Individual review with `reviewRating`
- Self-serving reviews (reviewing own business) = spam
## Traps
- Marking invisible content — schema must match visible page content
- Fake reviews/ratings — Google detects and penalizes
- Schema for content that doesn't exist — "Product" on info page
- Mixing incompatible types — Article + Product on same page
- Not updating `dateModified` when content changes

skills/seo/setup.md Normal file

@@ -0,0 +1,43 @@
# Setup — SEO
## First Use
This skill works immediately with no setup required.
## Workspace Integration
For persistent tracking across sessions, create workspace:
```
mkdir -p ~/seo/audits ~/seo/content
```
Register in your main memory file:
```markdown
## SEO Workspace
Location: ~/seo/
- Audit reports: ~/seo/audits/
- Content drafts: ~/seo/content/
```
## Optional Configuration
Track specific sites in `~/seo/memory.md`:
```markdown
# SEO Memory
## Sites
- example.com — Main project, e-commerce
- blog.example.com — Content hub
## Keyword Tracking
| Keyword | Current Rank | Target | Last Checked |
|---------|--------------|--------|--------------|
| best widgets | 12 | 3 | 2025-01-15 |
```
## No Setup Required
The skill provides complete SEO guidance without any configuration. Workspace setup is optional for users who want to track multiple sites or audit history.

skills/seo/technical.md Normal file

@@ -0,0 +1,43 @@
# Technical SEO
## Core Web Vitals
- **LCP** (Largest Contentful Paint): < 2.5s; load time of the largest visible element
- **INP** (Interaction to Next Paint): < 200ms; responsiveness to user interaction
- **CLS** (Cumulative Layout Shift): < 0.1; visual stability, no jumping content
- Test with PageSpeed Insights; field data from real users matters more than lab data
- Poor CWV = ranking demotion in competitive queries
## Crawlability
- robots.txt: `Disallow: /admin/` blocks crawlers; verify with the robots.txt report in GSC
- NEVER block CSS/JS in robots.txt; Google needs them to render pages
- Crawl budget: large sites (>10K pages) must prioritize important pages
- Orphan pages (no internal links) won't get crawled regularly
- XML sitemap: max 50K URLs or 50MB per file, link in robots.txt
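Disallow rules can be checked locally with Python's stdlib parser before deploying a robots.txt change:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# Parse rules inline; against a live site use rp.set_url(".../robots.txt") then rp.read()
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
])

print(rp.can_fetch("*", "https://example.com/admin/login"))  # False: blocked
print(rp.can_fetch("*", "https://example.com/blog/post"))    # True: crawlable
```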
## Indexing
- `noindex` meta tag: prevents indexing, but the page is still crawled, so it wastes crawl budget
- `canonical` URL: self-referencing on all pages, cross-domain for syndicated content
- Parameter URLs (`?sort=price`) need canonical to main version
- Pagination: Google no longer uses rel="next"/"prev" as an indexing signal; give each page a self-canonical rather than pointing everything at page 1
- Check indexing in GSC: URL Inspection tool shows render and index status
## Mobile
- Mobile-first indexing: Google indexes mobile version, desktop secondary
- Viewport meta tag required: `<meta name="viewport" content="width=device-width, initial-scale=1">`
- Touch targets minimum 48x48px — failing this hurts mobile usability score
- No intrusive interstitials — popups that block content get demoted
- Test mobile usability in Lighthouse (Google retired the standalone Mobile-Friendly Test); failures hurt mobile rankings
## HTTPS
- Required for rankings — HTTP sites show "Not Secure" warning
- Mixed content (HTTP resources on HTTPS page) breaks padlock
- HSTS header: tells browsers to always use HTTPS
- After migration: 301 redirect all HTTP to HTTPS, update canonical URLs
## Speed
- TTFB < 200ms; server response time depends on hosting
- Render-blocking CSS: inline critical CSS, defer rest
- JavaScript: async/defer attributes, avoid blocking main thread
- Images: lazy load, responsive srcset, modern formats (WebP/AVIF)
- Fonts: font-display: swap prevents invisible text during load
- CDN for static assets reduces latency globally


@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "serp-analysis",
"installedVersion": "3.0.0",
"installedAt": 1774180329871
}


@@ -0,0 +1,248 @@
---
name: serp-analysis
version: "3.0.0"
description: 'This skill should be used when the user asks to "analyze search results", "SERP analysis", "what ranks for", "SERP features", "why does this page rank", "what is on page one for this query", "who ranks for this keyword", or "what does Google show for". Analyzes search engine results pages (SERPs) to understand ranking factors, SERP features, user intent patterns, and AI overview triggers. Essential for understanding what it takes to rank. For tracking rankings over time, see rank-tracker. For keyword discovery, see keyword-research.'
license: Apache-2.0
compatibility: "Claude Code ≥1.0, skills.sh marketplace, ClawHub marketplace, Vercel Labs skills ecosystem. No system packages required. Optional: MCP network access for SEO tool integrations."
allowed-tools: WebFetch
metadata:
openclaw:
requires:
env: []
bins: []
primaryEnv: AHREFS_API_KEY
author: aaron-he-zhu
version: "3.0.0"
geo-relevance: "high"
tags:
- seo
- geo
- serp
- search results
- ranking factors
- serp features
- ai overviews
- featured snippets
- search intent
- serp-features
- featured-snippet
- google-ai-overview
- ai-overview
- people-also-ask
- knowledge-panel
- serp-composition
- position-zero
- serp-intent
triggers:
- "analyze search results"
- "SERP analysis"
- "what ranks for"
- "SERP features"
- "why does this page rank"
- "featured snippets"
- "AI overviews"
- "what's on page one for this query"
- "who ranks for this keyword"
- "what does Google show for"
---
# SERP Analysis
> **[SEO & GEO Skills Library](https://skills.sh/aaron-he-zhu/seo-geo-claude-skills)** · 20 skills for SEO + GEO · Install all: `npx skills add aaron-he-zhu/seo-geo-claude-skills`
<details>
<summary>Browse all 20 skills</summary>
**Research** · [keyword-research](../keyword-research/) · [competitor-analysis](../competitor-analysis/) · **serp-analysis** · [content-gap-analysis](../content-gap-analysis/)
**Build** · [seo-content-writer](../../build/seo-content-writer/) · [geo-content-optimizer](../../build/geo-content-optimizer/) · [meta-tags-optimizer](../../build/meta-tags-optimizer/) · [schema-markup-generator](../../build/schema-markup-generator/)
**Optimize** · [on-page-seo-auditor](../../optimize/on-page-seo-auditor/) · [technical-seo-checker](../../optimize/technical-seo-checker/) · [internal-linking-optimizer](../../optimize/internal-linking-optimizer/) · [content-refresher](../../optimize/content-refresher/)
**Monitor** · [rank-tracker](../../monitor/rank-tracker/) · [backlink-analyzer](../../monitor/backlink-analyzer/) · [performance-reporter](../../monitor/performance-reporter/) · [alert-manager](../../monitor/alert-manager/)
**Cross-cutting** · [content-quality-auditor](../../cross-cutting/content-quality-auditor/) · [domain-authority-auditor](../../cross-cutting/domain-authority-auditor/) · [entity-optimizer](../../cross-cutting/entity-optimizer/) · [memory-management](../../cross-cutting/memory-management/)
</details>
This skill analyzes Search Engine Results Pages to reveal what's working for ranking content, which SERP features appear, and what triggers AI-generated answers. Understand the battlefield before creating content.
## When to Use This Skill
- Before creating content for a target keyword
- Understanding why certain pages rank #1
- Identifying SERP feature opportunities (featured snippets, PAA)
- Analyzing AI Overview/SGE patterns
- Evaluating keyword difficulty more accurately
- Planning content format based on what ranks
- Identifying ranking factors for specific queries
## What This Skill Does
1. **SERP Composition Analysis**: Maps what appears on the results page
2. **Ranking Factor Identification**: Reveals why top results rank
3. **SERP Feature Mapping**: Identifies featured snippets, PAA, knowledge panels
4. **AI Overview Analysis**: Examines when and how AI answers appear
5. **Intent Signal Detection**: Confirms user intent from SERP composition
6. **Content Format Recommendations**: Suggests optimal format based on SERP
7. **Difficulty Assessment**: Evaluates realistic ranking potential
## How to Use
### Basic SERP Analysis
```
Analyze the SERP for [keyword]
```
```
What does it take to rank for [keyword]?
```
### Feature-Specific Analysis
```
Analyze featured snippet opportunities for [keyword list]
```
```
Which of these keywords trigger AI Overviews? [keyword list]
```
### Competitive SERP Analysis
```
Why does [URL] rank #1 for [keyword]?
```
## Data Sources
> See [CONNECTORS.md](../../CONNECTORS.md) for tool category placeholders.
**With ~~SEO tool + ~~search console + ~~AI monitor connected:**
Automatically fetch SERP snapshots for target keywords, extract ranking page metrics (domain authority, backlinks, content length), pull SERP feature data, and check AI Overview presence using ~~AI monitor. Historical SERP change data and mobile vs. desktop variations can be retrieved automatically.
**With manual data only:**
Ask the user to provide:
1. Target keyword(s) to analyze
2. SERP screenshots or detailed descriptions of search results
3. URLs of top 10 ranking pages
4. Search location and device type (mobile/desktop)
5. Any observations about SERP features (featured snippets, PAA, AI Overviews)
Proceed with the full analysis using provided data. Note in the output which metrics are from automated collection vs. user-provided data.
## Instructions
When a user requests SERP analysis:
1. **Understand the Query**
Clarify if needed:
- Target keyword(s) to analyze
- Search location/language
- Device type (mobile/desktop)
- Specific questions about the SERP
2. **Map SERP Composition**
Document all elements appearing on the results page: AI Overview, ads, featured snippet, organic results, PAA, knowledge panel, image pack, video results, local pack, shopping results, news results, sitelinks, and related searches.
3. **Analyze Top Ranking Pages**
For each of the top 10 results, document: URL, domain, domain authority, content type, word count, publish/update dates, on-page factors (title, meta description, H1, URL structure), content structure (headings, media, tables, FAQ), estimated metrics (backlinks, referring domains), and why it ranks.
4. **Identify Ranking Patterns**
Analyze common characteristics across top 5 results: word count, domain authority, backlinks, content freshness, HTTPS, mobile optimization. Document content format distribution, domain type distribution, and key success factors.
5. **Analyze SERP Features**
For each present SERP feature: analyze the current holder, content format, and strategy to win. Cover Featured Snippet (type, content, winning strategy), PAA (questions, current answers, optimization approach), and AI Overview (sources cited, content patterns, citation strategy).
6. **Determine Search Intent**
Confirm primary intent from SERP composition. Document evidence, intent breakdown percentages, and content format implications (format, tone, CTA).
7. **Calculate True Difficulty**
Score overall difficulty (1-100) based on: top 10 domain authority, page authority, backlinks required, content quality bar, and SERP stability. Provide realistic assessments for new, growing, and established sites, plus easier alternatives.
8. **Generate Recommendations**
Produce a summary with: Key Findings, Content Requirements to Rank (minimum requirements + differentiators), SERP Feature Strategy, Recommended Content Outline, and Next Steps.
> **Reference**: See [references/analysis-templates.md](./references/analysis-templates.md) for detailed templates for each step.
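The step-7 difficulty score reduces to a weighted sum. A sketch using the weights from the analysis templates (factor scores are assumed pre-normalized to a 1-100 scale by whatever data source you use):

```python
WEIGHTS = {
    "domain_authority": 0.25,
    "page_authority": 0.20,
    "backlinks_required": 0.20,
    "content_quality_bar": 0.20,
    "serp_stability": 0.15,
}

def difficulty_score(factors: dict[str, float]) -> float:
    """Weighted 1-100 difficulty; each factor already normalized to 1-100."""
    assert set(factors) == set(WEIGHTS), "score every factor"
    return round(sum(WEIGHTS[name] * score for name, score in factors.items()), 1)

print(difficulty_score({
    "domain_authority": 80,
    "page_authority": 70,
    "backlinks_required": 75,
    "content_quality_bar": 60,
    "serp_stability": 90,
}))  # 74.5
```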
## Validation Checkpoints
### Input Validation
- [ ] Target keyword(s) clearly specified
- [ ] Search location and device type confirmed
- [ ] SERP data is current (date confirmed)
- [ ] Top 10 ranking URLs identified or provided
### Output Validation
- [ ] Every recommendation cites specific data points (not generic advice)
- [ ] SERP composition mapped with all features documented
- [ ] Ranking factors identified from actual top 10 analysis (not assumptions)
- [ ] Content requirements based on observed patterns in current SERP
- [ ] Source of each data point clearly stated (~~SEO tool data, ~~AI monitor data, user-provided, or manual observation)
## Example
> **Reference**: See [references/example-report.md](./references/example-report.md) for a complete example analyzing the SERP for "how to start a podcast".
## Advanced Analysis
### Multi-Keyword SERP Comparison
```
Compare SERPs for [keyword 1], [keyword 2], [keyword 3]
```
### Historical SERP Changes
```
How has the SERP for [keyword] changed over time?
```
### Local SERP Variations
```
Compare SERP for [keyword] in [location 1] vs [location 2]
```
### Mobile vs Desktop SERP
```
Analyze mobile vs desktop SERP differences for [keyword]
```
## Tips for Success
1. **Always check SERP before writing** - Don't assume, verify
2. **Match content format to SERP** - If lists rank, write lists
3. **Identify SERP feature opportunities** - Lower competition than #1
4. **Note SERP volatility** - Stable SERPs are harder to break into
5. **Study the outliers** - Why does a weaker site rank? Opportunity!
6. **Consider AI Overview optimization** - Growing importance
## Reference Materials
- [Analysis Templates](./references/analysis-templates.md) — Detailed templates for each analysis step (SERP composition, top results, ranking patterns, features, intent, difficulty, recommendations)
- [SERP Feature Taxonomy](./references/serp-feature-taxonomy.md) — Complete taxonomy of SERP features with trigger conditions, AI overview framework, intent signals, and volatility assessment
- [Example Report](./references/example-report.md) — Complete example analyzing the SERP for "how to start a podcast"
## Related Skills
- [keyword-research](../keyword-research/) — Find keywords to analyze
- [competitor-analysis](../competitor-analysis/) — Deep dive on ranking competitors
- [on-page-seo-auditor](../../optimize/on-page-seo-auditor/) — Optimize based on findings
- [geo-content-optimizer](../../build/geo-content-optimizer/) — Optimize for AI citations
- [meta-tags-optimizer](../../build/meta-tags-optimizer/) — Optimize SERP appearance with meta tags
- [rank-tracker](../../monitor/rank-tracker/) — Track keyword position changes in SERPs
- [performance-reporter](../../monitor/performance-reporter/) — Track SERP visibility metrics over time


@@ -0,0 +1,6 @@
{
"ownerId": "kn73qjxwmbna25qq8q051epqt980sys5",
"slug": "serp-analysis",
"version": "3.0.0",
"publishedAt": 1772640709756
}


@@ -0,0 +1,297 @@
# SERP Analysis — Analysis Templates
Templates for each step of the SERP analysis workflow. Use these to structure your output.
## SERP Composition Template
```markdown
## SERP Analysis: "[keyword]"
**Search Details**
- Keyword: [keyword]
- Location: [location]
- Device: [mobile/desktop]
- Date: [date]
### SERP Layout Overview
```
┌─────────────────────────────────────────┐
│ [AI Overview / SGE] (if present) │
├─────────────────────────────────────────┤
│ [Ads] - [X] ads above fold │
├─────────────────────────────────────────┤
│ [Featured Snippet] (if present) │
├─────────────────────────────────────────┤
│ [Organic Result #1] │
│ [Organic Result #2] │
│ [People Also Ask] (if present) │
│ [Organic Result #3] │
│ ... │
├─────────────────────────────────────────┤
│ [Related Searches] │
└─────────────────────────────────────────┘
```
### SERP Features Present
| Feature | Present | Position | Opportunity |
|---------|---------|----------|-------------|
| AI Overview | Yes/No | Top | [analysis] |
| Featured Snippet | Yes/No | [pos] | [analysis] |
| People Also Ask | Yes/No | [pos] | [analysis] |
| Knowledge Panel | Yes/No | Right | [analysis] |
| Image Pack | Yes/No | [pos] | [analysis] |
| Video Results | Yes/No | [pos] | [analysis] |
| Local Pack | Yes/No | [pos] | [analysis] |
| Shopping Results | Yes/No | [pos] | [analysis] |
| News Results | Yes/No | [pos] | [analysis] |
| Sitelinks | Yes/No | [pos] | [analysis] |
```
## Top Results Analysis Template
```markdown
### Top 10 Organic Results Analysis
#### Position #1: [Title]
**URL**: [url]
**Domain**: [domain]
**Domain Authority**: [DA]
**Content Analysis**:
- Type: [Blog/Product/Guide/etc.]
- Word Count: [X] words
- Publish Date: [date]
- Last Updated: [date]
**On-Page Factors**:
- Title: [exact title]
- Title contains keyword: Yes/No
- Meta description: [description]
- H1: [heading]
- URL structure: [clean/keyword-rich/etc.]
**Content Structure**:
- Headings (H2s): [list key sections]
- Media: [X] images, [X] videos
- Tables/Lists: Yes/No
- FAQ section: Yes/No
**Estimated Metrics**:
- Page backlinks: [X]
- Referring domains: [X]
- Social shares: [X]
**Why It Ranks #1**:
1. [Factor 1]
2. [Factor 2]
3. [Factor 3]
[Repeat for positions #2-10]
```
## Ranking Patterns Template
```markdown
### Ranking Patterns Analysis
**Common Characteristics of Top 5 Results**:
| Factor | Avg/Common Value | Importance |
|--------|-----------------|------------|
| Word Count | [X] words | High/Med/Low |
| Domain Authority | [X] | High/Med/Low |
| Page Backlinks | [X] | High/Med/Low |
| Content Freshness | [timeframe] | High/Med/Low |
| HTTPS | [X]% | High/Med/Low |
| Mobile Optimized | [X]% | High/Med/Low |
**Content Format Distribution**:
- How-to guides: [X]/10
- Listicles: [X]/10
- In-depth articles: [X]/10
- Product pages: [X]/10
- Other: [X]/10
**Domain Type Distribution**:
- Brand/Company sites: [X]/10
- Media/News sites: [X]/10
- Niche blogs: [X]/10
- Aggregators: [X]/10
**Key Success Factors Identified**:
1. **[Factor 1]**: [Explanation + evidence]
2. **[Factor 2]**: [Explanation + evidence]
3. **[Factor 3]**: [Explanation + evidence]
```
## SERP Features Analysis Template
```markdown
### Featured Snippet Analysis
**Current Snippet Holder**: [URL]
**Snippet Type**: [Paragraph/List/Table/Video]
**Snippet Content**:
> [Exact text/description of snippet]
**How to Win This Snippet**:
1. [Strategy based on current snippet]
2. [Content format recommendation]
3. [Structure recommendation]
---
### People Also Ask (PAA) Analysis
**Questions Appearing**:
1. [Question 1] → Currently answered by: [URL]
2. [Question 2] → Currently answered by: [URL]
3. [Question 3] → Currently answered by: [URL]
4. [Question 4] → Currently answered by: [URL]
**PAA Optimization Strategy**:
- Include these questions as H2/H3 headings
- Provide direct, concise answers (40-60 words)
- Use FAQ schema markup
---
### AI Overview Analysis
**AI Overview Present**: Yes/No
**AI Overview Type**: [Summary/List/Comparison/etc.]
**Sources Cited in AI Overview**:
1. [Source 1] - [Why cited]
2. [Source 2] - [Why cited]
3. [Source 3] - [Why cited]
**AI Overview Content Patterns**:
- Pulls definitions from: [source type]
- Lists information as: [format]
- Cites statistics from: [source type]
**How to Get Cited in AI Overview**:
1. [Specific recommendation]
2. [Specific recommendation]
3. [Specific recommendation]
```
## Search Intent Template
```markdown
### Search Intent Analysis
**Primary Intent**: [Informational/Commercial/Transactional/Navigational]
**Evidence**:
- SERP features suggest: [analysis]
- Top results are: [content types]
- User likely wants: [description]
**Intent Breakdown**:
- Informational signals: [X]%
- Commercial signals: [X]%
- Transactional signals: [X]%
**Content Format Implication**:
Based on intent, your content should:
- Format: [recommendation]
- Tone: [recommendation]
- CTA: [recommendation]
```
## Difficulty Assessment Template
```markdown
### Difficulty Assessment
**Overall Difficulty Score**: [X]/100
**Difficulty Factors**:
| Factor | Score | Weight | Impact |
|--------|-------|--------|--------|
| Top 10 Domain Authority | [avg] | 25% | [High/Med/Low] |
| Top 10 Page Authority | [avg] | 20% | [High/Med/Low] |
| Backlinks Required | [est.] | 20% | [High/Med/Low] |
| Content Quality Bar | [rating] | 20% | [High/Med/Low] |
| SERP Stability | [rating] | 15% | [High/Med/Low] |
**Realistic Assessment**:
- **New site (DA <20)**: [Can rank?] [Timeframe]
- **Growing site (DA 20-40)**: [Can rank?] [Timeframe]
- **Established site (DA 40+)**: [Can rank?] [Timeframe]
**Easier Alternatives**:
If too difficult, consider:
- [Alternative keyword 1] - Difficulty: [X]
- [Alternative keyword 2] - Difficulty: [X]
```
## Recommendations Template
```markdown
## SERP Analysis Summary & Recommendations
### Key Findings
1. [Most important finding]
2. [Second important finding]
3. [Third important finding]
### Content Requirements to Rank
To compete for "[keyword]", you need:
**Minimum Requirements**:
- [ ] Word count: [X]+ words
- [ ] Backlinks: [X]+ referring domains
- [ ] Domain Authority: [X]+
- [ ] Content format: [type]
- [ ] Include: [specific elements]
**Differentiators to Win**:
- [ ] [Unique angle from analysis]
- [ ] [Missing element in current results]
- [ ] [SERP feature opportunity]
### SERP Feature Strategy
| Feature | Winnable? | Strategy |
|---------|-----------|----------|
| Featured Snippet | Yes/No | [strategy] |
| PAA | Yes/No | [strategy] |
| AI Overview | Yes/No | [strategy] |
### Recommended Content Outline
Based on SERP analysis:
```
Title: [Optimized title]
H1: [Main heading]
[Introduction - address intent immediately]
H2: [Section based on PAA/top results]
H2: [Section based on PAA/top results]
H2: [Section based on PAA/top results]
[FAQ section for PAA optimization]
[Conclusion with CTA]
```
### Next Steps
1. [Immediate action]
2. [Content creation action]
3. [Optimization action]
```


@@ -0,0 +1,85 @@
# SERP Analysis — Example Report
This is a complete example output showing the expected format for SERP analysis deliverables.
## Prompt
**User**: "Analyze the SERP for 'how to start a podcast'"
## Example Output
```markdown
# SERP Analysis: "how to start a podcast"
## SERP Overview
**Intent**: Informational (100%)
**Difficulty**: 72/100 (High)
**AI Overview**: Yes - comprehensive guide format
### SERP Features Present
| Feature | Present | Analysis |
|---------|---------|----------|
| AI Overview | ✅ | Lists steps, cites 3 sources |
| Featured Snippet | ✅ | Step-by-step list from Buzzsprout |
| People Also Ask | ✅ | 4 questions visible, expandable |
| Video Results | ✅ | 3 YouTube videos mid-page |
| Image Pack | ❌ | |
### Top 5 Results Analysis
| Pos | Domain | DA | Word Count | Format | Backlinks |
|-----|--------|-----|------------|--------|-----------|
| 1 | Buzzsprout | 71 | 8,500 | Ultimate Guide | 2,400 |
| 2 | Spotify | 93 | 3,200 | How-to Guide | 890 |
| 3 | Podcastinsights | 58 | 12,000 | Mega Guide | 1,800 |
| 4 | Transistor | 62 | 5,500 | Tutorial | 720 |
| 5 | HubSpot | 91 | 6,200 | Complete Guide | 1,100 |
### Why #1 Ranks First
Buzzsprout's guide succeeds because:
1. **Comprehensive** - Covers every step in detail
2. **Updated** - Current year in title, recent updates
3. **Structured** - Clear numbered steps (owns featured snippet)
4. **Authoritative** - Podcast hosting company (topical authority)
5. **Supporting content** - Links to detailed sub-guides
### Featured Snippet Opportunity
**Current format**: Ordered list (steps)
**Current holder**: Buzzsprout
**To win snippet**:
- Create cleaner, more scannable list format
- Keep steps to 8-10 items max
- Start each step with action verb
- Include "how to start a podcast" in H2
### AI Overview Analysis
**Sources cited**:
1. Buzzsprout - "Choose your podcast topic"
2. Spotify for Podcasters - "Record and edit"
3. Wikipedia - Definition of podcasting
**Pattern**: AI pulls step-by-step instructions from guides with clear structure
### Content Requirements
To rank on page 1:
- **Word count**: 5,000+ words minimum
- **Format**: Step-by-step ultimate guide
- **Backlinks**: 500+ from relevant domains
- **Updates**: Must show current year
- **Unique angle**: Equipment comparisons, cost breakdowns, or specific niche focus
### Recommended Strategy
Given high difficulty, consider:
1. Target long-tail: "how to start a podcast for free" (Difficulty: 45)
2. Target niche: "how to start a podcast about [topic]" (Difficulty: 30)
3. Create supporting video content for video carousel
4. Focus on PAA optimization for quick wins
```


@@ -0,0 +1,404 @@
# SERP Feature Taxonomy
A comprehensive reference covering every SERP feature type, trigger conditions, optimization techniques, monitoring approaches, and AI Overview patterns. Use this to plan which SERP features to target and how to win them.
## Overview
Modern Search Engine Results Pages are far more than ten blue links. Google displays 20+ distinct feature types that can dramatically affect click-through rates, visibility, and traffic. Understanding which features appear for your target keywords -- and how to optimize for them -- is essential to any SEO or GEO strategy.
---
## SERP Feature Categories
SERP features fall into five broad categories:
| Category | Features | Controlled By |
|----------|---------|--------------|
| **Knowledge Features** | Knowledge Panel, AI Overview, Featured Snippet | Content quality + structured data |
| **Engagement Features** | People Also Ask, Related Searches, Things to Know | Content relevance + question coverage |
| **Rich Results** | FAQ, How-To, Review Stars, Recipe, Event, Product | Schema markup + content format |
| **Media Features** | Image Pack, Video Carousel, Web Stories | Media optimization + hosting platform |
| **Commerce Features** | Shopping Results, Local Pack, Ads | Merchant feeds + Google Business Profile + ad spend |
---
## Complete Feature Reference
### 1. Featured Snippet
**What it is:** An extracted answer displayed at Position 0 (above organic results) in a box.
**Sub-types:**
| Sub-type | Format | Typical Trigger | Example Query |
|---------|--------|----------------|---------------|
| Paragraph | 40-60 word text block | "What is", "Why is", definitions | "what is domain authority" |
| Ordered List | Numbered steps | "How to", process queries | "how to submit a sitemap" |
| Unordered List | Bulleted list | "Types of", "best", collections | "types of schema markup" |
| Table | Data in rows/columns | Comparison, data, pricing | "HTTP status codes list" |
| Video | YouTube clip with timestamp | "How to" with visual component | "how to use Google Search Console" |
**Optimization Playbook:**
1. **Identify snippet-eligible keywords** -- Check if a snippet already exists for your target keyword
2. **Match the existing format** -- If current snippet is a list, create a list; if paragraph, write a concise paragraph
3. **Place the answer immediately after the triggering heading** -- Use H2/H3 with the target question, then answer directly below
4. **Keep paragraph snippets to 40-60 words** -- Concise, complete answers win
5. **Use proper HTML structure** -- Ordered lists use `<ol>`, tables use `<table>`, not just visual formatting
6. **Include the target query in the heading** -- The H2/H3 should closely match the search query
7. **Provide context after the snippet answer** -- Elaborate below to demonstrate depth
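The 40-60 word window from step 4 is easy to enforce in a content pipeline; a minimal sketch:

```python
def snippet_ready(answer: str, low: int = 40, high: int = 60) -> bool:
    """True if the answer's word count fits the paragraph-snippet window."""
    return low <= len(answer.split()) <= high

draft = " ".join(["word"] * 52)
print(snippet_ready(draft))                # True: 52 words
print(snippet_ready("Too short to win."))  # False: 4 words
```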
**Monitoring:**
- Track featured snippet ownership weekly for target keywords
- Monitor snippet format changes (Google may switch from paragraph to list)
- Watch for snippet loss after content updates
---
### 2. People Also Ask (PAA)
**What it is:** An expandable accordion of related questions with brief answers pulled from web pages.
**Trigger conditions:**
- Almost all informational queries
- Many commercial investigation queries
- Questions beget more questions -- clicking one PAA reveals additional questions
**Optimization Playbook:**
1. **Mine PAA questions for content ideas** -- Each PAA question is a validated search query
2. **Answer PAA questions within your content** -- Use the exact question as an H2 or H3
3. **Keep answers concise (40-60 words)** -- PAA answers are short excerpts
4. **Use FAQ schema markup** -- Increases eligibility for PAA and FAQ rich results
5. **Create dedicated FAQ sections** -- Group 5-10 related questions at the end of articles
6. **Target the cascade** -- When users click one PAA, new questions appear; cover those too
**PAA Mining Workflow:**
1. Search your target keyword
2. Note all visible PAA questions (4 initially)
3. Click each one to reveal 2-4 more
4. Repeat to collect 15-20 related questions
5. Group questions by subtopic
6. Create content addressing each cluster
---
### 3. AI Overview (formerly SGE)
**What it is:** An AI-generated summary at the top of the SERP that synthesizes information from multiple sources, with cited links.
**Trigger conditions:**
- Informational queries (highest trigger rate)
- Some commercial investigation queries
- Question-format queries
- Definitional and explanatory queries
- Lower trigger rate for navigational and transactional queries
**AI Overview Formats:**
| Format | Description | Trigger Pattern |
|--------|-----------|----------------|
| Summary paragraph | Synthesized text answer | Definitional and explanatory queries |
| Bulleted list | Key points extracted from sources | "Benefits of", "reasons for", multi-factor answers |
| Step-by-step | Ordered process | "How to" queries |
| Comparison | Side-by-side analysis | "X vs Y", "difference between" |
| Table | Structured data comparison | Data comparison, pricing, specifications |
**Optimization Playbook:**
1. **Write clear, citable sentences** -- AI systems extract well-formed statements of fact
2. **Front-load key information** -- Place the most important answer in the first 1-2 sentences of each section
3. **Use structured data** -- Schema markup helps AI systems understand your content
4. **Establish topical authority** -- AI overviews prefer citing authoritative sources on a topic
5. **Include original data and statistics** -- Unique data points are highly citable
6. **Create comparison content** -- AI loves to cite well-structured comparison tables
7. **Update content regularly** -- Recency signals influence AI source selection
8. **Use clear section headings** -- AI systems use headings to understand content structure
**Source Citation Patterns:**
| What Gets Cited | Why | How to Optimize |
|----------------|-----|----------------|
| Definitions | AI needs authoritative definitions | Write clear, complete definitions in first paragraph |
| Statistics | AI cites specific data points | Include original research, cite sources |
| Step-by-step processes | AI extracts structured sequences | Use numbered lists with clear step headers |
| Comparison data | AI synthesizes multi-source comparisons | Create comparison tables with clear labels |
| Expert quotes | AI values authoritative voices | Include expert attribution with credentials |
---
### 4. Knowledge Panel
**What it is:** A large information box (typically right sidebar on desktop) showing structured entity information from Google's Knowledge Graph.
**Trigger conditions:**
- Brand/entity queries
- Notable person queries
- Place/organization queries
- Product/service entities
**Optimization Playbook:**
1. **Establish a Google Knowledge Graph entity** -- Ensure your brand exists as a recognized entity
2. **Claim and verify your Knowledge Panel** -- Use the "Claim this knowledge panel" option
3. **Maintain consistent NAP** -- Name, Address, Phone across all web properties
4. **Build Wikipedia presence** -- Knowledge Panels pull heavily from Wikipedia/Wikidata
5. **Use Organization schema markup** -- Help Google understand your entity
6. **Maintain active social profiles** -- Connected social accounts appear in Knowledge Panel
7. **Get featured in authoritative sources** -- Mentions in news, industry publications, and databases
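The Organization markup from step 5 might look like this as JSON-LD; the names and URLs are placeholders, and `sameAs` links the entity to its profiles per step 6:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.linkedin.com/company/example-co"
  ]
}
```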
---
### 5. Image Pack
**What it is:** A row of image thumbnails within organic results, linking to Google Images.
**Trigger conditions:**
- Visual queries ("what does X look like")
- Product queries
- Design/inspiration queries
- Some informational queries with visual components
**Optimization Playbook:**
1. **Use descriptive file names** -- `seo-audit-checklist-template.png` not `IMG_4523.png`
2. **Write complete alt text** -- Describe the image content and context accurately
3. **Optimize image file size** -- Compress without losing quality (WebP format preferred)
4. **Use original images** -- Stock photos rarely rank; original screenshots, diagrams, and photos perform better
5. **Add image structured data** -- ImageObject schema when applicable
6. **Place images near relevant text** -- Context from surrounding content helps ranking
7. **Create image sitemaps** -- Help Google discover all your images
8. **Use responsive images** -- Serve appropriate sizes for different devices
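Steps 1, 2, 3, and 8 can combine in a single image tag; the file names and pixel widths below are illustrative:

```html
<img
  src="seo-audit-checklist-template.png"
  srcset="seo-audit-checklist-template-480w.webp 480w,
          seo-audit-checklist-template-1200w.webp 1200w"
  sizes="(max-width: 600px) 480px, 1200px"
  alt="SEO audit checklist template with sections for crawl, indexing, and content checks"
  loading="lazy">
```

The descriptive file name and complete alt text give Google Images its ranking signals, while `srcset`/`sizes` serve the compressed WebP variants responsively.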
---
### 6. Video Carousel / Video Results
**What it is:** A horizontal carousel of video thumbnails, typically from YouTube, or individual video results with thumbnails in organic listings.
**Trigger conditions:**
- "How to" queries
- Tutorial and instructional queries
- Entertainment queries
- Review queries
- Any query where video content provides superior user experience
**Optimization Playbook:**
1. **Host on YouTube** -- YouTube videos dominate video carousels
2. **Optimize video title** -- Include target keyword naturally
3. **Write detailed descriptions** -- First 2-3 lines appear in search; include keywords and summary
4. **Add chapters/timestamps** -- Key Moments markup helps Google surface specific sections
5. **Create transcripts** -- Closed captions and transcripts provide indexable text
6. **Use VideoObject schema** -- On your own site pages embedding video
7. **Design compelling thumbnails** -- Higher CTR from search results
8. **Target video-intent keywords** -- "How to" and tutorial queries have highest video potential
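Step 6's VideoObject markup, sketched with placeholder URLs and dates for a page embedding the video:

```json
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "How to Use Google Search Console",
  "description": "A walkthrough of the core Search Console reports.",
  "thumbnailUrl": "https://www.example.com/thumbnail.jpg",
  "uploadDate": "2024-05-01",
  "contentUrl": "https://www.example.com/video.mp4",
  "embedUrl": "https://www.youtube.com/embed/VIDEO_ID"
}
```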
---
### 7. Local Pack (Map Pack)
**What it is:** A map with 3 local business listings showing name, rating, address, and hours.
**Trigger conditions:**
- "[service] near me" queries
- "[service] in [location]" queries
- Queries with implicit local intent
- Service-based business queries
**Optimization Playbook:**
1. **Claim and optimize Google Business Profile** -- Complete every field
2. **Build consistent local citations** -- NAP consistency across directories
3. **Collect and respond to reviews** -- Volume and recency of reviews matter
4. **Add photos regularly** -- Active profiles rank higher
5. **Use local business schema** -- LocalBusiness structured data on website
6. **Create location-specific pages** -- If multiple locations, each needs its own page
7. **Build local backlinks** -- Local news, chambers of commerce, community sites
8. **Post Google Business updates** -- Regular posts signal activity
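Step 5's LocalBusiness markup, sketched for a fictional business (all details are placeholders; NAP values must match the Google Business Profile and citations exactly):

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Plumbing",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  },
  "telephone": "+1-555-555-0100",
  "openingHours": "Mo-Fr 08:00-17:00",
  "url": "https://www.example.com"
}
```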
---
### 8. Shopping Results
**What it is:** Product listing ads and free product listings with images, prices, and store names.
**Trigger conditions:**
- Product purchase queries
- Product name queries
- "Buy [product]" queries
- Price comparison queries
**Optimization Playbook:**
1. **Submit product feed to Google Merchant Center** -- Required for shopping results
2. **Optimize product titles** -- Include key attributes (brand, color, size, model)
3. **Use high-quality product images** -- White background, multiple angles
4. **Implement Product schema** -- Structured data for price, availability, reviews
5. **Keep pricing accurate** -- Mismatches between feed and landing page cause disapproval
6. **Collect product reviews** -- Aggregate ratings appear in shopping results
7. **Optimize landing pages** -- Fast, mobile-friendly, clear purchase path
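Step 4's Product markup, sketched with placeholder product data; the `offers` price and availability must match the Merchant Center feed (step 5) to avoid disapproval:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Acme Standing Desk, 48-inch, Walnut",
  "image": "https://www.example.com/desk.jpg",
  "offers": {
    "@type": "Offer",
    "price": "399.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  }
}
```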
---
### 9. Sitelinks
**What it is:** Additional links beneath a search result that point to specific pages within the same domain.
**Sub-types:**
| Sub-type | Appearance | Trigger |
|---------|-----------|---------|
| Full sitelinks | 4-6 two-column links with descriptions | Brand/navigational queries for authoritative sites |
| Inline sitelinks | 2-4 single-line links | Semi-navigational queries |
| Search box sitelinks | Site-specific search box | Large, well-structured sites |
**Optimization Playbook:**
1. **Build clear site architecture** -- Logical hierarchy with descriptive navigation
2. **Use descriptive page titles** -- Each page should have a unique, clear title
3. **Implement breadcrumb schema** -- Helps Google understand site structure
4. **Create a comprehensive sitemap** -- XML sitemap submitted to Search Console
5. **Build internal links** -- Strong internal linking reinforces page importance
6. **Use SearchAction schema** -- Enables the sitelinks search box
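Step 6's SearchAction markup lives on a WebSite entity on the homepage; a sketch with a placeholder search URL pattern:

```json
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://www.example.com/search?q={search_term_string}"
    },
    "query-input": "required name=search_term_string"
  }
}
```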
---
### 10. Rich Results (Schema-Dependent)
These features depend on specific structured data markup:
| Rich Result | Schema Required | Content Type | Visual Impact |
|------------|----------------|-------------|--------------|
| FAQ | FAQPage | FAQ sections on any page | Expandable Q&A below listing |
| How-To | HowTo | Step-by-step instructions | Steps with optional images |
| Review Stars | Review / AggregateRating | Product/service reviews | Star rating in snippet |
| Recipe | Recipe | Food/cooking content | Image, cook time, calories |
| Event | Event | Event listings | Date, location, price |
| Job Posting | JobPosting | Job listings | Salary, location, company |
| Course | Course | Educational content | Provider, description, rating |
| Breadcrumb | BreadcrumbList | Any page with hierarchy | Path display replacing URL |
**General Rich Result Optimization:**
1. **Validate with Rich Results Test** -- Test every page before publishing
2. **Follow Google's structured data guidelines** -- No cloaking or misleading markup
3. **Keep markup accurate** -- Schema content must match visible page content
4. **Monitor in Search Console** -- Check Enhancement reports for errors
5. **Don't over-mark** -- Only add schema for content types genuinely on the page
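As one concrete instance from the table above, BreadcrumbList markup for a hypothetical three-level page hierarchy (the last item omits `item` because it is the current page):

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Guides", "item": "https://www.example.com/guides/" },
    { "@type": "ListItem", "position": 3, "name": "SERP Features" }
  ]
}
```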
---
### 11. Related Searches / People Also Search For
**What it is:** Related query suggestions at the bottom of the SERP ("Related searches") or shown after a user clicks a result and returns ("People also search for").
**Value for SEO:**
- Keyword discovery -- reveals semantically related queries
- Content gap identification -- topics users explore after your target query
- Topic cluster planning -- natural subtopics to cover
**How to Use:**
1. Mine related searches for content ideas and internal linking opportunities
2. Cover related topics within your content to demonstrate comprehensiveness
3. Use related search terms as H2/H3 headings in long-form content
---
### 12. "Things to Know" / Key Moments
**What it is:** Carousel cards showing key aspects of a topic, or key moments within a video.
**Trigger conditions:**
- Broad informational queries
- Multi-faceted topics
- Video content with chapters
**Optimization:**
- Cover multiple aspects of a topic comprehensively
- Use clear section headings that match common subtopics
- For video: add chapter markers with timestamps
---
## SERP Feature Prioritization Matrix
Not all SERP features deserve equal attention. Prioritize based on your content type and goals:
| SERP Feature | Traffic Impact | Effort to Win | Best For |
|-------------|---------------|--------------|---------|
| Featured Snippet | Very High | Medium | Informational content sites |
| AI Overview citation | High (growing) | Medium-High | Authority/expertise sites |
| People Also Ask | Medium-High | Low-Medium | FAQ-rich content |
| Video Carousel | High | High (video production) | Tutorial/how-to content |
| Local Pack | Very High (local) | Medium | Local businesses |
| Rich Results (FAQ) | Medium | Low | Any content with Q&A |
| Rich Results (Review) | Medium-High | Low-Medium | Product/service reviews |
| Image Pack | Medium | Low-Medium | Visual content creators |
| Shopping Results | Very High (ecommerce) | Medium | Product sellers |
| Knowledge Panel | Medium (brand) | High (long-term) | Established brands |
| Sitelinks | Low (brand already ranking) | Low (structural) | Large, structured sites |
---
## SERP Feature Monitoring Framework
### What to Track
| Metric | Frequency | Tool Category | Action Threshold |
|--------|-----------|--------------|-----------------|
| Featured snippet ownership | Weekly | SEO rank tracker | Lost snippet → investigate within 48 hours |
| AI Overview citation rate | Weekly | AI visibility monitor | Citation loss → review content freshness |
| PAA presence for target keywords | Monthly | SEO rank tracker | New PAA questions → create content |
| SERP feature composition changes | Monthly | SEO rank tracker | New feature appearing → optimize for it |
| Rich result errors | Weekly | Search Console | Any error → fix immediately |
| Local Pack ranking | Weekly | Local rank tracker | Drop below position 3 → investigate |
### SERP Feature Change Analysis
When SERP features change for your target keywords, investigate:
| Change | Possible Causes | Recommended Action |
|--------|----------------|-------------------|
| Featured snippet disappeared | Google removed snippet for this query; competitor won it | Check if snippet still exists; create better snippet-targeted content |
| AI Overview appeared (new) | Google expanded AI Overviews to this query type | Optimize content for AI citation |
| AI Overview disappeared | Query type removed from AI Overview program | Refocus on traditional SERP features |
| Video carousel appeared | Google detected video intent for this query | Create video content for the keyword |
| Local Pack appeared | Google detected local intent shift | Consider local SEO if relevant |
| Shopping results appeared | Google detected commercial intent shift | Consider product markup or adjust content angle |
---
## SERP Feature Combination Patterns
Certain SERP feature combinations indicate specific opportunities:
| SERP Combination | What It Signals | Opportunity |
|-----------------|----------------|-------------|
| AI Overview + Featured Snippet | Google sees this as high-information query | Optimize for both -- structured content with clear answers |
| Video + PAA + Featured Snippet | Multi-format informational query | Create comprehensive guide with video and FAQ |
| Shopping + Ads + Reviews | Strong commercial intent | Product optimization, review content |
| Local Pack + Ads | Local commercial intent | Google Business Profile optimization |
| No features (just blue links) | Low-feature query (or very new topic) | Potential early-mover advantage for rich results |
| PAA only (no snippet) | Snippet opportunity not yet captured | Create snippet-optimized content |
---
## AI Overview vs. Traditional SERP Feature Strategy
The rise of AI Overviews changes how to prioritize SERP features:
| Scenario | Traditional Strategy | AI-Era Strategy |
|---------|---------------------|----------------|
| Informational query | Win featured snippet | Win AI Overview citation AND featured snippet |
| Comparison query | Create comparison content | Create structured comparison tables (AI prefers these) |
| Definition query | Write clear definition for snippet | Write authoritative, citable definition with evidence |
| How-to query | Create step-by-step list | Create steps with unique insights AI can synthesize |
| List query | Create comprehensive ranked list | Create list with original data/reasoning AI can cite |
### Key Difference
- **Traditional SERP features** reward **format optimization** (structure your content to match the feature)
- **AI Overviews** reward **authority and uniqueness** (be the source AI trusts for accurate, original information)
Optimizing for both requires content that is both structurally sound AND substantively authoritative.

View File

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "summarize",
"installedVersion": "1.0.0",
"installedAt": 1774004787524
}

49
skills/summarize/SKILL.md Normal file
View File

@@ -0,0 +1,49 @@
---
name: summarize
description: Summarize URLs or files with the summarize CLI (web, PDFs, images, audio, YouTube).
homepage: https://summarize.sh
metadata: {"clawdbot":{"emoji":"🧾","requires":{"bins":["summarize"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/summarize","bins":["summarize"],"label":"Install summarize (brew)"}]}}
---
# Summarize
Fast CLI to summarize URLs, local files, and YouTube links.
## Quick start
```bash
summarize "https://example.com" --model google/gemini-3-flash-preview
summarize "/path/to/file.pdf" --model google/gemini-3-flash-preview
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto
```
## Model + keys
Set the API key for your chosen provider:
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- xAI: `XAI_API_KEY`
- Google: `GEMINI_API_KEY` (aliases: `GOOGLE_GENERATIVE_AI_API_KEY`, `GOOGLE_API_KEY`)
Default model is `google/gemini-3-flash-preview` if none is set.
## Useful flags
- `--length short|medium|long|xl|xxl|<chars>`
- `--max-output-tokens <count>`
- `--extract-only` (URLs only)
- `--json` (machine readable)
- `--firecrawl auto|off|always` (fallback extraction)
- `--youtube auto` (Apify fallback if `APIFY_API_TOKEN` set)
## Config
Optional config file: `~/.summarize/config.json`
```json
{ "model": "openai/gpt-5.2" }
```
Optional services:
- `FIRECRAWL_API_KEY` for blocked sites
- `APIFY_API_TOKEN` for YouTube fallback

View File

@@ -0,0 +1,6 @@
{
"ownerId": "kn70pywhg0fyz996kpa8xj89s57yhv26",
"slug": "summarize",
"version": "1.0.0",
"publishedAt": 1767545383635
}

View File

@@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "zoho-mail-skill",
"installedVersion": "1.0.0",
"installedAt": 1774004946584
}

View File

@@ -0,0 +1,192 @@
---
name: zoho-mail
description: Read, search, and manage Zoho Mail via the Zoho Mail REST API.
homepage: https://www.zoho.com/mail/help/api/
metadata: {"clawdbot":{"emoji":"📧","requires":{"bins":["jq","curl"],"env":["ZOHO_CLIENT_ID","ZOHO_CLIENT_SECRET","ZOHO_REFRESH_TOKEN"]}}}
---
# Zoho Mail Skill
Read, search, and manage Zoho Mail directly from Clawdbot.
## Setup
1. Go to [Zoho API Console](https://api-console.zoho.com/) and create a **Self Client**
2. Note the **Client ID** and **Client Secret**
3. Generate a grant code with scopes: `ZohoMail.messages.READ,ZohoMail.folders.READ,ZohoMail.accounts.READ`
4. Exchange the grant code for a refresh token:
```bash
curl -s -X POST "https://accounts.zoho.com/oauth/v2/token" \
-d "code=YOUR_GRANT_CODE" \
-d "client_id=YOUR_CLIENT_ID" \
-d "client_secret=YOUR_CLIENT_SECRET" \
-d "grant_type=authorization_code" | jq
```
5. Set environment variables:
```bash
export ZOHO_CLIENT_ID="your-client-id"
export ZOHO_CLIENT_SECRET="your-client-secret"
export ZOHO_REFRESH_TOKEN="your-refresh-token"
```
## Authentication
Zoho uses OAuth 2.0. Access tokens expire after 1 hour, so always refresh before making API calls. The refresh token does not expire unless revoked.
### Get an access token
```bash
ZOHO_ACCESS_TOKEN=$(curl -s -X POST "https://accounts.zoho.com/oauth/v2/token" \
-d "refresh_token=$ZOHO_REFRESH_TOKEN" \
-d "client_id=$ZOHO_CLIENT_ID" \
-d "client_secret=$ZOHO_CLIENT_SECRET" \
-d "grant_type=refresh_token" | jq -r '.access_token')
```
**Important:** Run this command before every API session or when you get a 401 error. All commands below assume `$ZOHO_ACCESS_TOKEN` is set.
## Usage
All commands use curl with the `Zoho-oauthtoken` authorization header. Note: Zoho uses `Zoho-oauthtoken` prefix, **not** `Bearer`.
### Get account ID
```bash
curl -s "https://mail.zoho.com/api/accounts" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[0] | {accountId, mailboxAddress, firstName, lastName}'
```
Store the account ID for subsequent calls:
```bash
ZOHO_ACCOUNT_ID=$(curl -s "https://mail.zoho.com/api/accounts" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq -r '.data[0].accountId')
```
### List folders
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/folders" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {folderName, folderId, unreadCount, messageCount}'
```
### Get Inbox folder ID
```bash
INBOX_ID=$(curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/folders" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq -r '.data[] | select(.folderName=="Inbox") | .folderId')
```
### List inbox messages
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/messages/view?folderId=$INBOX_ID&limit=10&start=0" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {subject, sender, fromAddress, receivedTime, messageId, status2}'
```
### List messages in a specific folder
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/messages/view?folderId={folderId}&limit=10&start=0" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {subject, sender, receivedTime, messageId}'
```
### Read a specific email
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/messages/{messageId}" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data | {subject, sender, fromAddress, toAddress, ccAddress, receivedTime, content}'
```
### Get email body content only
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/messages/{messageId}/content" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq -r '.data.content'
```
### Search emails
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/messages/search?searchKey={query}&limit=10" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {subject, sender, receivedTime, messageId, summary}'
```
Search syntax supports:
- Simple text: `searchKey=invoice`
- From filter: `searchKey=from:john@example.com`
- Subject filter: `searchKey=subject:quarterly report`
- Date range: `searchKey=after:2024-01-01 before:2024-06-30`
- Combined: `searchKey=from:boss@company.com subject:meeting`
### List email threads
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/threads/view?folderId=$INBOX_ID&limit=10&start=0" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {subject, threadId, messageCount}'
```
### Get thread details (all messages in a thread)
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/threads/{threadId}" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq
```
### List labels/tags
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/tags" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {tagName, tagId, color}'
```
### List attachments on an email
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/folders/{folderId}/messages/{messageId}/attachments" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {attachmentName, attachmentId, fileSize}'
```
### Get account info
```bash
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data | {firstName, lastName, mailboxAddress, usedStorage, planStorage}'
```
## Regional Endpoints
If your Zoho account is not in the US region, replace `mail.zoho.com` and `accounts.zoho.com` with:
| Region | Mail API | Auth |
|-----------|---------------------------|-------------------------|
| US | mail.zoho.com | accounts.zoho.com |
| EU | mail.zoho.eu | accounts.zoho.eu |
| India | mail.zoho.in | accounts.zoho.in |
| Australia | mail.zoho.com.au | accounts.zoho.com.au |
| Japan | mail.zoho.jp | accounts.zoho.jp |
| Canada | mail.zohocloud.ca | accounts.zohocloud.ca |
## Notes
- Access tokens expire after **1 hour** — refresh before each session
- The refresh token does **not** expire unless revoked
- Rate limit: **30 API requests per minute** per account
- `receivedTime` is in milliseconds since epoch — convert with GNU `date -d @$((receivedTime/1000))` (on macOS/BSD use `date -r $((receivedTime/1000))`)
- `status2` field: `"0"` = unread, `"1"` = read
- Auth header uses `Zoho-oauthtoken` prefix, **not** `Bearer`
- Keep your Client Secret and Refresh Token secure!
## Examples
```bash
# Full session: refresh token, get account, list inbox
ZOHO_ACCESS_TOKEN=$(curl -s -X POST "https://accounts.zoho.com/oauth/v2/token" \
-d "refresh_token=$ZOHO_REFRESH_TOKEN" \
-d "client_id=$ZOHO_CLIENT_ID" \
-d "client_secret=$ZOHO_CLIENT_SECRET" \
-d "grant_type=refresh_token" | jq -r '.access_token')
ZOHO_ACCOUNT_ID=$(curl -s "https://mail.zoho.com/api/accounts" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq -r '.data[0].accountId')
INBOX_ID=$(curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/folders" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq -r '.data[] | select(.folderName=="Inbox") | .folderId')
# List latest 5 inbox emails
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/messages/view?folderId=$INBOX_ID&limit=5" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {subject, sender, receivedTime}'
# Search for emails from a specific person
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/messages/search?searchKey=from:monika@example.com&limit=5" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | {subject, sender, summary}'
# Get unread count per folder
curl -s "https://mail.zoho.com/api/accounts/$ZOHO_ACCOUNT_ID/folders" \
-H "Authorization: Zoho-oauthtoken $ZOHO_ACCESS_TOKEN" | jq '.data[] | select(.unreadCount > 0) | {folderName, unreadCount}'
```

View File

@@ -0,0 +1,6 @@
{
"ownerId": "kn74eq35nwt63arpn96cpg80as82zky0",
"slug": "zoho-mail-skill",
"version": "1.0.0",
"publishedAt": 1773572696187
}