> skill-comply
Visualize whether skills, rules, and agent definitions are actually followed — auto-generates scenarios at 3 prompt strictness levels, runs agents, classifies behavioral sequences, and reports compliance rates with full tool call timelines
```
curl "https://skillshub.wtf/affaan-m/everything-claude-code/skill-comply?format=md"
```

# skill-comply: Automated Compliance Measurement
Measures whether coding agents actually follow skills, rules, or agent definitions by:

- Auto-generating expected behavioral sequences (specs) from any .md file
- Auto-generating scenarios with decreasing prompt strictness (supportive → neutral → competing)
- Running `claude -p` and capturing tool call traces via stream-json
- Classifying tool calls against spec steps using an LLM (not regex)
- Checking temporal ordering deterministically
- Generating self-contained reports with spec, prompts, and timelines
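The deterministic ordering check can be sketched as a subsequence test: once each tool call is labeled with a spec step (or left unlabeled), the labels must contain the spec's steps in order. A minimal illustration, with hypothetical step names (not the tool's actual API):

```python
def in_spec_order(labels: list[str], spec_steps: list[str]) -> bool:
    """Deterministically check that the labeled tool calls hit
    every spec step in the expected order (subsequence test)."""
    it = iter(labels)
    return all(step in it for step in spec_steps)

# Example: spec requires search before edit before running tests.
spec = ["search", "edit", "run_tests"]
assert in_spec_order(["search", "read", "edit", "edit", "run_tests"], spec)
assert not in_spec_order(["edit", "search", "run_tests"], spec)
```

The iterator is consumed left to right, so extra, unrelated tool calls between spec steps are allowed; only out-of-order or missing steps fail.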
## Supported Targets

- Skills (`skills/*/SKILL.md`): workflow skills like search-first, TDD guides
- Rules (`rules/common/*.md`): mandatory rules like testing.md, security.md, git-workflow.md
- Agent definitions (`agents/*.md`): whether an agent gets invoked when expected (internal workflow verification not yet supported)
## When to Activate

- User runs `/skill-comply <path>`
- User asks "is this rule actually being followed?"
- After adding new rules/skills, to verify agent compliance
- Periodically as part of quality maintenance
## Usage

```bash
# Full run
uv run python -m scripts.run ~/.claude/rules/common/testing.md

# Dry run (no cost, spec + scenarios only)
uv run python -m scripts.run --dry-run ~/.claude/skills/search-first/SKILL.md

# Custom models
uv run python -m scripts.run --gen-model haiku --model sonnet <path>
```
## Key Concept: Prompt Independence
Measures whether a skill/rule is followed even when the prompt doesn't explicitly support it.
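As an illustration, the three strictness levels for a TDD rule might look like the following (prompt wording is hypothetical, not the tool's generated output):

```python
# Hypothetical scenario prompts for a TDD rule, from most to least
# supportive of the behavior being measured.
scenarios = {
    "supportive": "Add a slugify() helper. Write the failing test first, "
                  "then implement it (we follow TDD here).",
    "neutral":    "Add a slugify() helper to the utils module.",
    "competing":  "Quickly add a slugify() helper, skip the ceremony, "
                  "I just need it working in the next five minutes.",
}

# A rule is prompt-independent if compliance stays high even on the
# "competing" prompt, where nothing in the ask supports the rule.
for level, prompt in scenarios.items():
    print(f"{level}: {prompt}")
```

The interesting signal is the drop-off: high compliance on "supportive" but low on "competing" means the behavior comes from the prompt, not from the rule.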
## Report Contents
Reports are self-contained and include:
- Expected behavioral sequence (auto-generated spec)
- Scenario prompts (what was asked at each strictness level)
- Compliance scores per scenario
- Tool call timelines with LLM classification labels
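A per-scenario compliance score can be sketched as the fraction of spec steps the classified timeline actually hit. A minimal sketch, assuming hypothetical field names (`label` from the LLM classifier):

```python
def compliance_score(timeline: list[dict], spec_steps: list[str]) -> float:
    """Fraction of expected spec steps observed in the classified
    tool call timeline. Unlabeled calls are ignored."""
    observed = {call["label"] for call in timeline if call.get("label")}
    hit = sum(1 for step in spec_steps if step in observed)
    return hit / len(spec_steps) if spec_steps else 1.0

timeline = [
    {"tool": "Grep", "label": "search"},
    {"tool": "Edit", "label": "edit"},
    {"tool": "Bash", "label": None},  # unclassified call
]
score = compliance_score(timeline, ["search", "edit", "run_tests"])  # 2 of 3 steps hit
```

This coverage score is complementary to the deterministic ordering check: a run can hit every step yet still fail on ordering.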
## Advanced (optional)
For users familiar with hooks, reports also include hook promotion recommendations for steps with low compliance. This is informational — the main value is the compliance visibility itself.
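The promotion logic can be sketched as a simple threshold rule (the threshold and message wording are illustrative, not the tool's actual output):

```python
def recommend_hooks(step_scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Suggest promoting low-compliance spec steps to deterministic
    hooks, since prose rules alone aren't holding for them."""
    return [
        f"step '{step}' complied in {score:.0%} of runs; "
        f"consider enforcing it with a PreToolUse/PostToolUse hook"
        for step, score in step_scores.items()
        if score < threshold
    ]

for rec in recommend_hooks({"search": 0.9, "run_tests": 0.3}):
    print(rec)
```

The idea: steps that agents reliably follow can stay as prose, while steps that keep slipping are candidates for deterministic enforcement.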
> related_skills --same-repo
> santa-method
Multi-agent adversarial verification with convergence loop. Two independent review agents must both pass before output ships.
> safety-guard
Prevents destructive operations in three protection modes. Careful Mode intercepts destructive commands before execution and warns (watched patterns include `rm -rf` on /, ~, or the project root, and `git push --force`). Useful on production systems, in full-auto agent runs, when restricting edits to a directory, and during sensitive operations like migrations, deploys, and data changes.
> product-lens
Think before you build. A product diagnostic (YC-office-hours style, but automated) asks the hard questions: who is this for (a specific person, not "developers"), and what is the quantified pain? Use it before starting any feature, in weekly product reviews, when stuck choosing between features, before a launch, or when converting a vague idea into a spec.
> design-system
Generates and audits visual design systems. Generate mode scans your CSS/Tailwind/styled-components for existing patterns and produces a cohesive design system; audit mode checks an existing codebase for visual consistency. Use it when starting a project, before a redesign, when the UI looks "off" but you can't pinpoint why, or when reviewing PRs that touch styling.