> NeoLabHQ/context-engineering-kit
Hand-crafted Claude Code Skills focused on improving agent results quality. Compatible with OpenCode, Cursor, Antigravity, Gemini CLI, and others.
curl "https://skillshub.wtf/NeoLabHQ/context-engineering-kit/review-local-changes?format=md"> about
Hand-crafted Claude Code Skills focused on improving agent results quality. Compatible with OpenCode, Cursor, Antigravity, Gemini CLI, and others.
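For example, a skill can be fetched in markdown and saved where Claude Code discovers skills; the `.claude/skills/<name>/SKILL.md` layout below is an assumption, not something this page specifies:

```sh
# Minimal sketch: download one skill and place it in a local skills directory.
# The target path is an assumed Claude Code convention; adjust for your setup.
mkdir -p .claude/skills/review-local-changes
curl "https://skillshub.wtf/NeoLabHQ/context-engineering-kit/review-local-changes?format=md" \
  -o .claude/skills/review-local-changes/SKILL.md
```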
> skills (66)
> code-review:review-local-changes
Comprehensive review of local uncommitted changes using specialized agents with code improvement suggestions
> code-review:review-pr
Comprehensive pull request review using specialized agents
> customaize-agent:agent-evaluation
Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context engineering choices, or measuring improvement quality.
> customaize-agent:apply-anthropic-skill-best-practices
Comprehensive guide for skill development based on Anthropic's official best practices - use for complex skills requiring detailed structure
> customaize-agent:context-engineering
Understand the components, mechanics, and constraints of context in agent systems. Use when writing, editing, or optimizing commands, skills, or sub-agents prompts.
> customaize-agent:create-command
Interactive assistant for creating new Claude commands with proper structure, patterns, and MCP tool integration
> customaize-agent:create-hook
Create and configure git hooks with intelligent project analysis, suggestions, and automated testing
> customaize-agent:create-rule
Use when you find a gap or a repetitive issue produced by you or the implementation agent. Essentially, use it each time you say "You're absolutely right, I should have done it differently" -> create a rule for this issue so it does not appear again.
> customaize-agent:create-skill
Guide for creating effective skills. This command should be used when users want to create a new skill (or update an existing skill) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations. Use when creating new skills, editing existing skills, or verifying skills work before deployment - applies TDD to process documentation by testing with subagents before writing, iterating until bulletproof against rationalization
> customaize-agent:prompt-engineering
Use this skill when writing commands, hooks, or skills for the agent, or prompts for sub-agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.
> customaize-agent:test-prompt
Use when creating or editing any prompt (commands, hooks, skills, subagent instructions) to verify it produces desired behavior - applies RED-GREEN-REFACTOR cycle to prompt engineering using subagents for isolated testing
> customaize-agent:test-skill
Use when creating or editing skills, before deployment, to verify they work under pressure and resist rationalization - applies RED-GREEN-REFACTOR cycle to process documentation by running baseline without skill, writing to address failures, iterating to close loopholes
> customaize-agent:thought-based-reasoning
Use when tackling complex reasoning tasks requiring step-by-step logic, multi-step arithmetic, commonsense reasoning, symbolic manipulation, or problems where simple prompting fails - provides comprehensive guide to Chain-of-Thought and related prompting techniques (Zero-shot CoT, Self-Consistency, Tree of Thoughts, Least-to-Most, ReAct, PAL, Reflexion) with templates, decision matrices, and research-backed patterns
> ddd:setup-code-formating
Sets up code formatting rules and style guidelines in CLAUDE.md
> docs:update-docs
Update and maintain project documentation for local code changes using multi-agent workflow with tech-writer agents. Covers docs/, READMEs, JSDoc, and API documentation.
> docs:write-concisely
Apply writing rules to any documentation that humans will read. Makes your writing clearer, stronger, and more professional.
> fpf:actualize
Reconcile the project's FPF state with recent repository changes
> fpf:decay
Manage evidence freshness by identifying stale decisions and providing governance actions
> fpf:propose-hypotheses
Execute complete FPF cycle from hypothesis generation to decision
> fpf:query
Search the FPF knowledge base and display hypothesis details with assurance information
> fpf:reset
Reset the FPF reasoning cycle to start fresh
> fpf:status
Display the current state of the FPF knowledge base
> git:analyze-issue
Analyze a GitHub issue and create a detailed technical specification
> git:attach-review-to-pr
Add line-specific review comments to pull requests using GitHub CLI API
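As a sketch, a line-specific comment can be posted through the GitHub REST API via `gh api`; the repository, PR number, file path, and line are placeholders:

```sh
# Post a review comment on a specific line of a PR (all identifiers are placeholders)
gh api repos/OWNER/REPO/pulls/123/comments \
  -f body="Consider extracting this block into a helper." \
  -f commit_id="$(gh pr view 123 --json headRefOid -q .headRefOid)" \
  -f path="src/app.ts" \
  -F line=42 \
  -f side="RIGHT"
```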
> git:commit
Create well-formatted commits with conventional commit messages and emoji
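A commit created with this skill might look like the sketch below; the emoji-plus-conventional-commit format shown is illustrative, and the skill defines the exact style:

```sh
# Illustrative conventional commit with a leading emoji and an explanatory body
git commit -m "✨ feat(auth): add OAuth2 login flow" \
  -m "Implements the authorization-code flow and refresh-token handling."
```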
> git:compare-worktrees
Compare files and directories between git worktrees or worktree and current branch
> git:create-pr
Create pull requests using GitHub CLI with proper templates and formatting
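A typical invocation could look like the following; the base branch, title, and template path are placeholders:

```sh
# Open a PR with GitHub CLI, filling the body from a template file (paths are placeholders)
gh pr create --base main \
  --title "feat(auth): add OAuth2 login flow" \
  --body-file .github/pull_request_template.md
```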
> git:create-worktree
Create and setup git worktrees for parallel development with automatic dependency installation
> git:load-issues
Load all open issues from GitHub and save them as markdown files
> git:merge-worktree
Merge changes from worktrees into current branch with selective file checkout, cherry-picking, interactive patch selection, or manual merge
> git:notes
Use when adding metadata to commits without changing history, tracking review status, test results, code quality annotations, or supplementing commit messages post-hoc - provides git notes commands and patterns for attaching non-invasive metadata to Git objects.
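For instance, review metadata can be attached to an existing commit and read back later; the note text is illustrative:

```sh
# Attach metadata to the latest commit without rewriting history
git notes add -m "review: approved, tests green" HEAD
# Show commits together with their notes
git log --show-notes -1
```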
> git:worktrees
Use when working on multiple branches simultaneously, context switching without stashing, reviewing PRs while developing, testing in isolation, or comparing implementations across branches - provides git worktree commands and workflow patterns for parallel development with multiple working directories.
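A minimal parallel-development flow, with the directory and branch names as placeholders:

```sh
# Check out another branch in a separate directory without stashing current work
git worktree add ../repo-pr-review feature/login-flow
# Inspect and clean up when finished
git worktree list
git worktree remove ../repo-pr-review
```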
> kaizen:analyse
Auto-selects the best Kaizen method (Gemba Walk, Value Stream, or Muda) for the target
> kaizen:analyse-problem
Comprehensive A3 one-page problem analysis with root cause and action plan
> kaizen:cause-and-effect
Systematic Fishbone analysis exploring problem causes across six categories
> kaizen:kaizen
Use for code implementation and refactoring, architecting or designing systems, process and workflow improvements, and error handling and validation. Provides techniques to avoid over-engineering and apply iterative improvements.
> kaizen:plan-do-check-act
Iterative PDCA cycle for systematic experimentation and continuous improvement
> kaizen:root-cause-tracing
Use when errors occur deep in execution and you need to trace back to find the original trigger - systematically traces bugs backward through call stack, adding instrumentation when needed, to identify source of invalid data or incorrect behavior
> kaizen:why
Iterative Five Whys root cause analysis drilling from symptoms to fundamentals
> mcp:build-mcp
Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
> mcp:setup-arxiv-mcp
Guide for setting up an arXiv paper search MCP server using Docker MCP
> mcp:setup-codemap-cli
Guide for setting up Codemap CLI for intelligent codebase visualization and navigation
> mcp:setup-context7-mcp
Guide for setting up the Context7 MCP server to load documentation for specific technologies.
> mcp:setup-serena-mcp
Guide for setting up the Serena MCP server for semantic code retrieval and editing capabilities
> reflexion:critique
Comprehensive multi-perspective review using specialized judges with debate and consensus building
> reflexion:memorize
Curates insights from reflections and critiques into CLAUDE.md using Agentic Context Engineering
> reflexion:reflect
Reflect on the previous response and output, based on the Self-Refinement framework for iterative improvement with complexity triage and verification
> sadd:do-and-judge
Execute a task with sub-agent implementation and LLM-as-a-judge verification with automatic retry loop
> sadd:do-competitively
Execute tasks through competitive multi-agent generation, meta-judge evaluation specification, multi-judge evaluation, and evidence-based synthesis
> sadd:do-in-parallel
Launch multiple sub-agents in parallel to execute tasks across files or targets with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification
> sadd:do-in-steps
Execute complex tasks through sequential sub-agent orchestration with intelligent model selection, meta-judge → LLM-as-a-judge verification
> sadd:judge
Launch a meta-judge then a judge sub-agent to evaluate results produced in the current conversation
> sadd:judge-with-debate
Evaluate solutions through multi-round debate between independent judges until consensus
> sadd:launch-sub-agent
Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, Zero-shot CoT reasoning, and mandatory self-critique verification
> sadd:multi-agent-patterns
Design multi-agent architectures for complex tasks. Use when single-agent context limits are exceeded, when tasks decompose naturally into subtasks, or when specializing agents improves quality.
> sadd:subagent-driven-development
Use when executing implementation plans with independent tasks in the current session or facing 3+ independent issues that can be investigated without shared state or dependencies - dispatches fresh subagent for each task with code review between tasks, enabling fast iteration with quality gates
> sadd:tree-of-thoughts
Execute tasks through systematic exploration, pruning, and expansion using Tree of Thoughts methodology with meta-judge evaluation specifications and multi-agent evaluation
> sdd:add-task
Creates a draft task file in .specs/tasks/draft/ with the original user intent
> sdd:brainstorm
Use when creating or developing ideas, before writing code or implementation plans - refines rough ideas into fully-formed designs through collaborative questioning, alternative exploration, and incremental validation. Don't use during clearly 'mechanical' processes
> sdd:create-ideas
Generate ideas in one shot using creative sampling
> sdd:implement
Implement a task with automated LLM-as-Judge verification for critical steps
> sdd:plan
Refine, parallelize, and verify a draft task specification into a fully planned implementation-ready task
> tdd:fix-tests
Systematically fix all failing tests after business logic changes or refactoring
> tdd:test-driven-development
Use when implementing any feature or bugfix, before writing implementation code - write the test first, watch it fail, write minimal code to pass; ensures tests actually verify behavior by requiring failure first
> tdd:write-tests
Systematically add test coverage for all local code changes using specialized review and development agents. Adds tests for uncommitted changes (including untracked files); if everything is committed, covers the latest commit.
> tech-stack:add-typescript-best-practices
Setup TypeScript best practices and code style rules in CLAUDE.md