NeoLabHQ

> NeoLabHQ/context-engineering-kit

Hand-crafted Claude Code Skills focused on improving the quality of agent results. Compatible with OpenCode, Cursor, Antigravity, Gemini CLI, and others.

📦 66 skills · ❤️ 0 likes · 941 stars · 📥 76 downloads · github →
$ curl "https://skillshub.wtf/NeoLabHQ/context-engineering-kit/review-local-changes?format=md"

> skills (66)

> code-review:review-local-changes

Comprehensive review of local uncommitted changes using specialized agents with code improvement suggestions

> code-review:review-pr

Comprehensive pull request review using specialized agents

#git
> customaize-agent:agent-evaluation

Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context engineering choices, or measuring improvement quality.

> customaize-agent:apply-anthropic-skill-best-practices

Comprehensive guide for skill development based on Anthropic's official best practices - use for complex skills requiring detailed structure

#ai
> customaize-agent:context-engineering

Understand the components, mechanics, and constraints of context in agent systems. Use when writing, editing, or optimizing command, skill, or sub-agent prompts.

> customaize-agent:create-command

Interactive assistant for creating new Claude commands with proper structure, patterns, and MCP tool integration

> customaize-agent:create-hook

Create and configure git hooks with intelligent project analysis, suggestions, and automated testing

> customaize-agent:create-rule

Use when you find a gap or repetitive issue produced by you or an implementation agent. Essentially, use it every time you say "You're absolutely right, I should have done it differently" -> create a rule for the issue so it does not appear again.

> customaize-agent:create-skill

Guide for creating effective skills. Use when users want to create a new skill (or update an existing one) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations - whether creating new skills, editing existing ones, or verifying skills work before deployment. Applies TDD to process documentation by testing with subagents before writing, iterating until bulletproof against rationalization.

> customaize-agent:prompt-engineering

Use this skill when writing commands, hooks, or skills for an agent, or prompts for sub-agents or any other LLM interaction, including optimizing prompts, improving LLM outputs, or designing production prompt templates.

> customaize-agent:test-prompt

Use when creating or editing any prompt (commands, hooks, skills, subagent instructions) to verify it produces desired behavior - applies RED-GREEN-REFACTOR cycle to prompt engineering using subagents for isolated testing

> customaize-agent:test-skill

Use when creating or editing skills, before deployment, to verify they work under pressure and resist rationalization - applies RED-GREEN-REFACTOR cycle to process documentation by running baseline without skill, writing to address failures, iterating to close loopholes

> customaize-agent:thought-based-reasoning

Use when tackling complex reasoning tasks requiring step-by-step logic, multi-step arithmetic, commonsense reasoning, symbolic manipulation, or problems where simple prompting fails - provides comprehensive guide to Chain-of-Thought and related prompting techniques (Zero-shot CoT, Self-Consistency, Tree of Thoughts, Least-to-Most, ReAct, PAL, Reflexion) with templates, decision matrices, and research-backed patterns

> ddd:setup-code-formating

Sets up code formatting rules and style guidelines in CLAUDE.md

> docs:update-docs

Update and maintain project documentation for local code changes using multi-agent workflow with tech-writer agents. Covers docs/, READMEs, JSDoc, and API documentation.

> docs:write-concisely

Apply writing rules to any documentation that humans will read. Makes your writing clearer, stronger, and more professional.

> fpf:actualize

Reconcile the project's FPF state with recent repository changes

> fpf:decay

Manage evidence freshness by identifying stale decisions and providing governance actions

> fpf:propose-hypotheses

Execute complete FPF cycle from hypothesis generation to decision

> fpf:query

Search the FPF knowledge base and display hypothesis details with assurance information

> fpf:reset

Reset the FPF reasoning cycle to start fresh

> fpf:status

Display the current state of the FPF knowledge base

> git:analyze-issue

Analyze a GitHub issue and create a detailed technical specification

> git:attach-review-to-pr

Add line-specific review comments to pull requests using GitHub CLI API

> git:commit

Create well-formatted commits with conventional commit messages and emoji
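The commit shape this skill targets can be sketched as below; the `type(scope): subject` form is the Conventional Commits shape, and the emoji placement is an assumption about the skill's output style, not confirmed by the source:

```shell
# Illustrative sketch of a conventional commit, run in a throwaway repo.
set -e
cd "$(mktemp -d)"
git init -q
echo "hello" > README.md
git add README.md
# type(scope): subject - here a docs change scoped to the readme
git -c user.name=demo -c user.email=demo@example.com \
  commit -q -m "docs(readme): ✨ add project README"
# Print the commit subject line
git log -1 --pretty=%s
```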

> git:compare-worktrees

Compare files and directories between git worktrees or worktree and current branch

> git:create-pr

Create pull requests using GitHub CLI with proper templates and formatting

> git:create-worktree

Create and setup git worktrees for parallel development with automatic dependency installation

> git:load-issues

Load all open issues from GitHub and save them as markdown files

> git:merge-worktree

Merge changes from worktrees into current branch with selective file checkout, cherry-picking, interactive patch selection, or manual merge

> git:notes

Use when adding metadata to commits without changing history, tracking review status, test results, code quality annotations, or supplementing commit messages post-hoc - provides git notes commands and patterns for attaching non-invasive metadata to Git objects.
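A minimal sketch of the underlying `git notes` workflow; the repo setup is throwaway scaffolding and the note text is a hypothetical review annotation:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "feat: initial commit"
# Attach metadata to HEAD without rewriting history
git notes add -m "review-status: approved" HEAD
# Read the note back; git log also displays notes by default
git notes show HEAD
```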

> git:worktrees

Use when working on multiple branches simultaneously, context switching without stashing, reviewing PRs while developing, testing in isolation, or comparing implementations across branches - provides git worktree commands and workflow patterns for parallel development with multiple working directories.
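The parallel-checkout pattern described above boils down to a couple of commands; this is a sketch in a throwaway repo, with the `feature` branch name chosen for illustration:

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "chore: init"
# Check out a second working directory on a new branch, so both
# branches can be edited side by side without stashing
git worktree add -q "$repo-feature" -b feature
# One line per working directory (main checkout plus the new one)
git worktree list
```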

> kaizen:analyse

Auto-selects best Kaizen method (Gemba Walk, Value Stream, or Muda) for target

> kaizen:analyse-problem

Comprehensive A3 one-page problem analysis with root cause and action plan

> kaizen:cause-and-effect

Systematic Fishbone analysis exploring problem causes across six categories

> kaizen:kaizen

Use for code implementation and refactoring, architecting or designing systems, process and workflow improvements, and error handling and validation. Provides techniques to avoid over-engineering and apply iterative improvements.

> kaizen:plan-do-check-act

Iterative PDCA cycle for systematic experimentation and continuous improvement

> kaizen:root-cause-tracing

Use when errors occur deep in execution and you need to trace back to find the original trigger - systematically traces bugs backward through call stack, adding instrumentation when needed, to identify source of invalid data or incorrect behavior

> kaizen:why

Iterative Five Whys root cause analysis drilling from symptoms to fundamentals

> mcp:build-mcp

Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).

> mcp:setup-arxiv-mcp

Guide for setting up an arXiv paper search MCP server using Docker MCP

> mcp:setup-codemap-cli

Guide for setting up Codemap CLI for intelligent codebase visualization and navigation

> mcp:setup-context7-mcp

Guide for setting up the Context7 MCP server to load documentation for specific technologies.

> mcp:setup-serena-mcp

Guide for setting up the Serena MCP server for semantic code retrieval and editing capabilities

> reflexion:critique

Comprehensive multi-perspective review using specialized judges with debate and consensus building

> reflexion:memorize

Curates insights from reflections and critiques into CLAUDE.md using Agentic Context Engineering

> reflexion:reflect

Reflect on the previous response and output, based on a self-refinement framework for iterative improvement with complexity triage and verification

> sadd:do-and-judge

Execute a task with sub-agent implementation and LLM-as-a-judge verification with automatic retry loop

> sadd:do-competitively

Execute tasks through competitive multi-agent generation, meta-judge evaluation specification, multi-judge evaluation, and evidence-based synthesis

> sadd:do-in-parallel

Launch multiple sub-agents in parallel to execute tasks across files or targets with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification

> sadd:do-in-steps

Execute complex tasks through sequential sub-agent orchestration with intelligent model selection, meta-judge → LLM-as-a-judge verification

> sadd:judge

Launch a meta-judge then a judge sub-agent to evaluate results produced in the current conversation

> sadd:judge-with-debate

Evaluate solutions through multi-round debate between independent judges until consensus

> sadd:launch-sub-agent

Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, Zero-shot CoT reasoning, and mandatory self-critique verification

> sadd:multi-agent-patterns

Design multi-agent architectures for complex tasks. Use when single-agent context limits are exceeded, when tasks decompose naturally into subtasks, or when specializing agents improves quality.

> sadd:subagent-driven-development

Use when executing implementation plans with independent tasks in the current session or facing 3+ independent issues that can be investigated without shared state or dependencies - dispatches fresh subagent for each task with code review between tasks, enabling fast iteration with quality gates

> sadd:tree-of-thoughts

Execute tasks through systematic exploration, pruning, and expansion using Tree of Thoughts methodology with meta-judge evaluation specifications and multi-agent evaluation

> sdd:add-task

Creates a draft task file in .specs/tasks/draft/ with the original user intent

> sdd:brainstorm

Use when creating or designing, before writing code or implementation plans - refines rough ideas into fully-formed designs through collaborative questioning, alternative exploration, and incremental validation. Don't use during clear 'mechanical' processes.

> sdd:create-ideas

Generate ideas in one shot using creative sampling

> sdd:implement

Implement a task with automated LLM-as-Judge verification for critical steps

> sdd:plan

Refine, parallelize, and verify a draft task specification into a fully planned implementation-ready task

> tdd:fix-tests

Systematically fix all failing tests after business logic changes or refactoring

> tdd:test-driven-development

Use when implementing any feature or bugfix, before writing implementation code - write the test first, watch it fail, write minimal code to pass; ensures tests actually verify behavior by requiring failure first

> tdd:write-tests

Systematically add test coverage for all local code changes using specialized review and development agents. Adds tests for uncommitted changes (including untracked files); if everything is committed, covers the latest commit.

> tech-stack:add-typescript-best-practices

Setup TypeScript best practices and code style rules in CLAUDE.md
