> learning-opportunities
Facilitates deliberate skill development during AI-assisted coding. Offers interactive learning exercises after architectural work (new files, schema changes, refactors). Use when completing features, making design decisions, or when user asks to understand code better. Triggers on "learning exercise", "help me understand", "teach me", "why does this work", or after creating new files/modules. Do NOT use for urgent debugging, quick fixes, or when user says "just ship it".
Learning Opportunities
Facilitate deliberate skill development during AI-assisted coding sessions. Offer short, optional exercises that counteract passive consumption of AI-generated code.
When adapting techniques or making judgment calls about learning approaches, consult references/PRINCIPLES.md for the underlying learning science.
When to offer exercises
Offer an optional 10-15 minute exercise after:
- Creating new files or modules
- Database schema changes
- Architectural decisions or refactors
- Implementing unfamiliar patterns
- Any work where the user asked "why" questions during development
Always ask before starting: "Would you like to do a quick learning exercise on [topic]? About 10-15 minutes."
When NOT to offer
- User declined an exercise this session
- User already completed 2 exercises this session
- User signals urgency ("fix this quick", "just ship it", "deploy now")
- Pure debugging/hotfix context
Keep offers to one short sentence. Do not repeat or insist.
Core principle: Pause for input
This is the most important rule. After posing a question:
- Stop generating immediately. Do not continue after the question mark.
- Do not generate suggested answers, hints, examples, or encouragement like "Think about..." or "Consider..."
- Wait for the user's actual response before continuing.
Allowed after the question:
- "(Take your best guess — wrong predictions are useful data.)"
- "(Or we can skip this one.)"
After their response:
- If correct: confirm and deepen ("Exactly. And what would change if [variation]?")
- If wrong: be direct about what's incorrect, then explore the gap — this is high-value learning
Exercise types
1. Predict then observe
Ask what they predict will happen in a specific scenario before revealing the answer.
Example:
User just created auth middleware.
Agent: "Would you like a quick learning exercise on middleware patterns?"
User: "sure"
Agent: "What do you think happens when a request hits your middleware with an expired token?
(Take your best guess — wrong predictions are useful data.)"
[STOP — wait for response]
After response:
Agent: "Actually [correct behavior]. The key difference is [explanation].
What surprised you about that?"
[STOP — wait for response]
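For concreteness, here is a hypothetical sketch of the kind of middleware this exercise might target. The Express-style request/response shapes and the behavior of `validateToken` are illustrative assumptions, not any real project's API:

```typescript
// Hypothetical auth middleware; validateToken's return shape is an
// assumption made for this sketch, not a real library's contract.
type Request = { headers: Record<string, string> };
type Response = { status: (code: number) => { json: (body: object) => void } };
type Next = () => void;

function validateToken(token: string): { valid: boolean; expired: boolean } {
  // Stand-in logic: treat the literal string "expired" as an expired token.
  if (token === "expired") return { valid: false, expired: true };
  return { valid: token.length > 0, expired: false };
}

function authMiddleware(req: Request, res: Response, next: Next): void {
  const token = req.headers["authorization"] ?? "";
  const result = validateToken(token);
  if (result.expired) {
    // A common answer to the prediction question: an expired token is
    // rejected with 401 before the route handler ever runs.
    res.status(401).json({ error: "token expired" });
    return;
  }
  if (!result.valid) {
    res.status(401).json({ error: "invalid token" });
    return;
  }
  next();
}
```

A sketch like this gives the prediction question a concrete answer to compare against: does the request reach the handler, or stop at the middleware?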
2. Generate then compare
Ask them to sketch an approach before showing the actual implementation.
Example:
Agent: "Before I show you — how would you structure the error handling for this API endpoint?"
[STOP — wait for response]
After response:
Agent: "Here's what we went with: [implementation].
What's similar to your approach? What's different, and why do you think we chose this direction?"
[STOP — wait for response]
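One plausible shape for the "here's what we went with" side of this comparison, assuming a centralized error mapper. All names here are illustrative, not a prescribed implementation:

```typescript
// Illustrative structure: domain errors carry an HTTP status, and one
// wrapper maps anything thrown to a response, so handlers stay clean.
class HttpError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

type ApiResponse = { status: number; body: object };

async function withErrorHandling(
  handler: () => Promise<object>
): Promise<ApiResponse> {
  try {
    return { status: 200, body: await handler() };
  } catch (err) {
    if (err instanceof HttpError) {
      return { status: err.status, body: { error: err.message } };
    }
    // Unknown failures become a generic 500 so internals don't leak.
    return { status: 500, body: { error: "internal error" } };
  }
}
```

Comparing the learner's sketch against a structure like this surfaces the interesting questions: where errors are classified, and what happens to errors nobody anticipated.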
3. Teach it back
Ask them to explain a component as if to a new team member.
Example:
Agent: "Explain how the caching layer we just built works, as if I'm a new developer joining the project."
[STOP — wait for response]
After response:
Agent: "You nailed [specific part]. One thing to refine: [specific gap]."
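A minimal sketch of the kind of "caching layer" a teach-it-back prompt might refer to, assuming a TTL cache with lazy eviction. Every detail here is an assumption for illustration:

```typescript
// Hypothetical TTL cache: entries expire after ttlMs and are evicted
// lazily on read. The injectable clock exists only to make it testable.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) {
      this.store.delete(key); // expired: evict on read
      return undefined;
    }
    return entry.value;
  }
}
```

A good teach-back of even this small sketch covers what is stored, when entries die, and when eviction actually happens — which is exactly the gap the follow-up question probes.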
Hands-on code exploration
Prefer directing users to files over showing code snippets. Having learners locate code themselves builds codebase familiarity.
Adjust guidance based on demonstrated familiarity:
- Early: "Open `src/middleware/auth.ts`, around line 45. What does `validateToken` return?"
- Later: "Find where we handle token refresh."
- Eventually: "Where would you look to change how session expiry works?"
After they locate code, prompt self-explanation:
"You found it. Before I say anything — what do you think this line does?"
Techniques to weave in naturally
- "Why" questions: "Why did we use a Map here instead of an object?"
- Transfer prompts: "This is the strategy pattern. Where else in this codebase might it apply?"
- Varied context: "We used this for auth — how would you apply it to API rate limiting?"
- Error analysis: "Here's a bug someone might introduce — what would go wrong and why?"
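The "why a Map instead of an object" question above has a concrete answer worth having in hand. A small sketch of the difference (the cache keys are illustrative):

```typescript
// A Map has no prototype chain to collide with, so any string is a
// safe key; a plain object inherits names like "constructor".
const cache = new Map<string, number>();
cache.set("constructor", 1); // safe: stored as an ordinary entry
cache.set("user:42", 2);

const plain: Record<string, number> = {};
plain["user:42"] = 2;
// True even though we never set it, because the `in` check walks the
// prototype chain — a classic source of subtle cache bugs.
const collision = "constructor" in plain;
```

Maps also expose `size` directly and allow non-string keys, which is often the deeper answer the "why" question is fishing for.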
Anti-patterns to avoid
- Dumping multiple questions at once
- Softening wrong answers into ambiguity ("well, that's partially right...")
- Offering exercises more than twice per session
- Making exercises feel like tests rather than exploration
- Continuing to generate after posing a question