> discover-codebase-enhancements
Use when the user asks for a deep codebase analysis to identify and rank improvements, optimizations, architectural enhancements, or potential bugs aligned to developer, end-user, and agent jobs-to-be-done.
curl "https://skillshub.wtf/kasperjunge/agent-resources/discover-codebase-enhancements?format=md"

# Discover Codebase Enhancements
## Overview
Spend significant time crawling and analyzing the codebase to surface high-impact improvements. Center findings on the jobs-to-be-done of the codebase, developers, end users, and AI agents working in the repo.
## Inputs (ask if missing, max 5)
- Target area or scope (whole repo or specific modules)
- Primary user jobs-to-be-done and business goals
- Known pain points or incidents
- Constraints (time, risk tolerance, release window)
- Evidence sources allowed (tests, metrics, logs)
## Jobs-to-Be-Done Lens
- Codebase: reliability, simplicity, maintainability
- Developers: speed, clarity, safe changes
- End users: correctness, performance, usability
- AI agents: discoverability, consistency, explicit patterns
## Workflow
- Deep crawl
  - Read architecture docs, READMEs, key modules, and tests.
  - Search for hotspots (TODO/FIXME, large files, duplication, complex flows).
- Evidence gathering
  - Note error-prone areas, missing tests, performance risks, and coupling.
  - Capture references to files/functions and concrete symptoms.
- Opportunity synthesis
  - Group findings by theme: correctness, performance, DX, architecture, tests, tooling.
- Impact scoring
  - Rate impact, effort, risk, and evidence strength.
- Ranked recommendations
  - Present top enhancements with rationale and expected outcomes.
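The hotspot-search step above can be sketched as a small script. This is a minimal sketch, not a prescribed implementation: the file extensions, skipped directories, and the 500-line "large file" threshold are assumptions to tune per codebase.

```python
# Sketch: crawl a repo for hotspots (TODO/FIXME markers, large files).
# The extension list, skip list, and size threshold are illustrative assumptions.
import os
import re

MARKER = re.compile(r"\b(TODO|FIXME)\b")
LARGE_FILE_LINES = 500  # assumed threshold; adjust per codebase
SOURCE_EXTS = (".py", ".js", ".ts", ".go", ".java")
SKIP_DIRS = {".git", "node_modules", "venv", "__pycache__"}

def scan_hotspots(root="."):
    """Return (path, line_number, note) tuples for each hotspot found."""
    hotspots = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune vendored/generated directories in place so os.walk skips them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            if not name.endswith(SOURCE_EXTS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    lines = f.readlines()
            except OSError:
                continue  # unreadable file; skip rather than abort the crawl
            for lineno, line in enumerate(lines, 1):
                if MARKER.search(line):
                    hotspots.append((path, lineno, line.strip()))
            if len(lines) > LARGE_FILE_LINES:
                hotspots.append((path, len(lines), f"large file ({len(lines)} lines)"))
    return hotspots
```

Each tuple gives a concrete file/line reference, which feeds directly into the evidence-gathering step.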
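The impact-scoring step can likewise be made explicit. A minimal sketch, assuming 1–3 ordinal scores and an arbitrary weighting (impact counted double); the weights are illustrative, not a fixed rubric.

```python
# Sketch: rank findings by a simple weighted score.
# Higher impact and stronger evidence raise the score; effort and risk lower it.
# The weights are assumptions; calibrate them against real outcomes.
def score(finding):
    return (2 * finding["impact"] + finding["evidence"]
            - finding["effort"] - finding["risk"])

def rank(findings):
    """Return findings ordered from highest to lowest score."""
    return sorted(findings, key=score, reverse=True)
```

Scoring every finding with the same formula keeps the ranking comparable across themes, even when the weights themselves are debatable.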
## Output Format
```
## Codebase Enhancement Discovery
### Context Summary
[1-3 sentences]
### JTBD Summary
- Codebase: ...
- Developers: ...
- End users: ...
- AI agents: ...
### Evidence Sources
- Files/modules reviewed: ...
- Patterns searched: ...
- Tests or metrics considered: ...
### Ranked Enhancements
1) [Enhancement]
- Category: ...
- Impact: high | Effort: medium | Risk: low | Evidence: moderate
- Rationale: ...
- Affected areas: ...
### Quick Wins
- ...
### Open Questions
- ...
```
## Quick Reference
- Spend more time exploring than feels necessary.
- Prefer evidence-backed findings over speculation.
- Center recommendations on user and developer outcomes.
## Common Mistakes
- Skimming without enough code context
- Listing fixes without evidence or impact scoring
- Ignoring AI agent or developer workflows
- Recommending changes that fight existing architecture