> python-code-review
Comprehensive Python code review focused on bugs, correctness, security, maintainability, and actionable fixes. Use when a user asks for a review of Python files, wants severity-rated findings, wants before/after fix suggestions, or wants verification that implementation matches an active plan document (if one exists). Start by applying read-repo-rules to AGENTS.md, docs/REPO_STYLE.md, docs/PYTHON_STYLE.md, and docs/CHANGELOG.md so review guidance follows repository rules.
curl "https://skillshub.wtf/vosslab/vosslab-skills/python-code-review?format=md"

Python Code Review
Workflow
- Verify `AGENTS.md`, `docs/REPO_STYLE.md`, `docs/PYTHON_STYLE.md`, and `docs/CHANGELOG.md` exist.
- Read those files and summarize repo rules in four one-sentence lines with prefixes: `AGENTS:`, `REPO_STYLE:`, `PYTHON_STYLE:`, `CHANGELOG:`.
- Inspect changed files first (`git diff`, `git status --short`), then inspect related call sites.
- If the repo has `docs/active_plans/`, identify the active plan document(s) that govern the change. Otherwise, skip plan-conformance steps.
- If an active plan exists, map code and tests to plan requirements, acceptance criteria, and stated constraints. Otherwise, skip this step.
- Prioritize findings by severity: plan mismatch/regressions (if applicable), correctness/safety, then maintainability.
- Provide concrete, minimal fixes with before/after examples when a fix is straightforward.
- Flag uncertainty explicitly and ask targeted review questions for unclear logic or contracts.
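The severity-first ordering in the steps above could be sketched as follows. The `Finding` tuple and its fields are hypothetical, invented for illustration; only the `P1`-`P4` labels come from this skill's output contract.

```python
# Hypothetical sketch of severity-first ordering of review findings.
# The Finding structure below is illustrative, not defined by the skill.
from typing import List, NamedTuple

class Finding(NamedTuple):
    severity: str  # "P1" (critical) through "P4" (low)
    path: str
    note: str

SEVERITY_RANK = {"P1": 0, "P2": 1, "P3": 2, "P4": 3}

def order_findings(findings: List[Finding]) -> List[Finding]:
    """Report plan mismatches and correctness issues before style nits."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f.severity])

findings = [
    Finding("P3", "utils.py", "duplicated helper"),
    Finding("P1", "auth.py", "token never expires"),
]
ordered = order_findings(findings)
print([f.severity for f in ordered])  # P1 first
```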
Review Output Contract
- Report findings first, ordered by severity.
- For each finding include:
  - Severity (`P1` critical, `P2` high, `P3` medium, `P4` low)
  - File path and line reference
  - Risk and likely impact
  - Recommended change
- After findings, include:
  - Open questions
  - Test gaps and residual risk
  - Brief summary
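One possible way to render a single finding in this shape is sketched below; the function name, field layout, and sample values are assumptions for illustration, not part of the contract itself.

```python
# Minimal sketch of rendering one finding per the output contract.
# Field names and layout here are illustrative assumptions.
def format_finding(severity: str, path: str, line: int,
                   risk: str, fix: str) -> str:
    return (
        f"- Severity: {severity}\n"
        f"  File: {path}:{line}\n"
        f"  Risk: {risk}\n"
        f"  Recommended change: {fix}"
    )

print(format_finding(
    "P2", "api/views.py", 88,
    "unvalidated query parameter reaches the SQL layer",
    "bind the parameter instead of interpolating it",
))
```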
What To Check
- Plan conformance: implementation and tests match active plan scope, ordering, and acceptance criteria.
- Plan drift: behavior changed without corresponding plan/changelog updates, or plan claims complete while code is partial.
- Correctness: edge cases, off-by-one logic, stale assumptions, API misuse, compatibility breaks.
- Security: unsafe eval/exec, command injection, path traversal, deserialization hazards, weak validation.
- Maintainability: dead code, hidden coupling, unclear naming, duplicated logic, brittle tests.
- Performance only when materially relevant.
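Two of the security checks above can be illustrated concretely. The functions are invented for the sketch and are not part of any reviewed codebase; the patterns (argument lists instead of `shell=True`, and resolving paths before opening them) are the standard mitigations.

```python
# Illustrative examples of two checks named above: command injection
# and path traversal. Function names are made up for this sketch.
import os
import subprocess

# Command injection: flag shell=True combined with interpolated input.
def run_grep_unsafe(pattern: str) -> None:
    subprocess.run(f"grep -r {pattern} .", shell=True)  # P1: injectable

def run_grep_safe(pattern: str) -> None:
    # Argument list, no shell: the pattern cannot break out of its slot.
    subprocess.run(["grep", "-r", pattern, "."], check=False)

# Path traversal: resolve and confine paths before opening them.
def read_upload(base_dir: str, name: str) -> str:
    target = os.path.realpath(os.path.join(base_dir, name))
    if not target.startswith(os.path.realpath(base_dir) + os.sep):
        raise ValueError("path escapes base directory")
    with open(target) as handle:
        return handle.read()
```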
Fix Guidance
- Prefer small, local edits that preserve behavior unless a bug requires behavior change.
- Keep fixes aligned with repo Python style and test conventions.
- Add or adjust tests for each behavior-changing fix.
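A before/after fix in the spirit of this guidance might look like the sketch below: a classic mutable-default-argument bug, a minimal local fix, and a test added alongside the behavior change. The function is hypothetical.

```python
# Hypothetical before/after example of a small behavior-changing fix.

# Before: the shared mutable default accumulates entries across calls.
def add_tag_buggy(tag, tags=[]):
    tags.append(tag)
    return tags

# After: minimal, local fix that gives each call a fresh list.
def add_tag(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

# Test added alongside the behavior-changing fix.
def test_add_tag_fresh_list_per_call():
    assert add_tag("a") == ["a"]
    assert add_tag("b") == ["b"]  # buggy version returns ["a", "b"]
```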
> related_skills --same-repo
> webwork-writer
Create, edit, and lint WeBWorK PG/PGML questions following docs/webwork guidance, HTML whitelist constraints, and renderer-based lint checks. Use for tasks like authoring new PGML problems, adjusting randomization or grading, fixing PGML rendering issues, and running renderer API linting.
> unit-test-starter
Generate thorough Python 3 pytest unit tests across a repo by scanning every *.py file and each function, writing one test module per source file while skipping IO/network behavior and documenting gaps.
> skill-writing-guide
Guide for authoring Agent Skills (SKILL.md). Covers the open standard format, required frontmatter, directory layout, progressive disclosure, description writing, and best practices. Use when creating a new skill, improving an existing skill, or learning how skills work.
> readme-fix
Standardize README.md to match repo conventions. Brief purpose, quick start, and links to docs/. Keep content verifiable, concise, and ASCII. Use when README.md drifted or is missing key pointers.