# Contributing Guide
Thank you for your interest in contributing to the V-Model Extension Pack! Whether you're fixing a bug, adding a feature, improving documentation, or writing tests — every contribution is valued.
**Ways to contribute**
- 🐛 Bug reports — found something broken? Open an issue
- 💡 Feature requests — have an idea? Suggest it
- 📝 Documentation — typos, clarifications, examples
- 🧪 Tests — expand coverage across BATS, Pester, or evals
- 🔧 Code — new commands, script improvements, validators
## Development Setup

### Prerequisites
- Spec Kit v0.1.0+ (Python ≥ 3.11)
- Git
- Bash (Linux/macOS) or PowerShell (Windows)
### Getting Started

1. Fork and clone the repository.
2. Set up a test project.
3. Install the extension in development mode.
4. Verify the installation.
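The steps above might look like the following; the repository URL pattern and sandbox path are placeholders, and the extension install/verify commands depend on your Spec Kit version (check the README for the exact invocations):

```shell
# 1. Fork on GitHub, then clone your fork (replace <your-username>)
git clone https://github.com/<your-username>/spec-kit-v-model.git
cd spec-kit-v-model

# 2. Create a scratch Spec Kit project to develop against
specify init ../vmodel-sandbox

# 3-4. Install the extension into the sandbox and verify it;
#      see the README for the exact commands for your Spec Kit version
```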
## Project Structure

```text
spec-kit-v-model/
├── commands/             # Slash command definitions (14 AI prompts)
├── templates/            # Output file templates for generated artifacts
├── scripts/
│   ├── bash/             # Helper scripts (Linux/macOS) — 13 scripts
│   ├── powershell/       # Helper scripts (Windows) — 13 scripts
│   └── python/           # Python helper scripts
├── tests/
│   ├── bats/             # BATS-core Bash unit tests (364 tests)
│   ├── pester/           # Pester PowerShell unit tests (347 tests)
│   ├── fixtures/         # Shared test data & golden examples
│   └── evals/            # DeepEval prompt evaluations (89 structural + 42 LLM)
├── docs/                 # Additional documentation
├── .github/
│   ├── agents/           # Agent definitions for all 14 commands
│   └── workflows/        # CI and evaluation pipelines
├── extension.yml         # Extension manifest
├── config-template.yml   # Configuration template
└── pyproject.toml        # Python project config (pytest, deepeval)
```
## How to Add a New Command

This project uses its own V-Model extension for development. When adding a new feature, follow this spec-driven workflow:

1. **Specify** — `/speckit.specify <description>` creates a feature branch and `spec.md` with user stories and requirements.
2. **Requirements** — `/speckit.v-model.requirements` atomizes the spec into traceable `REQ-NNN` identifiers.
3. **Acceptance** — `/speckit.v-model.acceptance` generates paired test cases (ATP) and BDD scenarios (SCN) with 100% coverage validation.
4. **Design** — Walk down the V-Model levels as needed:
    - `/speckit.v-model.system-design` → system-level components (`SYS-NNN`)
    - `/speckit.v-model.architecture-design` → architecture elements (`ARCH-NNN`)
    - `/speckit.v-model.module-design` → module-level designs (`MOD-NNN`)
5. **Test Plans** — Generate paired test plans at each level:
    - `/speckit.v-model.system-test` → system test procedures (STP)
    - `/speckit.v-model.integration-test` → integration test procedures (ITP)
    - `/speckit.v-model.unit-test` → unit test procedures (UTP)
6. **Trace** — `/speckit.v-model.trace` builds the traceability matrix at each level (Matrix A + B + C + D).
7. **Implement** — Use spec-kit core (`/speckit.plan`, `/speckit.tasks`, `/speckit.implement`) to execute the design.
8. **Verify** — Run validation scripts and tests to ensure coverage.

All artifacts live in `specs/{feature}/`. See the README for a detailed walkthrough.
## Testing Requirements
The project has a comprehensive test suite across four layers. All tests must pass before a PR can be merged.
### Test Architecture
| Layer | Framework | Tests | What It Validates |
|---|---|---|---|
| BATS | bats-core | 364 | Bash script logic: setup, coverage validation, impact analysis, matrix building, diff detection, peer review check, test result ingestion, audit report building |
| Pester | Pester 5 | 347 | PowerShell script parity with Bash — identical behavior across platforms |
| Structural evals | pytest + DeepEval | 89 | ID format/hierarchy, template conformance, BDD scenario completeness, impact analysis graph properties |
| LLM evals | pytest + DeepEval GEval | 42 | Requirements quality (IEEE 29148), BDD quality, traceability completeness |
### Running Tests
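Typical invocations, inferred from the test layout and pytest markers described in this guide (paths and flags may differ in your checkout):

```shell
# Bash unit tests (Linux/macOS)
bats tests/bats/

# PowerShell unit tests (from PowerShell 7+)
pwsh -Command "Invoke-Pester -Path tests/pester/"

# Deterministic structural evals
pytest -m structural tests/evals/

# LLM evals (typically requires an LLM API key configured for DeepEval)
pytest -m eval tests/evals/
```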
### Adding Tests

- **New BATS test** — Add to `tests/bats/` following existing patterns. Use `test_helper.bash` for fixtures.
- **New Pester test** — Mirror the BATS test in `tests/pester/` for PowerShell parity.
- **New eval test** — Add to `tests/evals/test_*_eval.py`. Mark with `@pytest.mark.structural` (deterministic) or `@pytest.mark.eval` (LLM).
- **New fixture** — Add a directory under `tests/fixtures/` with V-Model fixture files.
### CI Pipelines

- `ci.yml` — Runs on every push/PR: BATS tests + structural validators (Ubuntu), Pester tests (Windows)
- `evals.yml` — Structural evals run weekly; LLM evals run on manual dispatch
## Coding Conventions

### Command Files (`commands/*.md`)

Commands are AI prompts, not executable code:

- Be precise with instructions — the AI follows them literally
- Reference JSON keys exactly as the setup script outputs them (e.g., `VMODEL_DIR`)
- Delegate deterministic tasks to scripts — never ask the AI to count or validate coverage
- Include examples of expected input/output
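For instance, a command prompt might reference setup-script output like the following; only the `VMODEL_DIR` key is attested in this guide, while the feature name and the second key are hypothetical:

```shell
# Hypothetical setup-script --json output that a command prompt would
# parse by exact key name. Only VMODEL_DIR is attested in this guide;
# the feature name and the FEATURE_BRANCH key are illustrative.
setup_json='{
  "VMODEL_DIR": "specs/001-example-feature/v-model",
  "FEATURE_BRANCH": "001-example-feature"
}'
printf '%s\n' "$setup_json"
```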
### Helper Scripts (`scripts/`)

Scripts handle all deterministic logic:

- Maintain parity between Bash and PowerShell — both must produce identical output
- Use the base-key matching pattern for ID cross-referencing (see `req_base_key()` / `atp_base_key()`)
- Output JSON when the `--json` flag is passed — match existing key names exactly
- Test with category prefixes — always verify with `REQ-NF-001`, `REQ-IF-001`, etc.
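As a sketch of the base-key idea (the shipped `req_base_key()` / `atp_base_key()` implementations may differ; this only illustrates the matching concept):

```shell
# Illustrative only: reduce IDs to a shared base key so that
# ATP-NF-001-A and REQ-NF-001 cross-reference each other.
req_base_key() {
  # REQ-NF-001 -> NF-001, REQ-001 -> 001
  printf '%s\n' "$1" | sed -E 's/^REQ-//'
}

atp_base_key() {
  # ATP-NF-001-A -> NF-001, ATP-001-A -> 001
  printf '%s\n' "$1" | sed -E 's/^ATP-//; s/-[A-Z]$//'
}
```

With helpers like these, an ATP covers a REQ exactly when their base keys match, category prefix included.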
### ID Schema

The four-tier ID schema is a core architectural decision. Any changes must preserve:

- Self-documenting lineage: `SCN-001-A1` → `ATP-001-A` → `REQ-001`
- Category prefix support: `REQ-NF-001`, `ATP-NF-001-A`, `SCN-NF-001-A1`
- Permanent IDs: never renumber — gaps are acceptable
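Illustrative format checks for three of the tiers shown above (assumed regexes, not the shipped validators; treating category prefixes as runs of uppercase letters is a guess here):

```shell
# Hypothetical ID-format checks matching the examples above.
is_req_id() { printf '%s' "$1" | grep -Eq '^REQ-([A-Z]+-)?[0-9]{3}$'; }
is_atp_id() { printf '%s' "$1" | grep -Eq '^ATP-([A-Z]+-)?[0-9]{3}-[A-Z]$'; }
is_scn_id() { printf '%s' "$1" | grep -Eq '^SCN-([A-Z]+-)?[0-9]{3}-[A-Z][0-9]+$'; }
```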
### Templates (`templates/`)
Keep templates minimal, consistent with the ID schema, and documented with HTML comments.
## Pull Request Process

1. Create a branch from `main`.
2. Make your changes — follow the guidelines above.
3. Test your changes — run the relevant test suites.
4. Commit with a descriptive message.
5. Push and open a Pull Request against `main`.
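A typical flow (the branch name and commit message are placeholders):

```shell
git checkout -b 007-coverage-check main
# ...edit, then run the relevant test suites...
git add -A
git commit -m "Add coverage validation for integration test plans"
git push origin 007-coverage-check
# then open the Pull Request against main on GitHub
```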
**What reviewers look for**
- All existing tests pass
- New code has corresponding tests
- Bash/PowerShell parity is maintained for script changes
- ID schema conventions are followed
- Documentation is updated if behavior changes
## Related Pages
- Code of Conduct — Community standards and expectations
- Security Policy — How to report vulnerabilities
- Changelog — Version history and release notes
- Roadmap — What's coming next
## License
By contributing, you agree that your contributions will be licensed under the MIT License.