Provide a JIRA epic or BRD — get a publication-ready Solution Design Document in hours, not weeks.
Works with any input
Capillary has 200+ endpoints across multiple versions. The SA must manually identify the right one for each requirement, set up credentials, craft a test, and confirm it actually works — before a single SDD line is written.
Every SA structures diagrams, tables, and decisions differently. Missing sections, incomplete decision records, or inconsistent data mappings mean extra review rounds and delayed handoffs to clients.
Before the first draft, the SA pieces together information from JIRA, BRD PDFs, Confluence pages, email threads, and colleague notes. There is no single source of truth — context is often incomplete or contradictory.
Many requirements can be satisfied through native Capillary platform configuration — no custom development needed. But without a structured framework, SAs spend extra cycles confirming what is configurable versus what needs a custom build.
The SDD Writer skill solves all four problems. It gathers full context automatically, applies a structured decision framework to every requirement, verifies APIs against live Capillary documentation, and enforces a consistent document structure — all without manual effort.
Core rule: Every requirement is assigned to the LOWEST viable tier — the simplest solution that meets the need. Moving to a higher tier always requires a written justification.
Paste a JIRA epic ID, share a BRD, or describe requirements in plain text. Add --dry-run to get a Solutioning Brief (research only) before committing to full generation.
Parallel agents read JIRA stories, search Confluence, crawl public docs, and fetch API schemas via MCP. Each use case gets a dedicated analysis agent. Every fact is tagged with a [CIT-xxx] citation.
Two mandatory SA checkpoints — after analysis and after draft — ensure the architect stays in control. Design gates pause for confirmation on tier escalations. Writing agents produce Layer 2+ process flows with field-level detail.
Self-verification against a 39-item checklist, then optional 12-dimension AI review scoring Document Quality + Developer Readiness. Save locally or publish to Confluence with explicit confirmation.
Session persistence: SA answers and feedback are saved to files that persist across regenerations. If you re-run the skill for the same brand, it pre-loads all prior answers — no question is ever asked twice. Progress tracking enables resume after context compaction.
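The "never ask twice" behaviour amounts to a persisted answer cache. A minimal sketch, assuming a JSON answers file (the file name and layout here are illustrative assumptions):

```python
# Minimal sketch of answer persistence across regenerations.
# The file name and JSON layout are assumptions for illustration.
import json
from pathlib import Path

ANSWERS_FILE = "sa-answers.json"  # hypothetical name

def ask(question: str, prompt_fn, session_dir: str = ".") -> str:
    """Return a saved answer if one exists; otherwise ask once and persist it."""
    path = Path(session_dir) / ANSWERS_FILE
    answers = json.loads(path.read_text()) if path.exists() else {}
    if question in answers:          # re-run: pre-load, never re-ask
        return answers[question]
    answers[question] = prompt_fn(question)
    path.write_text(json.dumps(answers, indent=2))
    return answers[question]
```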
Both formats — Full SDD and Lite SDD — maintain the same depth of detail for every solution: process flow diagrams, sequence diagrams, API specifications, and data field mappings. Lite SDD = same rigour, smaller scope.
With the --dry-run flag, the pipeline runs validate_api.py (Step 4b) and evaluates design gates (Step 4c); a dry run stops there and outputs a Solutioning Brief (Step 4d). A full run continues to produce a complete, publication-ready Markdown file with all sections, diagrams, API specs, data field mappings, architecture decision records, and non-functional requirements.
The skill never publishes automatically. Local Markdown is always the primary artefact.
MCP-only API fidelity — every Capillary API field name comes from MCP schema fetch. Zero hallucinated endpoints. Schema hash verification catches drift.
Citation-backed traceability — every claim tagged [CIT-xxx] tracing to BRD, JIRA, Confluence, MCP, public docs, or SA answers. Citation Index appendix in every SDD.
39-item self-verification — structure, APIs, citations, process flow depth, batch flows, critical data integrity, diagram routing, and developer readiness all checked before delivery.
12-dimension AI review — optional /sdd-review scores Document Quality (6D) + Developer Readiness (6D). Grade A–F with remediation checklist.
Critical data rule — infrastructure values (org IDs, storage accounts, Kafka topics, email addresses) are never invented. Unconfirmed items read [CONFIRM WITH CLIENT].
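The critical-data rule reduces to a simple lookup discipline: emit a confirmed value or a visible placeholder, never a guess. A sketch (the field names below are illustrative assumptions):

```python
# Illustrative sketch of the critical-data rule: unconfirmed infrastructure
# values become placeholders, never invented values. Field names are assumptions.
CONFIRM = "[CONFIRM WITH CLIENT]"

def infra_value(confirmed: dict, key: str) -> str:
    """Use a client-confirmed value if present; otherwise a visible placeholder."""
    return confirmed.get(key, CONFIRM)
```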
200+ Capillary endpoints searched automatically to find the exact API that fits — no manual documentation digging or trial-and-error.
Endpoints are verified against live Capillary documentation before being documented. Bad paths are flagged immediately — not discovered during client review.
Every SDD follows the same structure: all required sections, decision records, non-functional requirements, data mappings, and realistic API examples.
Full SDD for new integrations. Lite SDD for additive changes. The 5-tier Golden Path prevents over-engineering on every requirement.
Paste a JIRA epic, share a BRD, or describe requirements in plain text — the skill adapts to your input automatically.
New Full SDD: 1–1.5 weeks → hours. Change Request Lite SDD: 2–3 days → under an hour. From requirement to publication-ready Markdown.
8 citation types (BRD, JIRA, Confluence, MCP, Public Docs, SA, Feedback, Inferred). Every factual claim tagged [CIT-xxx]. Citation Index appendix in every SDD. Target: 80%+ coverage.
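A citation registry of this shape could assign the [CIT-xxx] tags and back the Citation Index appendix. The eight type codes mirror the list above, but the API itself is an assumption:

```python
# Sketch of a [CIT-xxx] registry; the class and method names are assumptions.
from itertools import count

SOURCE_TYPES = {"BRD", "JIRA", "CONFLUENCE", "MCP",
                "PUBLIC_DOCS", "SA", "FEEDBACK", "INFERRED"}

class CitationIndex:
    def __init__(self):
        self._seq = count(1)
        self.entries = {}          # tag -> (source_type, reference)

    def cite(self, source_type: str, reference: str) -> str:
        """Register a source and return its inline [CIT-xxx] tag."""
        assert source_type in SOURCE_TYPES, f"unknown source: {source_type}"
        tag = f"[CIT-{next(self._seq):03d}]"
        self.entries[tag] = (source_type, reference)
        return tag

    def coverage(self, claims_tagged: int, claims_total: int) -> float:
        """Fraction of factual claims carrying a citation; target is 0.80+."""
        return claims_tagged / claims_total
```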
One research agent per use case runs in parallel — discovers APIs via MCP, crawls public docs, identifies gaps, recommends tiers. Results compiled into Analysis Briefs for writing agents.
Two mandatory SA review checkpoints (post-analysis, post-draft). Design gates pause on tier escalations and ADR decisions. The SA stays in control of every critical architectural choice.
Separate /sdd-review skill scores 6 Document Quality + 6 Developer Readiness dimensions. Weighted scoring (40/60), A–F grading, actionable remediation checklist.
Progress file tracks pipeline status + serialized registries for context compaction resume. SA answers file persists across regenerations — questions answered once are never re-asked.
Absolute rule: zero Capillary API field names from built-in knowledge. Schema hash verification in Step 6 catches any drift between MCP fetch and SDD content.
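Schema hash verification can be sketched as hashing a canonical form of the schema at fetch time and again at document time, then comparing. The function names here are assumptions:

```python
# Hedged sketch of schema-hash drift detection: hash the MCP-fetched schema,
# re-hash what the SDD documents, compare. Function names are assumptions.
import hashlib
import json

def schema_hash(schema: dict) -> str:
    """Stable hash of an API schema (key order normalised via sort_keys)."""
    canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def check_drift(fetched: dict, documented: dict) -> bool:
    """True if the SDD content drifted from the MCP fetch."""
    return schema_hash(fetched) != schema_hash(documented)
```

Canonicalising before hashing means harmless re-ordering of keys is not flagged, while a single renamed field is.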
Every process flow step includes: actor + endpoint, request fields with source path, response fields with downstream usage, error branches with HTTP status + recovery action, and inline citations.
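The detail carried by one process-flow step maps naturally to a record like the following. All field names here are illustrative assumptions, not the skill's internal schema:

```python
# Illustrative data shape for one Layer 2+ process-flow step.
# All field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class FlowStep:
    actor: str                       # e.g. "Mobile App", "Loyalty Engine"
    endpoint: str                    # e.g. "POST /v2/customers"
    request_fields: dict[str, str]   # field -> source path in the upstream payload
    response_fields: dict[str, str]  # field -> where it is used downstream
    error_branches: dict[int, str]   # HTTP status -> recovery action
    citations: list[str] = field(default_factory=list)  # inline [CIT-xxx] tags
```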
--dry-run runs Steps 0–4 only, outputs a Solutioning Brief for SA review. Regeneration detects existing SDD + answers — only changed sections are rewritten.
Architecture and complex flowcharts route to Excalidraw for richer visuals. Mermaid fallback always included. Sequence and ER diagrams stay in Mermaid. Color-mapped to 6 semantic classes.
SA correction feedback is captured, categorized, and persisted in feedback-log.md. Active entries auto-apply as constraints in all future sessions — corrections compound, not repeat.
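Loading active feedback entries as standing constraints could look like the sketch below. The line format assumed for feedback-log.md is hypothetical:

```python
# Minimal sketch of turning active feedback entries into constraints.
# The feedback-log.md line format shown here is an assumption.
def active_constraints(feedback_log: str) -> list[str]:
    """Parse a feedback log; only ACTIVE entries become standing constraints."""
    constraints = []
    for line in feedback_log.splitlines():
        line = line.strip()
        if line.startswith("- [ACTIVE]"):
            constraints.append(line.removeprefix("- [ACTIVE]").strip())
    return constraints
```

Retired entries stay in the log for audit but stop constraining future sessions.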
After SDD approval, automatically generate team-specific Low-Level Design documents — Neo LLD, Connect+ LLD, Vulcan LLD, Custom Service LLD — with block chain specs, MongoDB schemas, and deployment contracts derived from the SDD.
Connect client or Capillary custom-logic repos so the skill understands existing Neo/Connect+ patterns and previously built integrations — avoiding re-documenting what is already built and surfacing implementation constraints early.
Auto-fire the skill when a JIRA epic transitions to In Progress or a custom SDD Required status — zero-touch initiation, SDD draft in Confluence before the SA's kickoff call even begins.
Auto-link related SDDs across clients and regions. When documenting a "mobile app enrolment" for a new brand, surface how 5 other brands solved the same flow — reuse patterns, avoid reinventing.