📐
Capillary Technologies · Solution Architecture

Capillary SDD Writer

Provide a JIRA epic or BRD — get a publication-ready Solution Design Document in hours, not weeks.

Multi-Agent Research · Citation-Backed Traceability · 5-Tier Golden Path · 12D Quality Review · SA Q&A Persistence

Works with any input

📋 JIRA Epic ID · 📄 BRD Document · 💬 Requirements Text
The SA Challenge

What Makes SDD Writing Hard

🔍

Too Many APIs to Navigate

Capillary has 200+ endpoints across multiple versions. The SA must manually identify the right one for each requirement, set up credentials, craft a test, and confirm it actually works — before a single SDD line is written.

200+
endpoints to navigate
3–5
manual steps to confirm each API
⏱️

Slow & Inconsistent

Every SA structures diagrams, tables, and decisions differently. Missing sections, incomplete decision records, or inconsistent data mappings mean extra review rounds and delayed handoffs to clients.

Change Request SDD
2–3 days
New Project Full SDD
1–1.5 weeks
📚

Scattered Context

Before the first draft, the SA pieces together information from JIRA, BRD PDFs, Confluence pages, email threads, and colleague notes. There is no single source of truth — context is often incomplete or contradictory.

4–6
sources consulted before the first draft
⚙️

Config vs Custom — Hard to Judge

Many requirements can be satisfied through native Capillary platform configuration — no custom development needed. But without a structured framework, SAs spend extra cycles confirming what is configurable versus what needs a custom build.

Adds 1–2 extra review cycles
before the right approach is confirmed
💡

The SDD Writer skill solves all four. It gathers full context automatically, applies a structured decision framework to every requirement, verifies APIs against live Capillary documentation, and enforces a consistent document structure — all without manual effort.

Solution Strategy

The Golden Path: 5-Tier Decision Framework

Core rule: Every requirement is assigned to the LOWEST viable tier — the simplest solution that meets the need. Moving to a higher tier always requires a written justification.

Tier 1
Platform Configuration
"Just configure it"
No code or custom development needed. The requirement is met through Capillary's built-in platform settings. Handled by the Capillary Config Team. Examples: loyalty tier rules, earn & burn settings, standard campaigns, push notification templates.
Config Team
Tier 2
Direct Integration
"Connect your systems"
Your client system calls Capillary directly using standard, real-time API endpoints. The skill automatically discovers and verifies the correct endpoint version. Examples: register a customer, record a transaction, check redeemable points.
Real-time API
Tier 3
Smart Workflows
"Custom logic, no heavy dev"
Low-code orchestration for requirements that need custom logic without full development. Merges data from multiple APIs, adds business rules. Not suitable for: loops over large data sets, bulk processing, or background jobs.
Low-Code (Neo)
Tier 4
Batch & Events
"Scheduled & event-driven"
Automated background processing for file imports, scheduled syncs, and event-triggered flows — where the client does not need an instant response. Examples: nightly SFTP member sync, tier-upgrade event processing, CSV import pipelines.
Async (Connect+)
Tier 5
Custom Build
"Bespoke solution"
Full custom development on AWS — only when Tiers 1–4 cannot meet the requirement due to confirmed technical blockers. Every Tier 5 decision requires a documented justification explaining why lower tiers were insufficient.
ADR Required
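The core rule can be sketched in a few lines — a hypothetical helper, not the skill's actual implementation, assuming each requirement arrives with the set of tiers that could satisfy it:

```python
# Hypothetical sketch: assign every requirement the LOWEST viable tier.
TIERS = {
    1: "Platform Configuration",
    2: "Direct Integration",
    3: "Smart Workflows",
    4: "Batch & Events",
    5: "Custom Build",
}

def assign_tier(viable_tiers, justification=None):
    """viable_tiers: set of tier numbers that could meet the requirement."""
    tier = min(viable_tiers)  # the simplest solution that meets the need
    if tier == 5 and justification is None:
        # Tier 5 always needs a documented ADR explaining why 1-4 failed
        raise ValueError("Tier 5 requires a written justification (ADR)")
    return tier, TIERS[tier]

# Config (Tier 1) and custom build (Tier 5) would both work → config wins
print(assign_tier({1, 5}))  # → (1, 'Platform Configuration')
```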
How It Works

From Requirement to Publication-Ready SDD

1

You Provide

Paste a JIRA epic ID, share a BRD, or describe requirements in plain text. Add --dry-run to get a Solutioning Brief (research only) before committing to full generation.

2

Multi-Agent Research

Parallel agents read JIRA stories, search Confluence, crawl public docs, and fetch API schemas via MCP. Each use case gets a dedicated analysis agent. Every fact is tagged with a [CIT-xxx] citation.

3

SA Checkpoints & Writing

Two mandatory SA checkpoints — after analysis and after draft — ensure the architect stays in control. Design gates pause for confirmation on tier escalations. Writing agents produce Layer 2+ process flows with field-level detail.

4

Verify, Deliver & Review

Self-verification against a 39-item checklist, then optional 12-dimension AI review scoring Document Quality + Developer Readiness. Save locally or publish to Confluence with explicit confirmation.

💬

Session persistence: SA answers and feedback are saved to files that persist across regenerations. If you re-run the skill for the same brand, it pre-loads all prior answers — no question is ever asked twice. Progress tracking enables resume after context compaction.
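The persistence behaviour amounts to a question cache — the filename and file format below are illustrative assumptions, not the skill's actual files:

```python
import json
import os

ANSWERS_FILE = "sa-answers.json"  # hypothetical filename, for illustration

def load_answers():
    """Pre-load every answer from prior runs for the same brand."""
    if os.path.exists(ANSWERS_FILE):
        with open(ANSWERS_FILE) as f:
            return json.load(f)
    return {}

def ask(question, answers):
    """Return a cached answer if it exists; otherwise ask once and persist."""
    if question not in answers:
        answers[question] = input(question + " ")
        with open(ANSWERS_FILE, "w") as f:
            json.dump(answers, f, indent=2)  # survives regenerations
    return answers[question]
```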

Right-Sized Documentation

Full vs Lite SDD — Scale-Based Decision

📋

Full SDD

Use when any of these apply
  • New brand or client onboarding
  • New system integration being built
  • More than 3 new process flows
  • New batch processing pipeline
  • Any Tier 5 custom development
  • Multi-phase project
  • New loyalty program architecture
Output — 11 sections
vs
📄

Lite SDD

Use only when all of these are true
  • An existing Full SDD covers the base integration
  • The change is additive or a correction only
  • 3 or fewer new flows
  • No new external systems being integrated
Output — 7 sections
💡

Both formats maintain the same depth of detail for every solution — process flow diagrams, sequence diagrams, API specifications, and data field mappings. Lite SDD = same rigour, smaller scope.
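The decision rule — Full if any trigger applies, Lite only when every condition holds — can be sketched as follows (the flag names are illustrative, not the skill's actual parameters):

```python
# Illustrative flags only: Full SDD if ANY trigger applies,
# Lite SDD only when ALL of its conditions hold.
def choose_format(*, new_brand=False, new_system=False, new_flows=0,
                  new_batch_pipeline=False, tier5=False, multi_phase=False,
                  new_loyalty_architecture=False,
                  base_sdd_exists=False, additive_only=False):
    full_triggers = (new_brand or new_system or new_flows > 3
                     or new_batch_pipeline or tier5 or multi_phase
                     or new_loyalty_architecture)
    if not full_triggers and base_sdd_exists and additive_only and new_flows <= 3:
        return "Lite SDD"   # 7 sections, same rigour
    return "Full SDD"       # 11 sections

print(choose_format(base_sdd_exists=True, additive_only=True, new_flows=2))
# → Lite SDD
```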

Under the Hood

10-Step Generation Pipeline

PRE-FLIGHT
Steps A – D
Resolve skill directory, detect resume from progress file, detect SDD regeneration, parse --dry-run flag
Progress File · SA Answers
STEP 0
MCP & Registries
Health-check 3 MCP servers, enforce MCP-only API rule, load feedback & SA answers, init progress tracker, check Excalidraw availability
Atlassian MCP · Capillary Docs MCP · Mermaid MCP
STEPS 1 – 2
Research & Discovery
Parse input (JIRA / BRD / URL / text), SA interview (1c), infrastructure collection (1e), search Confluence, fetch API schemas, crawl public docs (2b), dispatch parallel analysis agents per use case (2c)
JIRA Epic · BRD · Public Docs · Analysis Agents
CHECKPOINT 1 — SA Reviews Tiers & Gaps
STEPS 3 – 4
Decide & Validate
Full vs Lite SDD decision (3), map every requirement to lowest viable tier via Golden Path (4), validate endpoints with validate_api.py (4b), evaluate design gates (4c). Dry-run stops here → Solutioning Brief (4d)
Golden Path · API Validator · Design Gates
STEPS 5 – 6
Write & Verify
Check mandatory field sources (5a), dispatch writing subagents per use case with Opus (5b), consolidate API Reference (5c), generate & validate diagrams (Mermaid / Excalidraw), self-verify against 39-item checklist (6)
Writing Agents · Section Template · 39-Item Checklist
CHECKPOINT 2 — SA Reviews Draft & Citations
STEPS 7 – 9
Deliver & Review
Assemble & write final SDD file (7), optional 12D developer readiness review (7b), optional Confluence publish with explicit confirmation (8), output summary with metrics (9)
SDD Review · Confluence MCP
41
sub-steps
3
MCP servers
5
registries
2
SA checkpoints
8
citation types
Output & Delivery

What You Get

Primary Output
output-sdd/{Brand}-SDD-{date}.md

A complete, publication-ready Markdown file with all sections, diagrams, API specs, data field mappings, architecture decision records, and non-functional requirements.

Optional: Publish to Confluence
💾 Saved locally → 💬 Path proposed (SA › Region › Brand) → 🔐 You confirm ("confirm") → Published

The skill never publishes automatically. Local Markdown is always the primary artefact.

Quality Built In — Every Time

MCP-only API fidelity — every Capillary API field name comes from MCP schema fetch. Zero hallucinated endpoints. Schema hash verification catches drift.

Citation-backed traceability — every claim tagged [CIT-xxx] tracing to BRD, JIRA, Confluence, MCP, public docs, or SA answers. Citation Index appendix in every SDD.

39-item self-verification — structure, APIs, citations, process flow depth, batch flows, critical data integrity, diagram routing, and developer readiness all checked before delivery.

12-dimension AI review — optional /sdd-review scores Document Quality (6D) + Developer Readiness (6D). Grade A–F with remediation checklist.

Critical data rule — infrastructure values (org IDs, storage accounts, Kafka topics, email addresses) are never invented. Unconfirmed items read [CONFIRM WITH CLIENT].
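The critical data rule lends itself to a simple check — a hypothetical scan, not the skill's actual verification code, that surfaces every value still awaiting client confirmation (the sample SDD lines and org ID are made up):

```python
PLACEHOLDER = "[CONFIRM WITH CLIENT]"

def unconfirmed_items(sdd_text):
    """Return every line still carrying an unverified infrastructure value."""
    return [line.strip() for line in sdd_text.splitlines()
            if PLACEHOLDER in line]

sdd = """Org ID: 100842 [CIT-003]
Kafka topic: [CONFIRM WITH CLIENT]
SFTP host: [CONFIRM WITH CLIENT]"""

print(unconfirmed_items(sdd))
# → ['Kafka topic: [CONFIRM WITH CLIENT]', 'SFTP host: [CONFIRM WITH CLIENT]']
```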

Impact

Why This Matters for Solution Architects

🔍

API Discovery Automated

200+ Capillary endpoints searched automatically to find the exact API that fits — no manual documentation digging or trial-and-error.

APIs Confirmed Before You Write

Endpoints are verified against live Capillary documentation before being documented. Bad paths are flagged immediately — not discovered during client review.

📐

Consistent Every Time

Every SDD follows the same structure: all required sections, decision records, non-functional requirements, data mappings, and realistic API examples.

🎯

Right Architecture, Right Size

Full SDD for new integrations. Lite SDD for additive changes. The 5-tier Golden Path prevents over-engineering on every requirement.

🤖

Works How You Work

Paste a JIRA epic, share a BRD, or describe requirements in plain text — the skill adapts to your input automatically.

🚀

Hours, Not Weeks

New Full SDD: 1–1.5 weeks → hours. Change Request Lite SDD: 2–3 days → under an hour. From requirement to publication-ready Markdown.

/capillary-sdd-writer <EPIC-ID-OR-BRD>
Companion Skills
/sdd-review  12-dimension quality & developer readiness scoring
/lld  Team-specific Low-Level Design from SDD contracts
/excalidraw-diagram  Rich architecture & data flow visuals
Built on Claude Opus 4.6 · MCP · Capillary Docs API  ·  Skill for Claude Code & claude.ai
v2 Enhancements — Delivered

What's New in v2

🔗 Citation-Backed Traceability

8 citation types (BRD, JIRA, Confluence, MCP, Public Docs, SA, Feedback, Inferred). Every factual claim tagged [CIT-xxx]. Citation Index appendix in every SDD. Target: 80%+ coverage.
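Coverage against the 80% target boils down to a tag count — the regex and the sample claims below are illustrative, not taken from a real SDD:

```python
import re

CITATION = re.compile(r"\[CIT-\d+\]")  # e.g. [CIT-042]

def citation_coverage(claims):
    """Share of factual claims carrying a [CIT-xxx] tag (target: 80%+)."""
    tagged = sum(1 for claim in claims if CITATION.search(claim))
    return tagged / len(claims)

claims = [
    "Customer registration uses the v2 endpoint [CIT-001]",
    "Points expire after 12 months [CIT-007]",
    "Tier upgrades are event-driven [CIT-012]",
    "The nightly sync runs over SFTP",
]
print(f"{citation_coverage(claims):.0%}")  # → 75%
```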

🤖 Multi-Agent Use Case Analysis

One research agent per use case runs in parallel — discovers APIs via MCP, crawls public docs, identifies gaps, recommends tiers. Results compiled into Analysis Briefs for writing agents.

🚦 SA Design Gates & Checkpoints

Two mandatory SA review checkpoints (post-analysis, post-draft). Design gates pause on tier escalations and ADR decisions. The SA stays in control of every critical architectural choice.

📊 12-Dimension AI Review

Separate /sdd-review skill scores 6 Document Quality + 6 Developer Readiness dimensions. Weighted scoring (40/60), A–F grading, actionable remediation checklist.
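Assuming Document Quality carries 40% of the weight and Developer Readiness 60%, the 40/60 scoring could look like the sketch below (the grade cutoffs are illustrative assumptions, not the skill's actual bands):

```python
# Assumed: Document Quality at 40%, Developer Readiness at 60%.
def overall_score(doc_quality, dev_readiness):
    """Each argument: six dimension scores on a 0-100 scale."""
    dq = sum(doc_quality) / len(doc_quality)
    dr = sum(dev_readiness) / len(dev_readiness)
    return 0.4 * dq + 0.6 * dr

def grade(score):
    # Illustrative A-F bands
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

score = overall_score([90] * 6, [80] * 6)  # 0.4*90 + 0.6*80
print(round(score, 1), grade(score))       # → 84.0 B
```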

💾 Progress & SA Q&A Persistence

Progress file tracks pipeline status + serialized registries for context compaction resume. SA answers file persists across regenerations — questions answered once are never re-asked.

🔒 MCP-Only API Fidelity

Absolute rule: zero Capillary API field names from built-in knowledge. Schema hash verification in Step 6 catches any drift between MCP fetch and SDD content.
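Drift detection of this kind is typically a canonical-form hash comparison — a sketch of the technique, not the skill's actual check, with a made-up endpoint path as sample data:

```python
import hashlib
import json

def schema_hash(schema):
    """Stable hash of an API schema: sorted keys → canonical JSON → SHA-256."""
    canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

fetched = {"path": "/v2/customers", "fields": ["mobile", "email"]}
documented = {"fields": ["mobile", "email"], "path": "/v2/customers"}

# Same content in a different key order hashes identically; any real
# drift between the MCP fetch and the SDD content changes the hash.
assert schema_hash(fetched) == schema_hash(documented)
```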

📝 Layer 2+ Process Flows

Every process flow step includes: actor + endpoint, request fields with source path, response fields with downstream usage, error branches with HTTP status + recovery action, and inline citations.

🧪 Dry-Run & Diff-Based Regen

--dry-run runs Steps 0–4 only, outputs a Solutioning Brief for SA review. Regeneration detects existing SDD + answers — only changed sections are rewritten.

🎨 Excalidraw Diagram Routing

Architecture and complex flowcharts route to Excalidraw for richer visuals. Mermaid fallback always included. Sequence and ER diagrams stay in Mermaid. Color-mapped to 6 semantic classes.

📂 Feedback Loop System

SA correction feedback is captured, categorized, and persisted in feedback-log.md. Active entries auto-apply as constraints in all future sessions — corrections compound rather than repeat.

What's Next

Roadmap: Beyond v2

1
SDD-to-LLD Pipeline · In Progress

After SDD approval, automatically generate team-specific Low-Level Design documents — Neo LLD, Connect+ LLD, Vulcan LLD, Custom Service LLD — with block chain specs, MongoDB schemas, and deployment contracts derived from the SDD.

2
Git Repo Integration

Connect client or Capillary custom-logic repos so the skill understands existing Neo/Connect+ patterns and previously built integrations — avoiding re-documenting what is already built and surfacing implementation constraints early.

3
JIRA Lifecycle Triggers

Auto-fire the skill when a JIRA epic transitions to In Progress or a custom SDD Required status — zero-touch initiation, SDD draft in Confluence before the SA's kickoff call even begins.

4
Cross-Client Pattern Library

Auto-link related SDDs across clients and regions. When documenting a "mobile app enrolment" for a new brand, surface how 5 other brands solved the same flow — reuse patterns, avoid reinventing.

/capillary-sdd-writer <EPIC-ID-OR-BRD>