Overview

The Writing Plans skill creates detailed implementation plans that assume the engineer has zero context for your codebase. Every step includes exact file paths, complete code, and expected output.
Plans are written for skilled developers who know almost nothing about your toolset or problem domain.

When to Use

Use when you have a spec or requirements for a multi-step task, before touching code. Typical workflow:
  1. Brainstorming creates the design
  2. Writing Plans creates the implementation plan
  3. Executing Plans or Subagent-Driven Development implements it

Key Principles

Bite-Sized Task Granularity

Each step is one action (2-5 minutes):
  • “Write the failing test” (one step)
  • “Run it to make sure it fails” (one step)
  • “Implement the minimal code to make the test pass” (one step)
  • “Run the tests and make sure they pass” (one step)
  • “Commit” (one step)
Don’t write “Add validation” - write the exact validation code. Don’t write “Fix the bug” - write the exact fix.
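
For instance, a plan step for “Add validation” should contain the validation code itself. A hypothetical example (the `validate_username` helper and its rules are invented for illustration, not part of any real plan):

```python
import re

def validate_username(username: str) -> None:
    """Raise ValueError unless username is 3-20 chars of [a-z0-9_]."""
    if not re.fullmatch(r"[a-z0-9_]{3,20}", username):
        raise ValueError(f"invalid username: {username!r}")

validate_username("alice_42")  # valid: no exception raised
```

An executor with zero context can paste this verbatim; “add validation” would force them to guess the rules.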

Complete Information

Every task includes:
  • Exact file paths (not “the config file”)
  • Complete code (not “add error handling”)
  • Exact commands with expected output
  • References to relevant skills with @ syntax

Core Values

  • DRY (Don’t Repeat Yourself)
  • YAGNI (You Aren’t Gonna Need It)
  • TDD (Test-Driven Development)
  • Frequent commits

Plan Structure

Header (Required)

Every plan MUST start with this header:

```markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]
```

Example:

```markdown
# Metrics Tracking Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add usage metrics tracking with export to JSON and CSV formats.

**Architecture:** Three-layer architecture with CLI, Service, and Storage layers. Data flows downward only. SQLite for persistence.

**Tech Stack:** Python 3.11, Click for CLI, SQLite3, pytest
```
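
As a sketch only, the three-layer architecture named in this example header might take the following shape; the class and method names are illustrative assumptions, not part of the plan format:

```python
import sqlite3

class Storage:
    """Storage layer: owns the SQLite connection."""
    def __init__(self, path: str = ":memory:") -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS events (name TEXT)")

    def insert(self, name: str) -> None:
        self.conn.execute("INSERT INTO events VALUES (?)", (name,))

    def count(self) -> int:
        return self.conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]

class Service:
    """Service layer: business logic; calls down into Storage only."""
    def __init__(self, storage: Storage) -> None:
        self.storage = storage

    def record(self, name: str) -> None:
        self.storage.insert(name)

# The CLI layer (Click) would call Service the same way:
# data flows downward only, never back up.
service = Service(Storage())
service.record("user_login")
```

Spelling out even a rough shape like this in the plan's Architecture section saves the executor from inventing their own layering.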

Task Structure

Each task follows this template:
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```

Complete Example

### Task 1: Metrics Collector

**Files:**
- Create: `src/metrics/collector.py`
- Test: `tests/metrics/test_collector.py`

**Step 1: Write the failing test**

Create `tests/metrics/test_collector.py`:

```python
import pytest
from metrics.collector import MetricsCollector

def test_track_event():
    collector = MetricsCollector()
    collector.track('user_login', {'user_id': 123})
    
    events = collector.get_events()
    assert len(events) == 1
    assert events[0]['event'] == 'user_login'
    assert events[0]['data']['user_id'] == 123
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/metrics/test_collector.py -v`
Expected: FAIL with "No module named 'metrics.collector'"

**Step 3: Write minimal implementation**

Create `src/metrics/collector.py`:

```python
class MetricsCollector:
    def __init__(self):
        self._events = []
    
    def track(self, event: str, data: dict) -> None:
        self._events.append({
            'event': event,
            'data': data
        })
    
    def get_events(self) -> list:
        return self._events
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/metrics/test_collector.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add src/metrics/collector.py tests/metrics/test_collector.py
git commit -m "feat: add MetricsCollector with track method"
```
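
A follow-up task in this plan would add the JSON export from the Goal in the same TDD rhythm. The `export_json` method below is a hypothetical sketch of that next step, not prescribed by the plan format:

```python
import json

class MetricsCollector:
    def __init__(self):
        self._events = []

    def track(self, event: str, data: dict) -> None:
        self._events.append({'event': event, 'data': data})

    def export_json(self) -> str:
        """Serialize all tracked events to a JSON string."""
        return json.dumps(self._events)

collector = MetricsCollector()
collector.track('user_login', {'user_id': 123})
print(collector.export_json())
# → [{"event": "user_login", "data": {"user_id": 123}}]
```

Its task would repeat Steps 1-5 exactly: failing test, run, implement, run, commit.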

File Location

Save plans to: `docs/plans/YYYY-MM-DD-<feature-name>.md`

```bash
# Create directory if needed
mkdir -p docs/plans

# Save plan (paste the content, then press Ctrl-D; or use a heredoc)
cat > docs/plans/2026-03-09-metrics-tracking.md
```

Execution Handoff

After saving the plan, offer an execution choice:

1. **Plan Complete**

   “Plan complete and saved to docs/plans/<filename>.md. Two execution options:”

2. **Option 1: Subagent-Driven (this session)**

   • Stay in current session
   • Dispatch fresh subagent per task
   • Review between tasks
   • Fast iteration

   If chosen: Use superpowers:subagent-driven-development

3. **Option 2: Parallel Session (separate)**

   • Open new session with executing-plans
   • Batch execution with checkpoints
   • No context switch back to this session

   If chosen: Guide them to open a new session, which uses superpowers:executing-plans

Best Practices

Always Include

Exact Paths

`src/metrics/collector.py`
Not: “the collector file”

Complete Code

Full function implementation
Not: “add error handling”

Exact Commands

`pytest tests/metrics/test_collector.py::test_track -v`
Not: “run the tests”

Expected Output

`Expected: PASS`
Not: “make sure it works”

Reference Skills

Use @ syntax to reference relevant skills:

```markdown
**Step 1: Write failing test**

Follow @superpowers:test-driven-development
- Write test first
- Watch it fail
- Then implement
```

Common Mistakes

Don’t:
  • Write “Add validation” (write the exact validation code)
  • Write “Fix the bug” (write the exact fix)
  • Write “Run tests” (specify exact command and file)
  • Write “Commit changes” (specify exact files to commit)
  • Assume they know where files are
  • Assume they know what libraries to use

Checklist

Before considering plan complete:
  • Header includes REQUIRED SUB-SKILL line
  • Goal stated in one sentence
  • Architecture approach described (2-3 sentences)
  • Tech stack listed
  • Each task has exact file paths
  • Each step includes complete code (not “add X”)
  • Each command is exact with expected output
  • Steps are bite-sized (2-5 minutes each)
  • TDD cycle followed (write test, run, implement, run, commit)
  • References to relevant skills included
  • Saved to docs/plans/YYYY-MM-DD-<feature>.md
A good plan can be executed by someone with zero context about your project. If they’d need to ask clarifying questions, add more detail.