
Compare commits


21 Commits

SHA1 Message Date
48c6154fab Update environment 2026-02-02 09:08:37 +08:00
3174f306bb Optimize table structure and add table data 2026-01-30 17:57:12 +08:00
b90d030899 Try loading data 2026-01-29 18:10:40 +08:00
9f06ebf87d Finish drawing the MACD chart 2026-01-29 10:41:53 +08:00
d4db4f3021 Remove unused dependencies 2026-01-29 10:32:42 +08:00
579298d16e Tune parameters for dynamic chart drawing 2026-01-28 17:49:44 +08:00
735f8858ba Draw dynamic charts manually 2026-01-28 17:10:45 +08:00
fafbb6a1a9 Implement basic charts with indicator overlays 2026-01-28 15:07:28 +08:00
0db4155a3c Refactor backtest architecture and add batch backtesting 2026-01-28 14:10:14 +08:00
173a566f8b Improve context information 2026-01-28 13:59:52 +08:00
e8ed2ddfe5 Test buy/sell information 2026-01-28 10:55:11 +08:00
5afb8ddcd1 Fix sell logic; disallow short selling 2026-01-28 10:30:28 +08:00
64bfd031b3 Fix warm-up days implementation 2026-01-28 10:08:27 +08:00
a2a261769b Improve code style 2026-01-28 09:54:38 +08:00
5cc140259e Fix code issues 2026-01-28 09:46:44 +08:00
9a46bd7e4c Finish the MACD strategy 2026-01-28 00:10:43 +08:00
407b70bd0e Improve command-line output 2026-01-27 22:25:22 +08:00
d8159af1d2 Change chart colors to red-up/green-down 2026-01-27 22:22:13 +08:00
4e4bb1ab6e Fix backtest image generation 2026-01-27 21:51:48 +08:00
5c4a70d7f0 Finish the backtest script 2026-01-27 18:30:41 +08:00
53e72e2f84 Add openspec support 2026-01-27 18:16:18 +08:00
86 changed files with 17004 additions and 964 deletions

.gitignore vendored

@@ -139,3 +139,4 @@ dmypy.json
.pyre/
.pytype/
cython_debug/
output

.idea/dataSources.xml generated

@@ -8,5 +8,12 @@
      <jdbc-url>jdbc:postgresql://81.71.3.24:6785/leopard_dev</jdbc-url>
      <working-dir>$ProjectFileDir$</working-dir>
    </data-source>
    <data-source source="LOCAL" name="leopard.sqlite" uuid="c9e16f8e-81be-45cf-847c-47a6750eeee2">
      <driver-ref>sqlite.xerial</driver-ref>
      <synchronize>true</synchronize>
      <jdbc-driver>org.sqlite.JDBC</jdbc-driver>
      <jdbc-url>jdbc:sqlite:$USER_HOME$/Documents/leopard_data/leopard.sqlite</jdbc-url>
      <working-dir>$ProjectFileDir$</working-dir>
    </data-source>
  </component>
</project>

.idea/data_source_mapping.xml generated Normal file

@@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="DataSourcePerFileMappings">
    <file url="file://$APPLICATION_CONFIG_DIR$/consoles/db/bd7b5f2a-eb99-4aad-81ec-1fec76b3d7fc/console.sql" value="bd7b5f2a-eb99-4aad-81ec-1fec76b3d7fc" />
    <file url="file://$APPLICATION_CONFIG_DIR$/consoles/db/c9e16f8e-81be-45cf-847c-47a6750eeee2/console.sql" value="c9e16f8e-81be-45cf-847c-47a6750eeee2" />
  </component>
</project>

.idea/db-forest-config.xml generated Normal file

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="db-tree-configuration">
    <option name="data" value="" />
  </component>
</project>

.idea/markdown.xml generated Normal file

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="MarkdownSettings">
    <option name="splitLayout" value="SHOW_PREVIEW" />
  </component>
</project>

.idea/sqldialects.xml generated

@@ -1,6 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="SqlDialectMappings">
    <file url="file://$PROJECT_DIR$/sql/initial.sql" dialect="SQLite" />
    <file url="PROJECT" dialect="PostgreSQL" />
  </component>
</project>


@@ -0,0 +1,157 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (these vary by schema - could be proposal/specs/design/tasks, or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- **tdd**: spec, tests, implementation, docs
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
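The checkbox flip described above can be done mechanically. A minimal sketch using GNU sed against a throwaway sample file (the real file would be the change's tasks file under `openspec/changes/<name>/`):

```shell
# Create a throwaway sample tasks file (contents fabricated for illustration)
printf '%s\n' '- [x] Task 1' '- [ ] Task 2' '- [ ] Task 3' > /tmp/tasks.md

# GNU sed: the 0,/regex/ address limits the substitution to the first match,
# so only the first pending task is flipped to done
sed -i '0,/^- \[ \]/s//- [x]/' /tmp/tasks.md

cat /tmp/tasks.md
```

The `0,/regex/` address form is a GNU extension; on BSD/macOS sed the in-place flag and addressing differ.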
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
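The counting above reduces to two `grep -c` calls. A sketch against a throwaway sample file (the real file lives at `openspec/changes/<name>/tasks.md`; task names fabricated):

```shell
# Sample tasks file for illustration
printf '%s\n' '- [x] Set up schema' '- [ ] Load data' '- [ ] Draw chart' > /tmp/tasks.md

# Count complete vs incomplete checkboxes
complete=$(grep -c '^- \[x\]' /tmp/tasks.md)
incomplete=$(grep -c '^- \[ \]' /tmp/tasks.md)
echo "$complete complete, $incomplete incomplete"
# prints: 1 complete, 2 incomplete
```

Note that `grep -c` exits non-zero when the count is 0, which matters under `set -e`.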
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
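The naming, collision check, and move above can be sketched end to end. This runs in a scratch directory with a fabricated change named `my-change` (a placeholder, not a real change):

```shell
# Work in a scratch tree so nothing real is touched
cd "$(mktemp -d)"
name=my-change
mkdir -p "openspec/changes/$name"

# Target name: YYYY-MM-DD-<change-name>
target="openspec/changes/archive/$(date +%Y-%m-%d)-$name"

if [ -e "$target" ]; then
  # Fail rather than overwrite an existing archive
  echo "error: archive target already exists: $target" >&2
else
  mkdir -p openspec/changes/archive
  mv "openspec/changes/$name" "$target"
fi

ls openspec/changes/archive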
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,246 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
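The requirement-name extraction in step 3c is a one-line sed filter. The sample spec content here is fabricated for illustration; real delta specs sit under `openspec/changes/<name>/specs/`:

```shell
# Sample delta spec with two requirement headings
printf '### Requirement: OAuth Provider Integration\n\nSome text.\n\n### Requirement: Token Refresh\n' > /tmp/spec.md

# Print only the names, stripping the heading prefix
sed -n 's/^### Requirement: //p' /tmp/spec.md
# prints:
# OAuth Provider Integration
# Token Refresh
```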
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
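The detection above amounts to grouping delta-spec capability directories by how many changes contain them. A sketch over a fabricated changes tree (all change and capability names illustrative):

```shell
# Build a fake changes tree: two changes touch "auth", one touches "api"
cd "$(mktemp -d)"
mkdir -p openspec/changes/change-a/specs/auth \
         openspec/changes/change-b/specs/auth \
         openspec/changes/change-c/specs/api

# Capabilities touched by 2+ changes are conflict candidates
for d in openspec/changes/*/specs/*/; do basename "$d"; done \
  | sort | uniq -c | awk '$1 >= 2 { print $2 }'
# prints: auth
```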
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,124 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven", "tdd")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/*.md**: Create one spec per capability listed in the proposal.
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
**tdd schema** (spec → tests → implementation → docs):
- **spec.md**: Feature specification defining what to build.
- **tests/*.test.ts**: Write tests BEFORE implementation (TDD red phase).
- **src/*.ts**: Implement to make tests pass (TDD green phase).
- **docs/*.md**: Document the implemented feature.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,290 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name>
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,101 @@
---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
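The mechanical part of the name derivation can be sketched as follows (a hedged sketch: `to_kebab` is a hypothetical helper, and shortening words such as "authentication" to "auth" remains your judgment call, not something this function does):

```bash
# Hypothetical helper: lowercase, collapse non-alphanumeric runs to hyphens,
# then trim leading/trailing hyphens.
to_kebab() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

to_kebab "Add User Authentication!"   # prints: add-user-authentication
```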
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
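The fields can be pulled out with `jq`, for example (a sketch against a fabricated sample payload; the field names follow the shape described above and are not verified CLI output):

```bash
# Fabricated sample matching the shape described above.
status_json='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"pending"}]}'

# Artifact IDs required before implementation:
jq -r '.applyRequires[]' <<<"$status_json"

# Each artifact with its status:
jq -r '.artifacts[] | "\(.id): \(.status)"' <<<"$status_json"
```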
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
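The stop condition can be checked with a single `jq` expression (a sketch; the sample JSON is fabricated to match the shape above):

```bash
# Fabricated status payload: both required artifacts are done.
status_json='{"applyRequires":["specs","tasks"],"artifacts":[{"id":"specs","status":"done"},{"id":"tasks","status":"done"},{"id":"design","status":"pending"}]}'

# true only when every applyRequires artifact has status "done".
jq '[.applyRequires[] as $id
     | .artifacts[] | select(.id == $id) | .status == "done"]
    | all' <<<"$status_json"    # prints: true
```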
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next

View File

@@ -0,0 +1,75 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- "tdd" or "test-driven" → use `--schema tdd`
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven, `spec` for tdd).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass `--schema` if using a non-default workflow

View File

@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
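Grep patterns along these lines cover most of the categories (a sketch against a throwaway sample file; the globs, paths, and patterns are illustrative and should be tuned to the project):

```bash
# Throwaway sample standing in for a real source tree.
workdir=$(mktemp -d)
printf '// TODO: handle errors\nconst n: any = 1;\nconsole.log(n);\n' > "$workdir/sample.ts"

grep -rnE 'TODO|FIXME|HACK|XXX' "$workdir"              # debt markers
grep -rnE ':[[:space:]]*any\b|as any\b' "$workdir"      # loose TypeScript types
grep -rnE 'console\.(log|debug)|debugger' "$workdir"    # debug artifacts
```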
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/archive/YYYY-MM-DD--<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/archive/YYYY-MM-DD--<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

View File

@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent: running it twice should give the same result

View File

@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
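The checkbox counting is mechanical (a sketch; the sample file is fabricated in the tasks.md checkbox format above):

```bash
# Fabricated tasks.md with two complete tasks out of three.
tasks_file=$(mktemp)
printf -- '- [x] 1.1 Add handler\n- [ ] 1.2 Add test\n- [x] 2.1 Verify\n' > "$tasks_file"

total=$(grep -cE '^[[:space:]]*- \[[ x]\]' "$tasks_file")
done_count=$(grep -cE '^[[:space:]]*- \[x\]' "$tasks_file")
echo "$done_count/$total tasks complete"    # prints: 2/3 tasks complete
```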
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension    | Status            |
|--------------|-------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness  | M/N reqs covered  |
| Coherence    | Followed/Issues   |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

View File

@@ -0,0 +1,146 @@
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- **tdd**: spec, tests, implementation, docs
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
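Flipping a checkbox can be done with a targeted `sed` (a sketch; assumes GNU `sed -i`, and the task file content is fabricated):

```bash
# Fabricated tasks file with two pending tasks.
tasks_file=$(mktemp)
printf -- '- [ ] 1.1 Add handler\n- [ ] 1.2 Add test\n' > "$tasks_file"

# Check off task 1.1 only; the numbered anchor leaves other tasks untouched.
sed -i 's/^- \[ \] 1\.1/- [x] 1.1/' "$tasks_file"
cat "$tasks_file"
```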
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,150 @@
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
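The counts can be taken with `grep -c`; a minimal sketch against a hypothetical tasks file:

```shell
# Hypothetical tasks file for illustration
cat > /tmp/archive-tasks.md <<'EOF'
- [x] Set up project
- [ ] Add tests
- [ ] Write docs
EOF

incomplete=$(grep -c '^- \[ \]' /tmp/archive-tasks.md)
complete=$(grep -c '^- \[x\]' /tmp/archive-tasks.md)
echo "$complete complete, $incomplete incomplete"
```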
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If the user chooses to sync, execute the `/opsx:sync` logic. Proceed to the archive step regardless of their choice.
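One quick way to see what a delta spec would change is a plain `diff` against the main spec; a sketch with hypothetical file contents for a capability named "auth":

```shell
# Hypothetical main spec and delta spec for the "auth" capability
mkdir -p /tmp/sync/main /tmp/sync/delta
printf '### Requirement: Login\n' > /tmp/sync/main/spec.md
printf '### Requirement: Login\n### Requirement: OAuth\n' > /tmp/sync/delta/spec.md

# diff exits non-zero when the files differ, so tolerate that in scripts
diff -u /tmp/sync/main/spec.md /tmp/sync/delta/spec.md || true
```

Added requirements show up as `+` lines, which is enough to build the combined summary.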
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
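The target-name generation and existence check can be sketched together, using a hypothetical change name and a temporary tree so nothing real is moved:

```shell
name=add-auth                                  # hypothetical change name
root="$(mktemp -d)/changes"
mkdir -p "$root/$name" "$root/archive"

target="$root/archive/$(date +%F)-$name"       # date +%F prints YYYY-MM-DD
if [ -e "$target" ]; then
  echo "Archive target already exists: $target" >&2
else
  mv "$root/$name" "$target"
  echo "Archived to $target"
fi
```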
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx:sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,235 @@
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
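Requirement names can be extracted with `grep` and `sed`; a sketch against a hypothetical delta spec:

```shell
# Hypothetical delta spec for illustration
mkdir -p /tmp/bulk/specs/auth
cat > /tmp/bulk/specs/auth/spec.md <<'EOF'
### Requirement: OAuth Provider Integration
Some detail text.
### Requirement: Token Refresh
EOF

# Keep only the requirement headings, then strip the prefix
names=$(grep -h '^### Requirement:' /tmp/bulk/specs/*/spec.md | sed 's/^### Requirement: //')
echo "$names"
```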
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
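Detecting the conflicts amounts to listing which capabilities appear under two or more changes; a sketch over a hypothetical change tree:

```shell
# Hypothetical change tree: two changes touch "auth", one touches "api"
base=$(mktemp -d)
mkdir -p "$base/changes/add-oauth/specs/auth" \
         "$base/changes/add-jwt/specs/auth" \
         "$base/changes/add-rest/specs/api"

conflicts=$(
  for spec in "$base"/changes/*/specs/*/; do
    basename "$spec"            # capability name
  done | sort | uniq -d         # keep only capabilities listed 2+ times
)
echo "Conflicting capabilities: $conflicts"
```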
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make it clear that they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,113 @@
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get the available changes, sorted from most to least recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
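With `jq` installed, the most recently modified change can be picked out of the list output; a sketch over a hand-written sample (field names as described above, not real CLI output):

```shell
# Sample of the list response shape (not real CLI output)
changes='[{"name":"add-auth","lastModified":"2026-01-28T10:00:00Z"},
          {"name":"fix-macd","lastModified":"2026-01-30T09:00:00Z"}]'
recommended=$(echo "$changes" | jq -r 'sort_by(.lastModified) | last | .name')
echo "Recommended: $recommended"
```

The `recommended` value is what gets the "(Recommended)" marker in the prompt.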
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven", "tdd")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/*.md**: Create one spec per capability listed in the proposal.
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
**tdd schema** (spec → tests → implementation → docs):
- **spec.md**: Feature specification defining what to build.
- **tests/*.test.ts**: Write tests BEFORE implementation (TDD red phase).
- **src/*.ts**: Implement to make tests pass (TDD green phase).
- **docs/*.md**: Document the implemented feature.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,167 @@
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,87 @@
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
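Whether every `applyRequires` artifact is `done` can be checked with `jq`; a sketch over a sample response (field names as described above; artifact entries are assumed to carry an `id`):

```shell
# Sample status JSON, not real CLI output
status='{"applyRequires":["tasks"],
         "artifacts":[{"id":"proposal","status":"done"},
                      {"id":"tasks","status":"ready"}]}'
remaining=$(echo "$status" | jq '[.applyRequires[] as $id
  | .artifacts[] | select(.id == $id and .status != "done")] | length')
echo "blocking artifacts remaining: $remaining"
```

When `remaining` reaches 0, the loop can stop and implementation is unblocked.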
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,63 @@
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- "tdd" or "test-driven" → use `--schema tdd`
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,518 @@
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
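A couple of the scans above can be done with recursive `grep`; a sketch over a hypothetical source tree:

```shell
# Hypothetical source tree with a TODO and a loose "any" type
src=$(mktemp -d)
cat > "$src/util.ts" <<'EOF'
// TODO: handle timeout
export const parse = (x: string): any => JSON.parse(x)
EOF

grep -rn -E 'TODO|FIXME|HACK|XXX' "$src"        # debug/TODO markers
grep -rn --include='*.ts' ': any' "$src"        # loose TypeScript types
```

Each hit gives a file and line number, which feeds directly into the suggestion list below.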
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
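Flipping a checkbox is a one-line edit. A sketch with a made-up sample tasks file (task names and paths are illustrative, not real):

```shell
# Illustrative fixture only: a tiny tasks.md stand-in
cat > /tmp/onboard-tasks.md <<'EOF'
- [ ] 1.1 Add validation helper
- [ ] 1.2 Wire helper into handler
EOF
# Mark task 1.1 complete: `- [ ]` becomes `- [x]`
sed -i.bak 's/^- \[ \] 1\.1/- [x] 1.1/' /tmp/onboard-tasks.md
cat /tmp/onboard-tasks.md
```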
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/changes/archive/YYYY-MM-DD-<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/changes/archive/YYYY-MM-DD-<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice

@@ -0,0 +1,127 @@
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give the same result

@@ -0,0 +1,157 @@
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifacts exist for this change
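If `jq` is available, pulling these fields out of the status JSON is straightforward. The payload below is a made-up sample standing in for real `openspec status` output; only the `schemaName` and `artifacts` keys are taken from this document:

```shell
# Sample payload only - in practice, pipe `openspec status --change "<name>" --json` into jq
status='{"schemaName":"spec-driven","artifacts":[{"name":"tasks","status":"done"}]}'
echo "$status" | jq -r '.schemaName'           # -> spec-driven
echo "$status" | jq -r '.artifacts[].status'   # -> done
```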
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs |
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

@@ -0,0 +1,150 @@
---
description: Implement tasks from an OpenSpec change (Experimental)
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and mention how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- **tdd**: spec, tests, implementation, docs
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly

@@ -0,0 +1,154 @@
---
description: Archive a completed change in the experimental workflow
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to select from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute `/opsx:sync` logic. Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
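The date-stamped move with the existence check can be sketched end to end. The fixture below lives under `/tmp` and stands in for the real `openspec/` tree; the change name is made up:

```shell
# Illustrative fixture only: fake openspec tree with one active change
name="add-auth"
root=/tmp/archive-demo
mkdir -p "$root/changes/$name"
# Date-stamped target name: YYYY-MM-DD-<change-name>
target="$root/changes/archive/$(date +%Y-%m-%d)-$name"
if [ -e "$target" ]; then
  # Fail rather than overwrite an existing archive
  echo "Archive failed: $target already exists" >&2
else
  mkdir -p "$root/changes/archive"
  mv "$root/changes/$name" "$target"
  echo "Archived to $target"
fi
```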
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use /opsx:sync approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting

@@ -0,0 +1,239 @@
---
description: Archive multiple completed changes at once
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
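Extracting requirement names is a simple `sed` pass (the sample delta spec below is a made-up fixture):

```shell
# Illustrative fixture only: a delta spec stand-in
cat > /tmp/delta-spec.md <<'EOF'
## ADDED Requirements
### Requirement: OAuth Provider Integration
The system SHALL support OAuth login.
### Requirement: JWT Token Handling
EOF
# Print just the requirement names
sed -nE 's/^### Requirement: (.*)$/\1/p' /tmp/delta-spec.md
```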
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
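The map can be built from the delta spec paths alone. A sketch against a made-up fixture layout (change and capability names are illustrative):

```shell
# Illustrative fixture only: two changes touch auth, one touches api
mkdir -p /tmp/bulk/change-a/specs/auth /tmp/bulk/change-b/specs/auth /tmp/bulk/change-c/specs/api
touch /tmp/bulk/change-a/specs/auth/spec.md \
      /tmp/bulk/change-b/specs/auth/spec.md \
      /tmp/bulk/change-c/specs/api/spec.md
# Emit "capability change" pairs, then report capabilities touched by 2+ changes
conflicts=$(
  for spec in /tmp/bulk/*/specs/*/spec.md; do
    change=$(basename "$(dirname "$(dirname "$(dirname "$spec")")")")
    capability=$(basename "$(dirname "$spec")")
    echo "$capability $change"
  done | sort | awk '{caps[$1] = caps[$1] " " $2; n[$1]++}
    END {for (k in caps) if (n[k] > 1) print "CONFLICT: " k " ->" caps[k]}'
)
echo "$conflicts"   # -> CONFLICT: auth -> change-a change-b
```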
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth | Done | 4/4 | 1 delta | auth (!) | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
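Step (b) together with the fail-but-continue guardrail can be sketched as a small shell function (paths and messages are illustrative; `date +%F` yields the `YYYY-MM-DD` prefix):

```shell
# Archive one change; fail without overwriting if the target already exists
archive_change() {
  name=$1
  src="openspec/changes/$name"
  dest="openspec/changes/archive/$(date +%F)-$name"
  mkdir -p openspec/changes/archive
  if [ -e "$dest" ]; then
    echo "FAILED $name: archive target already exists" >&2
    return 1
  fi
  mv "$src" "$dest" && echo "ARCHIVED $name -> $dest"
}
```

A failing `archive_change` returns nonzero, so the caller can record the failure and continue with the remaining changes.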
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 3 changes:
- schema-management-cli -> archive/2026-01-19-schema-management-cli/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,117 @@
---
description: Continue working on a change - create the next artifact (Experimental)
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name after `/opsx:continue` (e.g., `/opsx:continue add-auth`). If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven", "tdd")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
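Picking the first `ready` artifact from the status JSON can be sketched with `jq` (an assumption, since any JSON tool works; the fixture below mimics, rather than reproduces, real `openspec status` output):

```shell
# Fixture stands in for `openspec status --change "<name>" --json`; jq assumed available
status_json='{"schemaName":"spec-driven","isComplete":false,"artifacts":[
  {"id":"proposal","status":"done"},
  {"id":"specs","status":"ready"},
  {"id":"design","status":"blocked"}]}'

# First artifact whose status is "ready"
next=$(printf '%s' "$status_json" \
  | jq -r '[.artifacts[] | select(.status == "ready")][0].id')
echo "next artifact: $next"
```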
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Run `/opsx:continue` to create the next artifact"
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/*.md**: Create one spec per capability listed in the proposal.
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
**tdd schema** (spec → tests → implementation → docs):
- **spec.md**: Feature specification defining what to build.
- **tests/*.test.ts**: Write tests BEFORE implementation (TDD red phase).
- **src/*.ts**: Implement to make tests pass (TDD green phase).
- **docs/*.md**: Document the implemented feature.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,171 @@
---
description: Enter explore mode - think through ideas, investigate problems, clarify requirements
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? `/opsx:new` or `/opsx:ff`"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,91 @@
---
description: Create a change and generate all artifacts needed for implementation in one go
---
Fast-forward through artifact creation - generate everything needed to start implementation.
**Input**: The argument after `/opsx:ff` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
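Deriving the name is mostly mechanical slugification (shortening words such as "authentication" to "auth" remains a judgment call for the agent); a sketch:

```shell
# Slugify a free-form description into a kebab-case change name
kebab() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//'
}
name=$(kebab "Add user authentication")
echo "$name"
```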
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
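The apply-ready check in step (b) can be sketched as a predicate over the status JSON (`jq` is an assumption; the fixtures below are hypothetical, not real CLI output):

```shell
# Succeed only when every applyRequires artifact has status "done".
# Reads status JSON on stdin; jq assumed available.
apply_ready() {
  jq -e '
    .applyRequires as $req
    | [.artifacts[] | select([.id] | inside($req)) | .status]
    | length == ($req | length) and all(. == "done")
  ' > /dev/null
}

ready='{"applyRequires":["tasks"],"artifacts":[{"id":"tasks","status":"done"}]}'
printf '%s' "$ready" | apply_ready && echo "apply-ready"
```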
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use the `template` as a starting point, filling in based on context
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,67 @@
---
description: Start a new change using the experimental artifact workflow (OPSX)
---
Start a new change using the experimental artifact-driven approach.
**Input**: The argument after `/opsx:new` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- "tdd" or "test-driven" → use `--schema tdd`
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema. Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Run `/opsx:continue` or just describe what this change is about and I'll draft it."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest using `/opsx:continue` instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,522 @@
---
description: Guided onboarding - walk through a complete OpenSpec workflow cycle with narration
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
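Much of this scan boils down to a few greps; a fixture-based sketch (patterns and file globs are illustrative, not exhaustive):

```shell
# Hypothetical fixture; a real scan would target src/ and test directories
tmp=$(mktemp -d); mkdir -p "$tmp/src"
printf '%s\n' 'const x: any = 1; // TODO: tighten this type' \
              'console.log("debug");' > "$tmp/src/demo.ts"

todos=$(grep -rn -E 'TODO|FIXME|HACK|XXX' "$tmp/src" | head -20)
anys=$(grep -rn -E ': any|as any' "$tmp/src" | head -10)
debugs=$(grep -rn -E 'console\.(log|debug)|debugger' "$tmp/src" | head -10)
printf '%s\n' "$todos" "$anys" "$debugs"
rm -rf "$tmp"
```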
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│ [Optional: ASCII diagram if helpful] │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md ← Why we're doing this (empty, we'll fill it)
├── design.md ← How we'll build it (empty)
├── specs/ ← Detailed requirements (empty)
└── tasks.md ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
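The checkbox flip in step 4 is a one-line in-place edit; a sketch against a hypothetical tasks.md (the `-i.bak` suffix form works with both GNU and BSD sed):

```shell
# Mark task 1.2 complete in place
tmp=$(mktemp -d)
printf '%s\n' '- [x] 1.1 Add field' '- [ ] 1.2 Wire endpoint' > "$tmp/tasks.md"
sed -i.bak 's/^- \[ \] 1\.2/- [x] 1.2/' "$tmp/tasks.md"
line=$(grep '1\.2' "$tmp/tasks.md")
echo "$line"
rm -rf "$tmp"
```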
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/archive/YYYY-MM-DD--<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/archive/YYYY-MM-DD--<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,131 @@
---
description: Sync delta specs from a change to main specs
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name after `/opsx:sync` (e.g., `/opsx:sync add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running it twice should give the same result


@@ -0,0 +1,161 @@
---
description: Verify implementation matches change artifacts before archiving
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name after `/opsx:verify` (e.g., `/opsx:verify add-auth`). If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifacts exist for this change
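If a JSON tool like `jq` isn't guaranteed to be available, the `schemaName` field can be grabbed with plain `grep`. A minimal sketch, assuming the payload shape shown here (the real output may differ, and a JSON-aware parser is more robust):

```shell
# Illustrative status payload; real `openspec status --json` output may differ.
status='{"schemaName":"spec-driven","isComplete":false}'
# Grab the quoted value following "schemaName": (field 4 after splitting on ").
schema=$(echo "$status" | grep -o '"schemaName":"[^"]*"' | cut -d'"' -f4)
echo "$schema"
```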
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
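The checkbox count can be sketched with `grep -c`; the tasks file content below is an illustrative sample:

```shell
# Sample tasks.md with two complete and one incomplete task.
cat > /tmp/tasks_demo.md <<'EOF'
- [x] Set up database schema
- [x] Add login endpoint
- [ ] Write integration tests
EOF
# Count every checkbox line, then only the checked ones.
total=$(grep -c -E '^- \[( |x)\]' /tmp/tasks_demo.md)
complete=$(grep -c '^- \[x\]' /tmp/tasks_demo.md)
echo "$complete/$total tasks complete"
```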
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
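Requirement names can be extracted with a one-line `sed`; the delta spec below is an illustrative sample:

```shell
# Sample delta spec with two requirements.
cat > /tmp/delta_demo.md <<'EOF'
## ADDED Requirements
### Requirement: OAuth Provider Integration
The system SHALL support OAuth login.
### Requirement: Session Expiry
The system SHALL expire sessions after 24 hours.
EOF
# Print only lines starting with the requirement marker, marker stripped.
reqs=$(sed -n 's/^### Requirement: //p' /tmp/delta_demo.md)
echo "$reqs"
```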
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
  | Dimension    | Status            |
  |--------------|-------------------|
  | Completeness | X/Y tasks, N reqs |
  | Correctness  | M/N reqs covered  |
  | Coherence    | Followed/Issues   |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"


@@ -0,0 +1,157 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- **tdd**: spec, tests, implementation, docs
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
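The checkbox flip can be sketched with GNU `sed`; the file contents and task text are illustrative:

```shell
# Sample tasks file with one remaining task.
cat > /tmp/tasks.md <<'EOF'
- [x] Add login endpoint
- [ ] Write integration tests
EOF
task="Write integration tests"
# Flip `- [ ]` to `- [x]` for the exact task text (GNU sed in-place edit).
sed -i "s/^- \[ \] ${task}\$/- [x] ${task}/" /tmp/tasks.md
grep -c '^- \[x\]' /tmp/tasks.md
```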
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, execute /opsx:sync logic (use the openspec-sync-specs skill). Proceed to archive regardless of choice.
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
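   The steps above can be sketched as one snippet; the change name and `/tmp` root are illustrative:

   ```shell
   # Illustrative layout under /tmp; "add-auth" stands in for the change name.
   root=/tmp/opsx-archive-demo
   name="add-auth"
   mkdir -p "$root/openspec/changes/$name"
   # date +%F produces YYYY-MM-DD.
   target="$root/openspec/changes/archive/$(date +%F)-$name"
   if [ -e "$target" ]; then
     echo "error: archive target already exists: $target" >&2
   else
     mkdir -p "$root/openspec/changes/archive"
     mv "$root/openspec/changes/$name" "$target"
     echo "archived to $target"
   fi
   ```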
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@@ -0,0 +1,246 @@
---
name: openspec-bulk-archive-change
description: Archive multiple completed changes at once. Use when archiving several parallel changes.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Archive multiple completed changes in a single operation.
This skill allows you to batch-archive changes, handling spec conflicts intelligently by checking the codebase to determine what's actually implemented.
**Input**: None required (prompts for selection)
**Steps**
1. **Get active changes**
Run `openspec list --json` to get all active changes.
If no active changes exist, inform user and stop.
2. **Prompt for change selection**
Use **AskUserQuestion tool** with multi-select to let user choose changes:
- Show each change with its schema
- Include an option for "All changes"
- Allow any number of selections (1+ works, 2+ is the typical use case)
**IMPORTANT**: Do NOT auto-select. Always let the user choose.
3. **Batch validation - gather status for all selected changes**
For each selected change, collect:
a. **Artifact status** - Run `openspec status --change "<name>" --json`
- Parse `schemaName` and `artifacts` list
- Note which artifacts are `done` vs other states
b. **Task completion** - Read `openspec/changes/<name>/tasks.md`
- Count `- [ ]` (incomplete) vs `- [x]` (complete)
- If no tasks file exists, note as "No tasks"
c. **Delta specs** - Check `openspec/changes/<name>/specs/` directory
- List which capability specs exist
- For each, extract requirement names (lines matching `### Requirement: <name>`)
4. **Detect spec conflicts**
Build a map of `capability -> [changes that touch it]`:
```
auth -> [change-a, change-b] <- CONFLICT (2+ changes)
api -> [change-c] <- OK (only 1 change)
```
A conflict exists when 2+ selected changes have delta specs for the same capability.
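   The conflict map can be built from the delta spec paths alone; the directory layout below is an illustrative sample:

   ```shell
   # Illustrative changes directory: two changes touch auth, one touches api.
   root=/tmp/opsx-conflict-demo
   mkdir -p "$root"/changes/add-oauth/specs/auth \
            "$root"/changes/add-jwt/specs/auth \
            "$root"/changes/add-rest/specs/api
   touch "$root"/changes/add-oauth/specs/auth/spec.md \
         "$root"/changes/add-jwt/specs/auth/spec.md \
         "$root"/changes/add-rest/specs/api/spec.md
   # Capability = directory name under specs/; flag any counted 2+ times.
   conflicts=$(find "$root/changes" -path '*/specs/*/spec.md' \
     | awk -F/ '{print $(NF-1)}' | sort | uniq -c \
     | awk '$1 >= 2 {print $2}')
   echo "conflicts: $conflicts"
   ```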
5. **Resolve conflicts agentically**
**For each conflict**, investigate the codebase:
a. **Read the delta specs** from each conflicting change to understand what each claims to add/modify
b. **Search the codebase** for implementation evidence:
- Look for code implementing requirements from each delta spec
- Check for related files, functions, or tests
c. **Determine resolution**:
- If only one change is actually implemented -> sync that one's specs
- If both implemented -> apply in chronological order (older first, newer overwrites)
- If neither implemented -> skip spec sync, warn user
d. **Record resolution** for each conflict:
- Which change's specs to apply
- In what order (if both)
- Rationale (what was found in codebase)
6. **Show consolidated status table**
Display a table summarizing all changes:
```
| Change | Artifacts | Tasks | Specs | Conflicts | Status |
|---------------------|-----------|-------|---------|-----------|--------|
| schema-management | Done | 5/5 | 2 delta | None | Ready |
| project-config | Done | 3/3 | 1 delta | None | Ready |
| add-oauth           | Done      | 4/4   | 1 delta | auth (!)  | Ready* |
| add-jwt             | Done      | 2/2   | 1 delta | auth (!)  | Ready* |
| add-verify-skill | 1 left | 2/5 | None | None | Warn |
```
For conflicts, show the resolution:
```
* Conflict resolution:
- auth spec: Will apply add-oauth then add-jwt (both implemented, chronological order)
```
For incomplete changes, show warnings:
```
Warnings:
- add-verify-skill: 1 incomplete artifact, 3 incomplete tasks
```
7. **Confirm batch operation**
Use **AskUserQuestion tool** with a single confirmation:
- "Archive N changes?" with options based on status
- Options might include:
- "Archive all N changes"
- "Archive only N ready changes (skip incomplete)"
- "Cancel"
If there are incomplete changes, make clear they'll be archived with warnings.
8. **Execute archive for each confirmed change**
Process changes in the determined order (respecting conflict resolution):
a. **Sync specs** if delta specs exist:
- Use the openspec-sync-specs approach (agent-driven intelligent merge)
- For conflicts, apply in resolved order
- Track if sync was done
b. **Perform the archive**:
```bash
mkdir -p openspec/changes/archive
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
c. **Track outcome** for each change:
- Success: archived successfully
- Failed: error during archive (record error)
- Skipped: user chose not to archive (if applicable)
9. **Display summary**
Show final results:
```
## Bulk Archive Complete
Archived 4 changes:
- schema-management -> archive/2026-01-19-schema-management/
- project-config -> archive/2026-01-19-project-config/
- add-oauth -> archive/2026-01-19-add-oauth/
- add-jwt -> archive/2026-01-19-add-jwt/
Skipped 1 change:
- add-verify-skill (user chose not to archive incomplete)
Spec sync summary:
- 4 delta specs synced to main specs
- 1 conflict resolved (auth: applied both in chronological order)
```
If any failures:
```
Failed 1 change:
- some-change: Archive directory already exists
```
**Conflict Resolution Examples**
Example 1: Only one implemented
```
Conflict: specs/auth/spec.md touched by [add-oauth, add-jwt]
Checking add-oauth:
- Delta adds "OAuth Provider Integration" requirement
- Searching codebase... found src/auth/oauth.ts implementing OAuth flow
Checking add-jwt:
- Delta adds "JWT Token Handling" requirement
- Searching codebase... no JWT implementation found
Resolution: Only add-oauth is implemented. Will sync add-oauth specs only.
```
Example 2: Both implemented
```
Conflict: specs/api/spec.md touched by [add-rest-api, add-graphql]
Checking add-rest-api (created 2026-01-10):
- Delta adds "REST Endpoints" requirement
- Searching codebase... found src/api/rest.ts
Checking add-graphql (created 2026-01-15):
- Delta adds "GraphQL Schema" requirement
- Searching codebase... found src/api/graphql.ts
Resolution: Both implemented. Will apply add-rest-api specs first,
then add-graphql specs (chronological order, newer takes precedence).
```
**Output On Success**
```
## Bulk Archive Complete
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
- <change-2> -> archive/YYYY-MM-DD-<change-2>/
Spec sync summary:
- N delta specs synced to main specs
- No conflicts (or: M conflicts resolved)
```
**Output On Partial Success**
```
## Bulk Archive Complete (partial)
Archived N changes:
- <change-1> -> archive/YYYY-MM-DD-<change-1>/
Skipped M changes:
- <change-2> (user chose not to archive incomplete)
Failed K changes:
- <change-3>: Archive directory already exists
```
**Output When No Changes**
```
## No Changes to Archive
No active changes found. Use `/opsx:new` to create a new change.
```
**Guardrails**
- Allow any number of changes (1+ is fine, 2+ is the typical use case)
- Always prompt for selection, never auto-select
- Detect spec conflicts early and resolve by checking codebase
- When both changes are implemented, apply specs in chronological order
- Skip spec sync only when implementation is missing (warn user)
- Show clear per-change status before confirming
- Use single confirmation for entire batch
- Track and report all outcomes (success/skip/fail)
- Preserve .openspec.yaml when moving to archive
- Archive directory target uses current date: YYYY-MM-DD-<name>
- If archive target exists, fail that change but continue with others


@@ -0,0 +1,124 @@
---
name: openspec-continue-change
description: Continue working on an OpenSpec change by creating the next artifact. Use when the user wants to progress their change, create the next artifact, or continue their workflow.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Continue working on a change by creating the next artifact.
**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes sorted by most recently modified. Then use the **AskUserQuestion tool** to let the user select which change to work on.
Present the top 3-4 most recently modified changes as options, showing:
- Change name
- Schema (from `schema` field if present, otherwise "spec-driven")
- Status (e.g., "0/5 tasks", "complete", "no tasks")
- How recently it was modified (from `lastModified` field)
Mark the most recently modified change as "(Recommended)" since it's likely what the user wants to continue.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check current status**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand current state. The response includes:
- `schemaName`: The workflow schema being used (e.g., "spec-driven", "tdd")
- `artifacts`: Array of artifacts with their status ("done", "ready", "blocked")
- `isComplete`: Boolean indicating if all artifacts are complete
3. **Act based on status**:
---
**If all artifacts are complete (`isComplete: true`)**:
- Congratulate the user
- Show final status including the schema used
- Suggest: "All artifacts created! You can now implement this change or archive it."
- STOP
---
**If artifacts are ready to create** (status shows artifacts with `status: "ready"`):
- Pick the FIRST artifact with `status: "ready"` from the status output
- Get its instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- Parse the JSON. The key fields are:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- **Create the artifact file**:
- Read any completed dependency files for context
- Use `template` as the structure - fill in its sections
- Apply `context` and `rules` as constraints when writing - but do NOT copy them into the file
- Write to the output path specified in instructions
- Show what was created and what's now unlocked
- STOP after creating ONE artifact
---
**If no artifacts are ready (all blocked)**:
- This shouldn't happen with a valid schema
- Show status and suggest checking for issues
4. **After creating an artifact, show progress**
```bash
openspec status --change "<name>"
```
**Output**
After each invocation, show:
- Which artifact was created
- Schema workflow being used
- Current progress (N/M complete)
- What artifacts are now unlocked
- Prompt: "Want to continue? Just ask me to continue or tell me what to do next."
**Artifact Creation Guidelines**
The artifact types and their purpose depend on the schema. Use the `instruction` field from the instructions output to understand what to create.
Common artifact patterns:
**spec-driven schema** (proposal → specs → design → tasks):
- **proposal.md**: Ask user about the change if not clear. Fill in Why, What Changes, Capabilities, Impact.
- The Capabilities section is critical - each capability listed will need a spec file.
- **specs/*.md**: Create one spec per capability listed in the proposal.
- **design.md**: Document technical decisions, architecture, and implementation approach.
- **tasks.md**: Break down implementation into checkboxed tasks.
**tdd schema** (spec → tests → implementation → docs):
- **spec.md**: Feature specification defining what to build.
- **tests/*.test.ts**: Write tests BEFORE implementation (TDD red phase).
- **src/*.ts**: Implement to make tests pass (TDD green phase).
- **docs/*.md**: Document the implemented feature.
For other schemas, follow the `instruction` field from the CLI output.
**Guardrails**
- Create ONE artifact per invocation
- Always read dependency artifacts before creating a new one
- Never skip artifacts or create out of order
- If context is unclear, ask the user before creating
- Verify the artifact file exists after writing before marking progress
- Use the schema's artifact sequence, don't assume specific artifact names
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output


@@ -0,0 +1,290 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first (e.g., start a change with `/opsx:new` or `/opsx:ff`). You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│      Use ASCII diagrams liberally       │
├─────────────────────────────────────────┤
│                                         │
│  ┌────────┐         ┌────────┐          │
│  │ State  │────────▶│ State  │          │
│  │   A    │         │   B    │          │
│  └────────┘         └────────┘          │
│                                         │
│   System diagrams, state machines,      │
│   data flows, architecture sketches,    │
│   dependency graphs, comparison tables  │
│                                         │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
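The exact JSON layout is owned by the CLI and may differ between versions; a hypothetical sketch of the fields this step cares about (key names are assumptions, not documented API):

```json
[
  {
    "name": "add-user-auth",
    "schema": "spec-driven",
    "status": "in-progress"
  }
]
```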
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create one?"
→ Can transition to `/opsx:new` or `/opsx:ff`
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness   Coordination     Sync
    │            │            │
    ▼            ▼            ▼
┌────────┐   ┌────────┐   ┌────────┐
│Presence│   │Cursors │   │  CRDT  │
│  "3    │   │ Multi  │   │Conflict│
│online" │   │ select │   │  free  │
└────────┘   └────────┘   └────────┘
    │            │            │
 trivial      moderate     complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│              CURRENT AUTH FLOW              │
└─────────────────────────────────────────────┘
     ┌─────────────┼─────────────┐
     ▼             ▼             ▼
┌─────────┐   ┌─────────┐   ┌─────────┐
│ Google  │   │ GitHub  │   │  Email  │
│  OAuth  │   │  OAuth  │   │  Magic  │
└────┬────┘   └────┬────┘   └────┬────┘
     │             │             │
     └─────────────┼─────────────┘
             ┌───────────┐
             │  Session  │
             └─────┬─────┘
             ┌───────────┐
             │   Perms   │
             └───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│              CLI TOOL DATA STORAGE              │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
              SQLite        Postgres
Deployment    embedded ✓    needs server ✗
Offline       yes ✓         no ✗
Single file   yes ✓         no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into action**: "Ready to start? /opsx:new or /opsx:ff"
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change: /opsx:new <name>
- Fast-forward to tasks: /opsx:ff <name>
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@@ -0,0 +1,101 @@
---
name: openspec-ff-change
description: Fast-forward through OpenSpec artifact creation. Use when the user wants to quickly create all artifacts needed for implementation without stepping through each one individually.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Fast-forward through artifact creation - generate everything needed to start implementation in one go.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
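The derivation can be sketched in shell. This is an illustrative approximation only — shortening words (e.g. `authentication` → `auth`) is a judgment call the sketch does not attempt:

```shell
# Illustrative sketch: lowercase, replace non-alphanumerics with dashes,
# and trim stray dashes to get a kebab-case change name.
name=$(printf '%s' "Add User Authentication" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//; s/-*$//')
echo "$name"   # prints "add-user-authentication"
```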
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
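The precise JSON shape is defined by the CLI; a hypothetical sketch of the two fields used below (surrounding keys and exact field names are assumptions):

```json
{
  "applyRequires": ["tasks"],
  "artifacts": [
    { "id": "proposal", "status": "done" },
    { "id": "tasks", "status": "ready" }
  ]
}
```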
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "✓ Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` or ask me to implement to start working on the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, suggest continuing that change instead
- Verify each artifact file exists after writing before proceeding to next


@@ -0,0 +1,75 @@
---
name: openspec-new-change
description: Start a new OpenSpec change using the experimental artifact workflow. Use when the user wants to create a new feature, fix, or modification with a structured step-by-step approach.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Start a new change using the experimental artifact-driven approach.
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
2. **Determine the workflow schema**
Use the default schema (omit `--schema`) unless the user explicitly requests a different workflow.
**Use a different schema only if the user mentions:**
- "tdd" or "test-driven" → use `--schema tdd`
- A specific schema name → use `--schema <name>`
- "show workflows" or "what workflows" → run `openspec schemas --json` and let them choose
**Otherwise**: Omit `--schema` to use the default.
3. **Create the change directory**
```bash
openspec new change "<name>"
```
Add `--schema <name>` only if the user requested a specific workflow.
This creates a scaffolded change at `openspec/changes/<name>/` with the selected schema.
4. **Show the artifact status**
```bash
openspec status --change "<name>"
```
This shows which artifacts need to be created and which are ready (dependencies satisfied).
5. **Get instructions for the first artifact**
The first artifact depends on the schema (e.g., `proposal` for spec-driven, `spec` for tdd).
Check the status output to find the first artifact with status "ready".
```bash
openspec instructions <first-artifact-id> --change "<name>"
```
This outputs the template and context for creating the first artifact.
6. **STOP and wait for user direction**
**Output**
After completing the steps, summarize:
- Change name and location
- Schema/workflow being used and its artifact sequence
- Current status (0/N artifacts complete)
- The template for the first artifact
- Prompt: "Ready to create the first artifact? Just describe what this change is about and I'll draft it, or ask me to continue."
**Guardrails**
- Do NOT create any artifacts yet - just show the instructions
- Do NOT advance beyond showing the first artifact template
- If the name is invalid (not kebab-case), ask for a valid name
- If a change with that name already exists, suggest continuing that change instead
- Pass --schema if using a non-default workflow


@@ -0,0 +1,529 @@
---
name: openspec-onboard
description: Guided onboarding for OpenSpec - walk through a complete workflow cycle with narration and real codebase work.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Guide the user through their first complete OpenSpec workflow cycle. This is a teaching experience—you'll do real work in their codebase while explaining each step.
---
## Preflight
Before starting, check if OpenSpec is initialized:
```bash
openspec status --json 2>&1 || echo "NOT_INITIALIZED"
```
**If not initialized:**
> OpenSpec isn't set up in this project yet. Run `openspec init` first, then come back to `/opsx:onboard`.
Stop here if not initialized.
---
## Phase 1: Welcome
Display:
```
## Welcome to OpenSpec!
I'll walk you through a complete change cycle—from idea to implementation—using a real task in your codebase. Along the way, you'll learn the workflow by doing it.
**What we'll do:**
1. Pick a small, real task in your codebase
2. Explore the problem briefly
3. Create a change (the container for our work)
4. Build the artifacts: proposal → specs → design → tasks
5. Implement the tasks
6. Archive the completed change
**Time:** ~15-20 minutes
Let's start by finding something to work on.
```
---
## Phase 2: Task Selection
### Codebase Analysis
Scan the codebase for small improvement opportunities. Look for:
1. **TODO/FIXME comments** - Search for `TODO`, `FIXME`, `HACK`, `XXX` in code files
2. **Missing error handling** - `catch` blocks that swallow errors, risky operations without try-catch
3. **Functions without tests** - Cross-reference `src/` with test directories
4. **Type issues** - `any` types in TypeScript files (`: any`, `as any`)
5. **Debug artifacts** - `console.log`, `console.debug`, `debugger` statements in non-debug code
6. **Missing validation** - User input handlers without validation
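Several of these scans can be approximated with grep. The patterns and the `demo/src` tree below are illustrative stand-ins, not a fixed recipe:

```shell
cd "$(mktemp -d)"   # scratch dir so we don't touch a real project
mkdir -p demo/src
printf '// TODO: tighten types\nconst x: any = 1;\nconsole.log(x);\n' > demo/src/example.ts

# TODO/FIXME-style markers
grep -rn --include='*.ts' -E 'TODO|FIXME|HACK|XXX' demo/src
# Loose `any` types
grep -rn --include='*.ts' -E ': any|as any' demo/src
# Leftover debug statements
grep -rn --include='*.ts' -E 'console\.(log|debug)|debugger' demo/src
```

Adjust the `--include` globs to whatever languages the project uses.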
Also check recent git activity:
```bash
git log --oneline -10 2>/dev/null || echo "No git history"
```
### Present Suggestions
From your analysis, present 3-4 specific suggestions:
```
## Task Suggestions
Based on scanning your codebase, here are some good starter tasks:
**1. [Most promising task]**
Location: `src/path/to/file.ts:42`
Scope: ~1-2 files, ~20-30 lines
Why it's good: [brief reason]
**2. [Second task]**
Location: `src/another/file.ts`
Scope: ~1 file, ~15 lines
Why it's good: [brief reason]
**3. [Third task]**
Location: [location]
Scope: [estimate]
Why it's good: [brief reason]
**4. Something else?**
Tell me what you'd like to work on.
Which task interests you? (Pick a number or describe your own)
```
**If nothing found:** Fall back to asking what the user wants to build:
> I didn't find obvious quick wins in your codebase. What's something small you've been meaning to add or fix?
### Scope Guardrail
If the user picks or describes something too large (major feature, multi-day work):
```
That's a valuable task, but it's probably larger than ideal for your first OpenSpec run-through.
For learning the workflow, smaller is better—it lets you see the full cycle without getting stuck in implementation details.
**Options:**
1. **Slice it smaller** - What's the smallest useful piece of [their task]? Maybe just [specific slice]?
2. **Pick something else** - One of the other suggestions, or a different small task?
3. **Do it anyway** - If you really want to tackle this, we can. Just know it'll take longer.
What would you prefer?
```
Let the user override if they insist—this is a soft guardrail.
---
## Phase 3: Explore Demo
Once a task is selected, briefly demonstrate explore mode:
```
Before we create a change, let me quickly show you **explore mode**—it's how you think through problems before committing to a direction.
```
Spend 1-2 minutes investigating the relevant code:
- Read the file(s) involved
- Draw a quick ASCII diagram if it helps
- Note any considerations
```
## Quick Exploration
[Your brief analysis—what you found, any considerations]
┌─────────────────────────────────────────┐
│  [Optional: ASCII diagram if helpful]   │
└─────────────────────────────────────────┘
Explore mode (`/opsx:explore`) is for this kind of thinking—investigating before implementing. You can use it anytime you need to think through a problem.
Now let's create a change to hold our work.
```
**PAUSE** - Wait for user acknowledgment before proceeding.
---
## Phase 4: Create the Change
**EXPLAIN:**
```
## Creating a Change
A "change" in OpenSpec is a container for all the thinking and planning around a piece of work. It lives in `openspec/changes/<name>/` and holds your artifacts—proposal, specs, design, tasks.
Let me create one for our task.
```
**DO:** Create the change with a derived kebab-case name:
```bash
openspec new change "<derived-name>"
```
**SHOW:**
```
Created: `openspec/changes/<name>/`
The folder structure:
```
openspec/changes/<name>/
├── proposal.md   ← Why we're doing this (empty, we'll fill it)
├── design.md     ← How we'll build it (empty)
├── specs/        ← Detailed requirements (empty)
└── tasks.md      ← Implementation checklist (empty)
```
Now let's fill in the first artifact—the proposal.
```
---
## Phase 5: Proposal
**EXPLAIN:**
```
## The Proposal
The proposal captures **why** we're making this change and **what** it involves at a high level. It's the "elevator pitch" for the work.
I'll draft one based on our task.
```
**DO:** Draft the proposal content (don't save yet):
```
Here's a draft proposal:
---
## Why
[1-2 sentences explaining the problem/opportunity]
## What Changes
[Bullet points of what will be different]
## Capabilities
### New Capabilities
- `<capability-name>`: [brief description]
### Modified Capabilities
<!-- If modifying existing behavior -->
## Impact
- `src/path/to/file.ts`: [what changes]
- [other files if applicable]
---
Does this capture the intent? I can adjust before we save it.
```
**PAUSE** - Wait for user approval/feedback.
After approval, save the proposal:
```bash
openspec instructions proposal --change "<name>" --json
```
Then write the content to `openspec/changes/<name>/proposal.md`.
```
Proposal saved. This is your "why" document—you can always come back and refine it as understanding evolves.
Next up: specs.
```
---
## Phase 6: Specs
**EXPLAIN:**
```
## Specs
Specs define **what** we're building in precise, testable terms. They use a requirement/scenario format that makes expected behavior crystal clear.
For a small task like this, we might only need one spec file.
```
**DO:** Create the spec file:
```bash
mkdir -p openspec/changes/<name>/specs/<capability-name>
```
Draft the spec content:
```
Here's the spec:
---
## ADDED Requirements
### Requirement: <Name>
<Description of what the system should do>
#### Scenario: <Scenario name>
- **WHEN** <trigger condition>
- **THEN** <expected outcome>
- **AND** <additional outcome if needed>
---
This format—WHEN/THEN/AND—makes requirements testable. You can literally read them as test cases.
```
Save to `openspec/changes/<name>/specs/<capability>/spec.md`.
---
## Phase 7: Design
**EXPLAIN:**
```
## Design
The design captures **how** we'll build it—technical decisions, tradeoffs, approach.
For small changes, this might be brief. That's fine—not every change needs deep design discussion.
```
**DO:** Draft design.md:
```
Here's the design:
---
## Context
[Brief context about the current state]
## Goals / Non-Goals
**Goals:**
- [What we're trying to achieve]
**Non-Goals:**
- [What's explicitly out of scope]
## Decisions
### Decision 1: [Key decision]
[Explanation of approach and rationale]
---
For a small task, this captures the key decisions without over-engineering.
```
Save to `openspec/changes/<name>/design.md`.
---
## Phase 8: Tasks
**EXPLAIN:**
```
## Tasks
Finally, we break the work into implementation tasks—checkboxes that drive the apply phase.
These should be small, clear, and in logical order.
```
**DO:** Generate tasks based on specs and design:
```
Here are the implementation tasks:
---
## 1. [Category or file]
- [ ] 1.1 [Specific task]
- [ ] 1.2 [Specific task]
## 2. Verify
- [ ] 2.1 [Verification step]
---
Each checkbox becomes a unit of work in the apply phase. Ready to implement?
```
**PAUSE** - Wait for user to confirm they're ready to implement.
Save to `openspec/changes/<name>/tasks.md`.
---
## Phase 9: Apply (Implementation)
**EXPLAIN:**
```
## Implementation
Now we implement each task, checking them off as we go. I'll announce each one and occasionally note how the specs/design informed the approach.
```
**DO:** For each task:
1. Announce: "Working on task N: [description]"
2. Implement the change in the codebase
3. Reference specs/design naturally: "The spec says X, so I'm doing Y"
4. Mark complete in tasks.md: `- [ ]` → `- [x]`
5. Brief status: "✓ Task N complete"
Keep narration light—don't over-explain every line of code.
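Step 4 can be done with a quick stream edit. The file contents and task number below are hypothetical; the real file lives at `openspec/changes/<name>/tasks.md`:

```shell
cd "$(mktemp -d)"   # scratch dir with a stand-in tasks.md
printf -- '- [ ] 1.1 Add validation\n- [ ] 1.2 Write tests\n' > tasks.md

# Mark task 1.1 complete: flip `- [ ]` to `- [x]` on that line only.
# GNU sed shown; on BSD/macOS use `sed -i ''`.
sed -i 's/^- \[ \] 1\.1/- [x] 1.1/' tasks.md
cat tasks.md
```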
After all tasks:
```
## Implementation Complete
All tasks done:
- [x] Task 1
- [x] Task 2
- [x] ...
The change is implemented! One more step—let's archive it.
```
---
## Phase 10: Archive
**EXPLAIN:**
```
## Archiving
When a change is complete, we archive it. This moves it from `openspec/changes/` to `openspec/archive/YYYY-MM-DD--<name>/`.
Archived changes become your project's decision history—you can always find them later to understand why something was built a certain way.
```
**DO:**
```bash
openspec archive "<name>"
```
**SHOW:**
```
Archived to: `openspec/archive/YYYY-MM-DD--<name>/`
The change is now part of your project's history. The code is in your codebase, the decision record is preserved.
```
---
## Phase 11: Recap & Next Steps
```
## Congratulations!
You just completed a full OpenSpec cycle:
1. **Explore** - Thought through the problem
2. **New** - Created a change container
3. **Proposal** - Captured WHY
4. **Specs** - Defined WHAT in detail
5. **Design** - Decided HOW
6. **Tasks** - Broke it into steps
7. **Apply** - Implemented the work
8. **Archive** - Preserved the record
This same rhythm works for any size change—a small fix or a major feature.
---
## Command Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems before/during work |
| `/opsx:new` | Start a new change, step through artifacts |
| `/opsx:ff` | Fast-forward: create all artifacts at once |
| `/opsx:continue` | Continue working on an existing change |
| `/opsx:apply` | Implement tasks from a change |
| `/opsx:verify` | Verify implementation matches artifacts |
| `/opsx:archive` | Archive a completed change |
---
## What's Next?
Try `/opsx:new` or `/opsx:ff` on something you actually want to build. You've got the rhythm now!
```
---
## Graceful Exit Handling
### User wants to stop mid-way
If the user says they need to stop, want to pause, or seem disengaged:
```
No problem! Your change is saved at `openspec/changes/<name>/`.
To pick up where we left off later:
- `/opsx:continue <name>` - Resume artifact creation
- `/opsx:apply <name>` - Jump to implementation (if tasks exist)
The work won't be lost. Come back whenever you're ready.
```
Exit gracefully without pressure.
### User just wants command reference
If the user says they just want to see the commands or skip the tutorial:
```
## OpenSpec Quick Reference
| Command | What it does |
|---------|--------------|
| `/opsx:explore` | Think through problems (no code changes) |
| `/opsx:new <name>` | Start a new change, step by step |
| `/opsx:ff <name>` | Fast-forward: all artifacts at once |
| `/opsx:continue <name>` | Continue an existing change |
| `/opsx:apply <name>` | Implement tasks |
| `/opsx:verify <name>` | Verify implementation |
| `/opsx:archive <name>` | Archive when done |
Try `/opsx:new` to start your first change, or `/opsx:ff` if you want to move fast.
```
Exit gracefully.
---
## Guardrails
- **Follow the EXPLAIN → DO → SHOW → PAUSE pattern** at key transitions (after explore, after proposal draft, after tasks, after archive)
- **Keep narration light** during implementation—teach without lecturing
- **Don't skip phases** even if the change is small—the goal is teaching the workflow
- **Pause for acknowledgment** at marked points, but don't over-pause
- **Handle exits gracefully**—never pressure the user to continue
- **Use real codebase tasks**—don't simulate or use fake examples
- **Adjust scope gently**—guide toward smaller tasks but respect user choice


@@ -0,0 +1,138 @@
---
name: openspec-sync-specs
description: Sync delta specs from a change to main specs. Use when the user wants to update main specs with changes from a delta spec, without archiving the change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Sync delta specs from a change to main specs.
This is an **agent-driven** operation - you will read delta specs and directly edit main specs to apply the changes. This allows intelligent merging (e.g., adding a scenario without copying the entire requirement).
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to choose from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have delta specs (under `specs/` directory).
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Find delta specs**
Look for delta spec files in `openspec/changes/<name>/specs/*/spec.md`.
Each delta spec file contains sections like:
- `## ADDED Requirements` - New requirements to add
- `## MODIFIED Requirements` - Changes to existing requirements
- `## REMOVED Requirements` - Requirements to remove
- `## RENAMED Requirements` - Requirements to rename (FROM:/TO: format)
If no delta specs found, inform user and stop.
3. **For each delta spec, apply changes to main specs**
For each capability with a delta spec at `openspec/changes/<name>/specs/<capability>/spec.md`:
a. **Read the delta spec** to understand the intended changes
b. **Read the main spec** at `openspec/specs/<capability>/spec.md` (may not exist yet)
c. **Apply changes intelligently**:
**ADDED Requirements:**
- If requirement doesn't exist in main spec → add it
- If requirement already exists → update it to match (treat as implicit MODIFIED)
**MODIFIED Requirements:**
- Find the requirement in main spec
- Apply the changes - this can be:
- Adding new scenarios (don't need to copy existing ones)
- Modifying existing scenarios
- Changing the requirement description
- Preserve scenarios/content not mentioned in the delta
**REMOVED Requirements:**
- Remove the entire requirement block from main spec
**RENAMED Requirements:**
- Find the FROM requirement, rename to TO
d. **Create new main spec** if capability doesn't exist yet:
- Create `openspec/specs/<capability>/spec.md`
- Add Purpose section (can be brief, mark as TBD)
- Add Requirements section with the ADDED requirements
4. **Show summary**
After applying all changes, summarize:
- Which capabilities were updated
- What changes were made (requirements added/modified/removed/renamed)
**Delta Spec Format Reference**
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL do something new.
#### Scenario: Basic case
- **WHEN** user does X
- **THEN** system does Y
## MODIFIED Requirements
### Requirement: Existing Feature
#### Scenario: New scenario to add
- **WHEN** user does A
- **THEN** system does B
## REMOVED Requirements
### Requirement: Deprecated Feature
## RENAMED Requirements
- FROM: `### Requirement: Old Name`
- TO: `### Requirement: New Name`
```
**Key Principle: Intelligent Merging**
Unlike programmatic merging, you can apply **partial updates**:
- To add a scenario, just include that scenario under MODIFIED - don't copy existing scenarios
- The delta represents *intent*, not a wholesale replacement
- Use your judgment to merge changes sensibly
**Output On Success**
```
## Specs Synced: <change-name>
Updated main specs:
**<capability-1>**:
- Added requirement: "New Feature"
- Modified requirement: "Existing Feature" (added 1 scenario)
**<capability-2>**:
- Created new spec file
- Added requirement: "Another Feature"
Main specs are now updated. The change remains active - archive when implementation is complete.
```
**Guardrails**
- Read both delta and main specs before making changes
- Preserve existing content not mentioned in delta
- If something is unclear, ask for clarification
- Show what you're changing as you go
- The operation should be idempotent - running twice should give same result


@@ -0,0 +1,168 @@
---
name: openspec-verify-change
description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.0.0"
---
Verify that an implementation matches the change artifacts (specs, tasks, design).
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the reference is vague or ambiguous, you MUST prompt the user to choose from the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show changes that have implementation tasks (tasks artifact exists).
Include the schema used for each change if available.
Mark changes with incomplete tasks as "(In Progress)".
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven", "tdd")
- Which artifacts exist for this change
3. **Get the change directory and load artifacts**
```bash
openspec instructions apply --change "<name>" --json
```
This returns the change directory and context files. Read all available artifacts from `contextFiles`.
4. **Initialize verification report structure**
Create a report structure with three dimensions:
- **Completeness**: Track tasks and spec coverage
- **Correctness**: Track requirement implementation and scenario coverage
- **Coherence**: Track design adherence and pattern consistency
Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
5. **Verify Completeness**
**Task Completion**:
- If tasks.md exists in contextFiles, read it
- Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
- Count complete vs total tasks
- If incomplete tasks exist:
- Add CRITICAL issue for each incomplete task
- Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
**Spec Coverage**:
- If delta specs exist in `openspec/changes/<name>/specs/`:
- Extract all requirements (marked with "### Requirement:")
- For each requirement:
- Search codebase for keywords related to the requirement
- Assess if implementation likely exists
- If requirements appear unimplemented:
- Add CRITICAL issue: "Requirement not found: <requirement name>"
- Recommendation: "Implement requirement X: <description>"
6. **Verify Correctness**
**Requirement Implementation Mapping**:
- For each requirement from delta specs:
- Search codebase for implementation evidence
- If found, note file paths and line ranges
- Assess if implementation matches requirement intent
- If divergence detected:
- Add WARNING: "Implementation may diverge from spec: <details>"
- Recommendation: "Review <file>:<lines> against requirement X"
**Scenario Coverage**:
- For each scenario in delta specs (marked with "#### Scenario:"):
- Check if conditions are handled in code
- Check if tests exist covering the scenario
- If scenario appears uncovered:
- Add WARNING: "Scenario not covered: <scenario name>"
- Recommendation: "Add test or implementation for scenario: <description>"
7. **Verify Coherence**
**Design Adherence**:
- If design.md exists in contextFiles:
- Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
- Verify implementation follows those decisions
- If contradiction detected:
- Add WARNING: "Design decision not followed: <decision>"
- Recommendation: "Update implementation or revise design.md to match reality"
- If no design.md: Skip design adherence check, note "No design.md to verify against"
**Code Pattern Consistency**:
- Review new code for consistency with project patterns
- Check file naming, directory structure, coding style
- If significant deviations found:
- Add SUGGESTION: "Code pattern deviation: <details>"
- Recommendation: "Consider following project pattern: <example>"
8. **Generate Verification Report**
**Summary Scorecard**:
```
## Verification Report: <change-name>
### Summary
| Dimension | Status |
|--------------|------------------|
| Completeness | X/Y tasks, N reqs|
| Correctness | M/N reqs covered |
| Coherence | Followed/Issues |
```
**Issues by Priority**:
1. **CRITICAL** (Must fix before archive):
- Incomplete tasks
- Missing requirement implementations
- Each with specific, actionable recommendation
2. **WARNING** (Should fix):
- Spec/design divergences
- Missing scenario coverage
- Each with specific recommendation
3. **SUGGESTION** (Nice to fix):
- Pattern inconsistencies
- Minor improvements
- Each with specific recommendation
**Final Assessment**:
- If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
- If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
- If all clear: "All checks passed. Ready for archive."
**Verification Heuristics**
- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
- **Coherence**: Look for glaring inconsistencies, don't nitpick style
- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
**Graceful Degradation**
- If only tasks.md exists: verify task completion only, skip spec/design checks
- If tasks + specs exist: verify completeness and correctness, skip design
- If full artifacts: verify all three dimensions
- Always note which checks were skipped and why
**Output Format**
Use clear markdown with:
- Table for summary scorecard
- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
- Code references in format: `file.ts:123`
- Specific, actionable recommendations
- No vague suggestions like "consider reviewing"

File diff suppressed because one or more lines are too long

120
backtest_command.py Executable file
View File

@@ -0,0 +1,120 @@
#!/usr/bin/env python3
import argparse
import sys
import tabulate
import backtest_core
def parse_arguments():
parser = argparse.ArgumentParser(description="量化回测工具", formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument("--codes", type=str, nargs="+", required=True, help="股票代码列表 (如: 000001.SZ 600000.SH)", )
parser.add_argument("--start-date", type=str, required=True, help="回测开始日期 (格式: YYYY-MM-DD)", )
parser.add_argument("--end-date", type=str, required=True, help="回测结束日期 (格式: YYYY-MM-DD)", )
parser.add_argument("--strategy-file", type=str, required=True, help="策略文件路径 (如: strategy.py)", )
parser.add_argument("--cash", type=float, default=100000, help="初始资金 (默认: 100000)", )
parser.add_argument("--commission", type=float, default=0.002, help="手续费率 (默认: 0.002)", )
parser.add_argument("--warmup-days", type=int, default=365, help="预热天数 (默认: 365约一年)", )
parser.add_argument("--output-dir", type=str, default=None, help="HTML 图表输出目录 (可选,为每个股票生成 {code}.html)", )
return parser.parse_args()
def format_single_result(result: backtest_core.BacktestResult):
print("=" * 60)
print(f"股票代码: {result.code}")
print("=" * 60)
indicator_mapping = {
"最终收益": f"{result.equity_final:.2f}",
"峰值收益": f"{result.equity_peak:.2f}",
"总收益率(%": f"{result.return_pct:.2f}",
"买入并持有收益率(%": f"{result.buy_hold_return_pct:.2f}",
"年化收益率(%": f"{result.return_ann_pct:.2f}",
"年化波动率(%": f"{result.volatility_ann_pct:.2f}",
"索提诺比率": f"{result.sortino_ratio:.2f}",
"卡尔玛比率": f"{result.calmar_ratio:.2f}",
"最大回撤(%": f"{result.max_drawdown_pct:.2f}",
"平均回撤(%": f"{result.avg_drawdown_pct:.2f}",
"最大回撤持续时长": f"{result.max_drawdown_duration:.0f}",
"平均回撤持续时长": f"{result.avg_drawdown_duration:.0f}",
"总交易次数": f"{result.num_trades:.0f}",
"胜率(%": f"{result.win_rate_pct:.2f}",
"系统质量数": f"{result.sqn:.2f}",
}
for name, value in indicator_mapping.items():
print(f"{name}: {value}")
print("=" * 60)
def format_batch_results(results: list[backtest_core.BacktestResult]):
table_data = []
for result in results:
table_data.append(
[
result.code,
f"{result.return_pct:.2f}",
f"{result.buy_hold_return_pct:.2f}",
f"{result.return_ann_pct:.2f}",
f"{result.volatility_ann_pct:.2f}",
f"{result.win_rate_pct:.2f}",
f"{result.max_drawdown_pct:.2f}",
f"{result.sortino_ratio:.2f}",
f"{result.num_trades:.0f}",
f"{result.sqn:.2f}",
]
)
headers = [
"股票代码",
"收益率%",
"买入持有%",
"年化收益%",
"年化波动%",
"胜率%",
"最大回撤%",
"索提诺比率",
"交易次数",
"SQN",
]
print(tabulate.tabulate(table_data, headers=headers, tablefmt="grid"))
def main():
args = parse_arguments()
try:
results = backtest_core.run_batch_backtest(
codes=args.codes,
start_date=args.start_date,
end_date=args.end_date,
strategy_file=args.strategy_file,
cash=args.cash,
commission=args.commission,
warmup_days=args.warmup_days,
output_dir=args.output_dir,
show_progress=True,
)
if len(results) == 1:
format_single_result(results[0])
else:
format_batch_results(results)
if args.output_dir:
print(f"\n图表已保存到: {args.output_dir}/")
except Exception as e:
print(f"\n错误: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

208
backtest_core.py Normal file
View File

@@ -0,0 +1,208 @@
import dataclasses
import importlib.util
import os
from typing import Optional
import pandas as pd
from tqdm import tqdm
import config
@dataclasses.dataclass
class BacktestResult:
code: str
equity_final: float
equity_peak: float
return_pct: float
buy_hold_return_pct: float
return_ann_pct: float
volatility_ann_pct: float
sortino_ratio: float
calmar_ratio: float
max_drawdown_pct: float
avg_drawdown_pct: float
max_drawdown_duration: float
avg_drawdown_duration: float
num_trades: int
win_rate_pct: float
sqn: float
def load_data_from_db(code: str, start_date: str, end_date: str) -> pd.DataFrame:
import sqlalchemy
import urllib.parse
encoded_password = urllib.parse.quote_plus(config.DB_PASSWORD)
conn_str = f"postgresql://{config.DB_USER}:{encoded_password}@{config.DB_HOST}:{config.DB_PORT}/{config.DB_NAME}"
engine = sqlalchemy.create_engine(conn_str)
try:
query = f"""
SELECT trade_date,
open * factor AS "Open",
close * factor AS "Close",
high * factor AS "High",
low * factor AS "Low",
volume AS "Volume",
COALESCE(factor, 1.0) AS factor
FROM leopard_daily daily
LEFT JOIN leopard_stock stock ON stock.id = daily.stock_id
WHERE stock.code = '{code}'
AND daily.trade_date BETWEEN '{start_date} 00:00:00'
AND '{end_date} 23:59:59'
ORDER BY daily.trade_date
"""
df = pd.read_sql(query, engine)
if len(df) == 0:
raise ValueError(f"未找到股票 {code} 在指定时间范围内的数据")
df["trade_date"] = pd.to_datetime(df["trade_date"], format="%Y-%m-%d")
df.set_index("trade_date", inplace=True)
return df
finally:
engine.dispose()
def load_strategy(strategy_file: str):
module_name = strategy_file.replace(".py", "").replace("/", ".")
spec = importlib.util.spec_from_file_location(module_name, strategy_file)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
if not hasattr(module, "calculate_indicators"):
raise AttributeError(f"策略文件 {strategy_file} 缺少 calculate_indicators 函数")
if not hasattr(module, "get_strategy"):
raise AttributeError(f"策略文件 {strategy_file} 缺少 get_strategy 函数")
calculate_indicators = module.calculate_indicators
strategy_class = module.get_strategy()
if not isinstance(strategy_class, type):
raise TypeError("get_strategy() 必须返回一个类")
from backtesting import Strategy
if not issubclass(strategy_class, Strategy):
raise TypeError("策略类必须继承 backtesting.Strategy")
return calculate_indicators, strategy_class
def apply_color_scheme():
import backtesting._plotting as plotting
plotting.BULL_COLOR = config.BULL_COLOR
plotting.BEAR_COLOR = config.BEAR_COLOR
def run_backtest(
code: str,
start_date: str,
end_date: str,
strategy_file: str,
cash: float = config.DEFAULT_CASH,
commission: float = config.DEFAULT_COMMISSION,
warmup_days: int = config.DEFAULT_WARMUP_DAYS,
output_dir: Optional[str] = None,
) -> BacktestResult:
warmup_start_date = (pd.to_datetime(start_date) - pd.Timedelta(days=warmup_days)).strftime("%Y-%m-%d")
data = load_data_from_db(code, warmup_start_date, end_date)
calculate_indicators, strategy_class = load_strategy(strategy_file)
data = calculate_indicators(data)
data = data.loc[start_date:end_date]
from backtesting import Backtest
bt = Backtest(data, strategy_class, cash=cash, commission=commission, finalize_trades=True)
stats = bt.run()
apply_color_scheme()
if output_dir:
os.makedirs(output_dir, exist_ok=True)
output_path = os.path.join(output_dir, f"{code}.html")
bt.plot(filename=output_path, open_browser=False)
def _safe_float(value, default=0):
if value is None:
return default
try:
return float(value)
except (TypeError, ValueError):
return default
def _safe_int(value, default=0):
if value is None:
return default
try:
return int(value)
except (TypeError, ValueError):
return default
def _safe_timedelta(value, default=0):
if value is None:
return default
try:
return float(value.total_seconds() / 86400)
except (TypeError, AttributeError):
return default
return BacktestResult(
code=code,
equity_final=_safe_float(stats.get("Equity Final [$]"), 0),
equity_peak=_safe_float(stats.get("Equity Peak [$]"), 0),
return_pct=_safe_float(stats.get("Return [%]"), 0),
buy_hold_return_pct=_safe_float(stats.get("Buy & Hold Return [%]"), 0),
return_ann_pct=_safe_float(stats.get("Return (Ann.) [%]"), 0),
volatility_ann_pct=_safe_float(stats.get("Volatility (Ann.) [%]"), 0),
sortino_ratio=_safe_float(stats.get("Sortino Ratio"), 0),
calmar_ratio=_safe_float(stats.get("Calmar Ratio"), 0),
max_drawdown_pct=_safe_float(stats.get("Max. Drawdown [%]"), 0),
avg_drawdown_pct=_safe_float(stats.get("Avg. Drawdown [%]"), 0),
max_drawdown_duration=_safe_timedelta(stats.get("Max. Drawdown Duration"), 0),
avg_drawdown_duration=_safe_timedelta(stats.get("Avg. Drawdown Duration"), 0),
num_trades=_safe_int(stats.get("# Trades"), 0),
win_rate_pct=_safe_float(stats.get("Win Rate [%]"), 0),
sqn=_safe_float(stats.get("SQN"), 0),
)
def run_batch_backtest(
codes: list[str],
start_date: str,
end_date: str,
strategy_file: str,
cash: float = config.DEFAULT_CASH,
commission: float = config.DEFAULT_COMMISSION,
warmup_days: int = config.DEFAULT_WARMUP_DAYS,
output_dir: Optional[str] = None,
show_progress: bool = True,
) -> list[BacktestResult]:
results = []
codes_iter = tqdm(codes, desc="批量回测") if show_progress else codes
for code in codes_iter:
result = run_backtest(
code=code,
start_date=start_date,
end_date=end_date,
strategy_file=strategy_file,
cash=cash,
commission=commission,
warmup_days=warmup_days,
output_dir=output_dir,
)
results.append(result)
return results
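`load_data_from_db` above interpolates `code` and the date strings directly into the SQL text. A sketch of the same query using bound parameters (same tables and columns as in the file; the standalone `load_data` helper is illustrative, not a drop-in replacement) avoids quoting and injection issues:

```python
import pandas as pd
import sqlalchemy

# The values are supplied as bind parameters (:code, :start_ts, :end_ts)
# instead of being formatted into the SQL string.
QUERY = sqlalchemy.text("""
    SELECT trade_date,
           open * factor AS "Open",
           close * factor AS "Close",
           high * factor AS "High",
           low * factor AS "Low",
           volume AS "Volume",
           COALESCE(factor, 1.0) AS factor
    FROM leopard_daily daily
    LEFT JOIN leopard_stock stock ON stock.id = daily.stock_id
    WHERE stock.code = :code
      AND daily.trade_date BETWEEN :start_ts AND :end_ts
    ORDER BY daily.trade_date
""")

def load_data(engine, code: str, start_date: str, end_date: str) -> pd.DataFrame:
    return pd.read_sql(
        QUERY,
        engine,
        params={
            "code": code,
            "start_ts": f"{start_date} 00:00:00",
            "end_ts": f"{end_date} 23:59:59",
        },
    )
```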

20
config.py Normal file
View File

@@ -0,0 +1,20 @@
"""
配置文件
集中管理数据库配置、默认回测参数、图表配色
"""
DB_HOST = "81.71.3.24"
DB_PORT = 6785
DB_NAME = "leopard_dev"
DB_USER = "leopard"
DB_PASSWORD = "9NEzFzovnddf@PyEP?e*AYAWnCyd7UhYwQK$pJf>7?ccFiN^x4$eKEZ5~E<7<+~X"
DEFAULT_CASH = 100000
DEFAULT_COMMISSION = 0.002
DEFAULT_WARMUP_DAYS = 365
from bokeh.colors.named import tomato, lime
BULL_COLOR = tomato
BEAR_COLOR = lime
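The credentials above are hardcoded in the module. A common alternative (a sketch only; the `LEOPARD_DB_*` variable names are illustrative, not part of the project) is to read them from environment variables with non-secret fallbacks:

```python
import os

# Read connection settings from the environment, falling back to defaults.
# The variable names (LEOPARD_DB_*) are hypothetical, not used by this repo.
DB_HOST = os.environ.get("LEOPARD_DB_HOST", "localhost")
DB_PORT = int(os.environ.get("LEOPARD_DB_PORT", "5432"))
DB_NAME = os.environ.get("LEOPARD_DB_NAME", "leopard_dev")
DB_USER = os.environ.get("LEOPARD_DB_USER", "leopard")
DB_PASSWORD = os.environ.get("LEOPARD_DB_PASSWORD", "")  # no default for secrets
```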

136
data.py Normal file
View File

@@ -0,0 +1,136 @@
from datetime import date, datetime, timedelta
from time import sleep
from sqlalchemy import Column, Double, Integer, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Session
from tushare import pro_api
TUSHARE_API_KEY = '64ebff4fa679167600b905ee45dd88e76f3963c0ff39157f3f085f0e'
class Base(DeclarativeBase):
pass
class Stock(Base):
__tablename__ = 'stock'
code = Column(String, primary_key=True, comment="代码")
name = Column(String, comment="名称")
fullname = Column(String, comment="全名")
market = Column(String, comment="市场")
exchange = Column(String, comment="交易所")
industry = Column(String, comment="行业")
list_date = Column(String, comment="上市日期")
class Daily(Base):
__tablename__ = 'daily'
code = Column(String, primary_key=True)
trade_date = Column(String, primary_key=True)
open = Column(Double)
close = Column(Double)
high = Column(Double)
low = Column(Double)
previous_close = Column(Double)
turnover = Column(Double)
volume = Column(Integer)
price_change_amount = Column(Double)
factor = Column(Double)
def main():
print("开始更新数据")
engine = create_engine("sqlite:////Users/lanyuanxiaoyao/Documents/leopard_data/leopard.sqlite")
try:
Stock.metadata.create_all(engine, checkfirst=True)
Daily.metadata.create_all(engine, checkfirst=True)
pro = pro_api(TUSHARE_API_KEY)
# with engine.connect() as connection:
# stocks = pro.stock_basic(list_status="L", market="主板", fields="ts_code,name,fullname,market,exchange,industry,list_date")
# for row in stocks.itertuples():
# stmt = insert(Stock).values(
# code=row.ts_code,
# name=row.name,
# fullname=row.fullname,
# market=row.market,
# exchange=row.exchange,
# industry=row.industry,
# list_date=row.list_date,
# )
# stmt = stmt.on_conflict_do_update(
# index_elements=["code"],
# set_={
# "name": stmt.excluded.name,
# "fullname": stmt.excluded.fullname,
# "market": stmt.excluded.market,
# "exchange": stmt.excluded.exchange,
# "industry": stmt.excluded.industry,
# "list_date": stmt.excluded.list_date,
# },
# )
# print(stmt)
# connection.execute(stmt)
# connection.commit()
#
# print("清理行情数据")
# connection.execute(text("delete from daily where code not in (select distinct code from stock)"))
# connection.commit()
#
# print("清理财务数据")
# connection.execute(text("delete from finance_indicator where code not in (select distinct code from stock)"))
# connection.commit()
with Session(engine) as session:
stock_codes = [row[0] for row in session.query(Stock.code).all()]
latest_date = session.query(Daily.trade_date).order_by(Daily.trade_date.desc()).first()
if latest_date is None:
latest_date = '1990-12-19'
else:
latest_date = latest_date.trade_date
latest_date = datetime.strptime(latest_date, '%Y-%m-%d').date()
current_date = date.today() - timedelta(days=1)
delta = (current_date - latest_date).days
print(f"最新数据日期:{latest_date},当前日期:{current_date},待更新天数:{delta}")
if delta > 0:
update_dates = []
for i in range(delta):
latest_date = latest_date + timedelta(days=1)
update_dates.append(latest_date.strftime('%Y%m%d'))
for target_date in update_dates:
print(f"正在采集:{target_date}")
dailies = pro.daily(trade_date=target_date)
dailies.set_index("ts_code", inplace=True)
factors = pro.adj_factor(trade_date=target_date)
factors.set_index("ts_code", inplace=True)
results = dailies.join(factors, lsuffix="_daily", rsuffix="_factor", how="left")
rows = []
for row in results.itertuples():
if row.Index in stock_codes:
rows.append(
Daily(
code=row.Index,
trade_date=datetime.strptime(target_date, '%Y%m%d').strftime("%Y-%m-%d"),
open=row.open,
close=row.close,
high=row.high,
low=row.low,
previous_close=row.pre_close,
turnover=row.amount,
volume=row.vol,
price_change_amount=row.pct_chg,
factor=row.adj_factor,
)
)
session.add_all(rows)
session.commit()
sleep(1)
finally:
engine.dispose()
if __name__ == '__main__':
main()
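The collection loop above throttles with a fixed `sleep(1)` between trade dates. A hedged sketch of a retry-with-backoff wrapper (the wrapper and its parameters are illustrative, not part of the tushare API) could make the loop more robust against transient rate-limit errors:

```python
import time

def with_retry(fn, *args, retries: int = 3, base_delay: float = 1.0, **kwargs):
    """Call fn(*args, **kwargs), retrying on exception with exponential
    backoff (base_delay, 2*base_delay, 4*base_delay, ...)."""
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical usage inside the loop:
# dailies = with_retry(pro.daily, trade_date=target_date)
```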

295
note_refactor.md Normal file
View File

@@ -0,0 +1,295 @@
# Backtest Code Refactoring Notes
## Overview
This refactoring splits the original single-file `backtest.py` into a modular architecture to improve code reuse and maintainability.
## File Structure Changes
### New Files
1. **config.py** - configuration module
- Database settings: DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD
- Default backtest parameters: DEFAULT_CASH, DEFAULT_COMMISSION, DEFAULT_WARMUP_DAYS
- Chart colors: BULL_COLOR, BEAR_COLOR
2. **backtest_core.py** - core backtest engine
- `BacktestResult` dataclass: structured backtest results
- `load_data_from_db()`: load historical data from the database
- `load_strategy()`: dynamically load a strategy file
- `apply_color_scheme()`: apply the chart color scheme
- `run_backtest()`: single-stock backtest
- `run_batch_backtest()`: batch backtest (serial execution)
3. **backtest_command.py** - command-line interface
- `parse_arguments()`: parse command-line arguments
- `format_single_result()`: detailed output for a single stock
- `format_batch_results()`: tabular output for multiple stocks (via tabulate)
- `main()`: main flow orchestration
### Removed Files
1. **backtest.py** - the original single file (284 lines)
## Interface Changes
### New API
```python
# Single-stock backtest
result = backtest_core.run_backtest(
code='000001.SZ',
start_date='2024-01-01',
end_date='2024-12-31',
strategy_file='strategies/sma_strategy.py',
cash=100000,
commission=0.002,
warmup_days=365,
output_dir=None  # optional; no chart is generated when None
)
# Batch backtest
results = backtest_core.run_batch_backtest(
codes=['000001.SZ', '600000.SH'],
start_date='2024-01-01',
end_date='2024-12-31',
strategy_file='strategies/sma_strategy.py',
cash=100000,
commission=0.002,
warmup_days=365,
output_dir='output/',  # optional; writes {code}.html per stock
show_progress=True  # optional; show a tqdm progress bar
)
```
### New Data Structure
```python
@dataclasses.dataclass
class BacktestResult:
code: str
equity_final: float
equity_peak: float
return_pct: float
buy_hold_return_pct: float
return_ann_pct: float
volatility_ann_pct: float
sortino_ratio: float
calmar_ratio: float
max_drawdown_pct: float
avg_drawdown_pct: float
max_drawdown_duration: float
avg_drawdown_duration: float
num_trades: int
win_rate_pct: float
sqn: float
```
## Command-Line Usage Changes
### Old (removed)
```bash
python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2024-12-31 --strategy-file strategy.py
```
### New
```bash
uv run python backtest_command.py --codes 000001.SZ --start-date 2024-01-01 --end-date 2024-12-31 --strategy-file strategies/sma_strategy.py
```
### Parameter Changes
| Parameter | Change | Notes |
|--------|--------|------|
| `--code` | renamed to `--codes` | single value changed to multi-value (`nargs='+'`) |
| `--output` | renamed to `--output-dir` | takes a directory instead of a file path |
### New Parameters
- `--output-dir`: chart output directory (optional)
- Single stock: writes `{code}.html` into the directory
- Multiple stocks: writes one `{code}.html` per stock
- No charts are generated when omitted
## Output Format Changes
### Single-Stock Output
The original detailed format is kept, one indicator per line:
```
============================================================
股票代码: 000001.SZ
============================================================
最终收益: 100981.58
峰值收益: 103731.54
总收益率(%): 0.98
...
============================================================
```
### Multi-Stock Output
New tabular output (tabulate, grid format):
```
+------------+-----------+---------+-------------+------------+-------+
| 股票代码 | 收益率% | 胜率% | 最大回撤% | 交易次数 | SQN |
+============+===========+=========+=============+============+=======+
| 000001.SZ | 0.98 | 100 | -2.65 | 1 | nan |
| 600000.SH | 0.04 | 100 | -1.5 | 1 | nan |
+------------+-----------+---------+-------------+------------+-------+
```
### Progress Bar
A tqdm progress bar is shown when backtesting multiple stocks:
```
批量回测: 50%|█████ | 1/2 [00:07<00:07, 7.82s/it]
```
## Dependency Changes
### New Dependencies
- `tabulate`: table formatting
- Version: 0.9.0
- Used for tabular output of batch backtest results
- `tqdm`: progress bars
- Version: 4.67.1
- Used for real-time progress feedback during batch backtests
## Feature Enhancements
### New Features
1. **Batch backtesting**: serial backtests over multiple stock codes
- Command: `--codes 000001.SZ 600000.SH`
- Output: tabular comparison of results
- Progress bar: real-time backtest progress
2. **Chart generation**: one standalone HTML chart per stock
- Parameter: `--output-dir output/`
- Output: `{code}.html` in the given directory
- Directory created automatically: `os.makedirs(output_dir, exist_ok=True)`
3. **Progress display**: real-time feedback via tqdm
- Shown automatically for multiple stocks
- Can be disabled with `show_progress=False`
## Compatibility Notes
### BREAKING CHANGES
1. **Command-line entry point changed**
- Old: `python backtest.py`
- New: `uv run python backtest_command.py`
2. **Parameter renamed**
- `--code` → `--codes` (single value to multi-value)
### Guarantees
- All existing functionality is preserved
- Core backtest logic is unchanged
- Strategy loading is unchanged
- Data access interface is unchanged
## Line Count Comparison
| File | Old | New | Delta |
|------|---------|---------|------|
| backtest.py | 284 | - | -284 |
| config.py | - | 20 | +20 |
| backtest_core.py | - | ~200 | +200 |
| backtest_command.py | - | ~120 | +120 |
| **Total** | **284** | **~340** | **+56** |
## Migration Guide
### For Developers
To call the backtest from other modules:
```python
from backtest_core import run_backtest, run_batch_backtest, BacktestResult
# Single-stock backtest
result = run_backtest(
code='000001.SZ',
start_date='2024-01-01',
end_date='2024-12-31',
strategy_file='strategies/sma_strategy.py'
)
# Batch backtest
results = run_batch_backtest(
codes=['000001.SZ', '600000.SH'],
start_date='2024-01-01',
end_date='2024-12-31',
strategy_file='strategies/sma_strategy.py'
)
# Access results
print(result.return_pct)
print(result.win_rate_pct)
```
### For End Users
**Single-stock backtest:**
```bash
uv run python backtest_command.py \
--codes 000001.SZ \
--start-date 2024-01-01 \
--end-date 2024-12-31 \
--strategy-file strategies/sma_strategy.py
```
**Multi-stock backtest:**
```bash
uv run python backtest_command.py \
--codes 000001.SZ 600000.SH \
--start-date 2024-01-01 \
--end-date 2024-12-31 \
--strategy-file strategies/sma_strategy.py
```
**Chart generation:**
```bash
uv run python backtest_command.py \
--codes 000001.SZ \
--start-date 2024-01-01 \
--end-date 2024-12-31 \
--strategy-file strategies/sma_strategy.py \
--output-dir output/
```
## Error Handling
- **Fail fast**: stop at the first error rather than continuing with the remaining stocks
- **Friendly messages**: exceptions are caught and reported clearly
- **Exit codes**: 0 on success, non-zero on failure
- **Traceback**: the full stack trace is printed for debugging
## Performance Considerations
- **Serial execution**: currently serial, keeping things simple and reliable
- **Future extension**: could move to parallel execution (ThreadPoolExecutor) for speed
- **Data loading**: each backtest opens its own database connection, avoiding connection-pool complexity
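The parallel extension mentioned above could be sketched as follows (a sketch only; `run_one` stands in for `run_backtest` with its other arguments bound, e.g. via `functools.partial`, and assumes each call is thread-safe):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable

def run_batch_parallel(run_one: Callable[[str], object],
                       codes: Iterable[str],
                       max_workers: int = 4) -> list:
    """Run run_one(code) for each code concurrently, preserving input order.

    Errors surface on .result(), which matches the fail-fast policy above.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_one, code) for code in codes]
        return [f.result() for f in futures]
```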
## Summary
This refactoring delivers:
- ✅ Modular code: core logic separated from the CLI
- ✅ Reusability: a standard API for other modules to call
- ✅ New features: batch backtesting and chart generation
- ✅ User experience: tabular results and a progress bar
- ✅ Code quality: clearer module boundaries and type hints

1088
notebook/backtest.ipynb Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,455 @@
{
"cells": [
{
"metadata": {
"ExecuteTime": {
"end_time": "2026-01-30T05:41:51.291397Z",
"start_time": "2026-01-30T04:34:22.917761Z"
}
},
"cell_type": "code",
"source": [
"import urllib.parse\n",
"\n",
"import pandas as pd\n",
"import sqlalchemy\n",
"from sqlalchemy import text\n",
"from sqlalchemy.orm import DeclarativeBase, Session\n",
"\n",
"postgresql_engin = sqlalchemy.create_engine(\n",
" f\"postgresql://leopard:{urllib.parse.quote_plus(\"9NEzFzovnddf@PyEP?e*AYAWnCyd7UhYwQK$pJf>7?ccFiN^x4$eKEZ5~E<7<+~X\")}@81.71.3.24:6785/leopard\"\n",
")\n",
"sqlite_engine = sqlalchemy.create_engine(f\"sqlite:////Users/lanyuanxiaoyao/Documents/leopard_data/leopard.sqlite\")\n",
"\n",
"\n",
"class Base(DeclarativeBase):\n",
" pass\n",
"\n",
"\n",
"class Daily(Base):\n",
" __tablename__ = 'daily'\n",
"\n",
" code = sqlalchemy.Column(sqlalchemy.String, primary_key=True)\n",
" trade_date = sqlalchemy.Column(sqlalchemy.Date, primary_key=True)\n",
" open = sqlalchemy.Column(sqlalchemy.Double)\n",
" close = sqlalchemy.Column(sqlalchemy.Double)\n",
" high = sqlalchemy.Column(sqlalchemy.Double)\n",
" low = sqlalchemy.Column(sqlalchemy.Double)\n",
" previous_close = sqlalchemy.Column(sqlalchemy.Double)\n",
" turnover = sqlalchemy.Column(sqlalchemy.Double)\n",
" volume = sqlalchemy.Column(sqlalchemy.Integer)\n",
" price_change_amount = sqlalchemy.Column(sqlalchemy.Double)\n",
" factor = sqlalchemy.Column(sqlalchemy.Double)\n",
"\n",
"\n",
"try:\n",
" with Session(postgresql_engin) as pg_session:\n",
" results = pg_session.execute(text(\"select distinct trade_date from leopard_daily\")).fetchall()\n",
" results = list(map(lambda x: x[0].strftime(\"%Y-%m-%d\"), results))\n",
" dates = [results[i: i + 30] for i in range(0, len(results), 30)]\n",
"\n",
" for index, date in enumerate(dates):\n",
" print(date)\n",
" daily_df = pd.read_sql(\n",
" f\"\"\"\n",
" select code,\n",
" trade_date,\n",
" open,\n",
" close,\n",
" high,\n",
" low,\n",
" previous_close,\n",
" turnover,\n",
" volume,\n",
" price_change_amount,\n",
" factor\n",
" from leopard_daily d\n",
" left join leopard_stock s on d.stock_id = s.id\n",
" where d.trade_date in ('{\"','\".join(date)}')\n",
" \"\"\",\n",
" postgresql_engin\n",
" )\n",
" with Session(sqlite_engine) as session:\n",
" rows = []\n",
" for _, row in daily_df.iterrows():\n",
" rows.append(\n",
" Daily(\n",
" code=row[\"code\"],\n",
" trade_date=row[\"trade_date\"],\n",
" open=row[\"open\"],\n",
" close=row[\"close\"],\n",
" high=row[\"high\"],\n",
" low=row[\"low\"],\n",
" previous_close=row[\"previous_close\"],\n",
" turnover=row[\"turnover\"],\n",
" volume=row[\"volume\"],\n",
" price_change_amount=row[\"price_change_amount\"],\n",
" factor=row[\"factor\"]\n",
" )\n",
" )\n",
" session.add_all(rows)\n",
" session.commit()\n",
"finally:\n",
" postgresql_engin.dispose()\n",
" sqlite_engine.dispose()"
],
"id": "48821306efc640a1",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['2025-12-25', '2025-12-26', '2025-12-29', '2025-12-30', '2025-12-31', '2026-01-05', '2026-01-06', '2026-01-07', '2026-01-08', '2026-01-09']\n"
]
}
],
"execution_count": 22
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2026-01-30T09:24:09.859231Z",
"start_time": "2026-01-30T09:24:09.746912Z"
}
},
"cell_type": "code",
"source": [
"import tushare as ts\n",
"\n",
"pro = ts.pro_api(\"64ebff4fa679167600b905ee45dd88e76f3963c0ff39157f3f085f0e\")\n",
"# stocks = pro.stock_basic(ts_code=\"600200.SH\", list_status=\"D\", fields=\"ts_code,name,fullname,market,exchange,industry,list_date,delist_date\")\n",
"# stocks"
],
"id": "ed58a1faaf2cdb8e",
"outputs": [],
"execution_count": 34
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2026-01-30T07:14:29.897120Z",
"start_time": "2026-01-30T07:14:29.664124Z"
}
},
"cell_type": "code",
"source": "# stocks.to_csv(\"dlist.csv\")",
"id": "3c8c0a38d6b2992e",
"outputs": [],
"execution_count": 24
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2026-01-30T09:46:34.808300Z",
"start_time": "2026-01-30T09:46:34.129412Z"
}
},
"cell_type": "code",
"source": [
"daily_df = pro.daily(trade_date=\"20251231\")\n",
"daily_df.set_index(\"ts_code\", inplace=True)\n",
"factor_df = pro.adj_factor(trade_date=\"20251231\")\n",
"factor_df.set_index(\"ts_code\", inplace=True)"
],
"id": "c052a945869aa329",
"outputs": [],
"execution_count": 50
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2026-01-30T09:46:36.697015Z",
"start_time": "2026-01-30T09:46:36.642975Z"
}
},
"cell_type": "code",
"source": [
"result_df = daily_df.join(factor_df, lsuffix=\"_daily\", rsuffix=\"_factor\", how=\"left\")\n",
"result_df\n",
"# factor_df"
],
"id": "d61ee80d2cd9f06b",
"outputs": [
{
"data": {
"text/plain": [
" trade_date_daily open high low close pre_close change \\\n",
"ts_code \n",
"000001.SZ 20251231 11.48 11.49 11.40 11.41 11.48 -0.07 \n",
"000002.SZ 20251231 4.66 4.68 4.62 4.65 4.62 0.03 \n",
"000004.SZ 20251231 11.30 11.35 11.07 11.08 11.27 -0.19 \n",
"000006.SZ 20251231 9.95 10.03 9.69 9.95 9.86 0.09 \n",
"000007.SZ 20251231 11.72 11.75 11.28 11.44 11.62 -0.18 \n",
"... ... ... ... ... ... ... ... \n",
"920978.BJ 20251231 37.64 38.39 36.88 36.90 37.78 -0.88 \n",
"920981.BJ 20251231 32.20 32.29 31.75 31.96 32.07 -0.11 \n",
"920982.BJ 20251231 233.00 238.49 232.10 233.70 234.80 -1.10 \n",
"920985.BJ 20251231 7.32 7.35 7.17 7.19 7.30 -0.11 \n",
"920992.BJ 20251231 17.33 17.60 17.29 17.39 17.38 0.01 \n",
"\n",
" pct_chg vol amount trade_date_factor adj_factor \n",
"ts_code \n",
"000001.SZ -0.6098 590620.37 675457.357 20251231 134.5794 \n",
"000002.SZ 0.6494 1075561.25 499883.113 20251231 181.7040 \n",
"000004.SZ -1.6859 18056.00 20248.567 20251231 4.0640 \n",
"000006.SZ 0.9128 270369.08 267758.676 20251231 39.7400 \n",
"000007.SZ -1.5491 80556.00 92109.366 20251231 8.2840 \n",
"... ... ... ... ... ... \n",
"920978.BJ -2.3293 33945.04 126954.937 20251231 1.2885 \n",
"920981.BJ -0.3430 8237.16 26301.206 20251231 1.4343 \n",
"920982.BJ -0.4685 5210.09 122452.646 20251231 4.2831 \n",
"920985.BJ -1.5068 35174.30 25350.257 20251231 1.6280 \n",
"920992.BJ 0.0575 6991.87 12193.445 20251231 1.4932 \n",
"\n",
"[5458 rows x 12 columns]"
],
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>trade_date_daily</th>\n",
" <th>open</th>\n",
" <th>high</th>\n",
" <th>low</th>\n",
" <th>close</th>\n",
" <th>pre_close</th>\n",
" <th>change</th>\n",
" <th>pct_chg</th>\n",
" <th>vol</th>\n",
" <th>amount</th>\n",
" <th>trade_date_factor</th>\n",
" <th>adj_factor</th>\n",
" </tr>\n",
" <tr>\n",
" <th>ts_code</th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" <th></th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>000001.SZ</th>\n",
" <td>20251231</td>\n",
" <td>11.48</td>\n",
" <td>11.49</td>\n",
" <td>11.40</td>\n",
" <td>11.41</td>\n",
" <td>11.48</td>\n",
" <td>-0.07</td>\n",
" <td>-0.6098</td>\n",
" <td>590620.37</td>\n",
" <td>675457.357</td>\n",
" <td>20251231</td>\n",
" <td>134.5794</td>\n",
" </tr>\n",
" <tr>\n",
" <th>000002.SZ</th>\n",
" <td>20251231</td>\n",
" <td>4.66</td>\n",
" <td>4.68</td>\n",
" <td>4.62</td>\n",
" <td>4.65</td>\n",
" <td>4.62</td>\n",
" <td>0.03</td>\n",
" <td>0.6494</td>\n",
" <td>1075561.25</td>\n",
" <td>499883.113</td>\n",
" <td>20251231</td>\n",
" <td>181.7040</td>\n",
" </tr>\n",
" <tr>\n",
" <th>000004.SZ</th>\n",
" <td>20251231</td>\n",
" <td>11.30</td>\n",
" <td>11.35</td>\n",
" <td>11.07</td>\n",
" <td>11.08</td>\n",
" <td>11.27</td>\n",
" <td>-0.19</td>\n",
" <td>-1.6859</td>\n",
" <td>18056.00</td>\n",
" <td>20248.567</td>\n",
" <td>20251231</td>\n",
" <td>4.0640</td>\n",
" </tr>\n",
" <tr>\n",
" <th>000006.SZ</th>\n",
" <td>20251231</td>\n",
" <td>9.95</td>\n",
" <td>10.03</td>\n",
" <td>9.69</td>\n",
" <td>9.95</td>\n",
" <td>9.86</td>\n",
" <td>0.09</td>\n",
" <td>0.9128</td>\n",
" <td>270369.08</td>\n",
" <td>267758.676</td>\n",
" <td>20251231</td>\n",
" <td>39.7400</td>\n",
" </tr>\n",
" <tr>\n",
" <th>000007.SZ</th>\n",
" <td>20251231</td>\n",
" <td>11.72</td>\n",
" <td>11.75</td>\n",
" <td>11.28</td>\n",
" <td>11.44</td>\n",
" <td>11.62</td>\n",
" <td>-0.18</td>\n",
" <td>-1.5491</td>\n",
" <td>80556.00</td>\n",
" <td>92109.366</td>\n",
" <td>20251231</td>\n",
" <td>8.2840</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>920978.BJ</th>\n",
" <td>20251231</td>\n",
" <td>37.64</td>\n",
" <td>38.39</td>\n",
" <td>36.88</td>\n",
" <td>36.90</td>\n",
" <td>37.78</td>\n",
" <td>-0.88</td>\n",
" <td>-2.3293</td>\n",
" <td>33945.04</td>\n",
" <td>126954.937</td>\n",
" <td>20251231</td>\n",
" <td>1.2885</td>\n",
" </tr>\n",
" <tr>\n",
" <th>920981.BJ</th>\n",
" <td>20251231</td>\n",
" <td>32.20</td>\n",
" <td>32.29</td>\n",
" <td>31.75</td>\n",
" <td>31.96</td>\n",
" <td>32.07</td>\n",
" <td>-0.11</td>\n",
" <td>-0.3430</td>\n",
" <td>8237.16</td>\n",
" <td>26301.206</td>\n",
" <td>20251231</td>\n",
" <td>1.4343</td>\n",
" </tr>\n",
" <tr>\n",
" <th>920982.BJ</th>\n",
" <td>20251231</td>\n",
" <td>233.00</td>\n",
" <td>238.49</td>\n",
" <td>232.10</td>\n",
" <td>233.70</td>\n",
" <td>234.80</td>\n",
" <td>-1.10</td>\n",
" <td>-0.4685</td>\n",
" <td>5210.09</td>\n",
" <td>122452.646</td>\n",
" <td>20251231</td>\n",
" <td>4.2831</td>\n",
" </tr>\n",
" <tr>\n",
" <th>920985.BJ</th>\n",
" <td>20251231</td>\n",
" <td>7.32</td>\n",
" <td>7.35</td>\n",
" <td>7.17</td>\n",
" <td>7.19</td>\n",
" <td>7.30</td>\n",
" <td>-0.11</td>\n",
" <td>-1.5068</td>\n",
" <td>35174.30</td>\n",
" <td>25350.257</td>\n",
" <td>20251231</td>\n",
" <td>1.6280</td>\n",
" </tr>\n",
" <tr>\n",
" <th>920992.BJ</th>\n",
" <td>20251231</td>\n",
" <td>17.33</td>\n",
" <td>17.60</td>\n",
" <td>17.29</td>\n",
" <td>17.39</td>\n",
" <td>17.38</td>\n",
" <td>0.01</td>\n",
" <td>0.0575</td>\n",
" <td>6991.87</td>\n",
" <td>12193.445</td>\n",
" <td>20251231</td>\n",
" <td>1.4932</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>5458 rows × 12 columns</p>\n",
"</div>"
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 51
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

1157
notebook/indicator.ipynb Normal file

File diff suppressed because one or more lines are too long

82
notebook/sqlalchemy.ipynb Normal file

File diff suppressed because one or more lines are too long


@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-01-27


@@ -0,0 +1,441 @@
# Design: Refactor Backtest Script
## Context
### 当前状态
现有的回测系统基于 Jupyter Notebook (`backtest.ipynb`),包含以下手动执行步骤:
1. 通过 SQL magic 查询数据库获取股票价格数据(含复权)
2. 数据预处理(重命名列、设置索引)
3. 计算技术指标SMA10, SMA30, SMA60, SMA120
4. 定义策略类SmaCross金叉买入、死叉卖出
5. 执行回测并打印结果
6. 生成交互式图表Bokeh
### 约束条件
- 数据库PostgreSQL (leopard_dev@81.71.3.24)
- 数据表:`leopard_daily` (日线数据), `leopard_stock` (股票信息)
- 回测引擎:`backtesting` Python 库
- 复权逻辑:`price * factor`factor 从数据库获取)
- 输出格式:中文标签 + Bokeh HTML 图表
### 利益相关者
- 量化研究员:需要快速测试不同策略、不同股票的回测表现
- 策略开发者:需要独立开发策略,通过标准接口集成
- 运维人员:需要支持批量自动化回测任务
## Goals / Non-Goals
### Goals
1. **命令行化执行** - 通过命令行参数完成回测,无需交互式环境
2. **策略模块化** - 策略逻辑与主流程分离,支持动态加载不同策略文件
3. **参数化配置** - 支持股票代码、时间范围、初始资金、手续费率等参数
4. **简化的数据访问** - 保持简单的数据库连接逻辑,不引入过度抽象
5. **清晰的结果输出** - 控制台中文统计 + 可选的 HTML 图表文件
### Non-Goals
- ❌ 不支持多时间周期(仅日线)
- ❌ 不支持多股票组合回测(仅单股票)
- ❌ 不支持参数优化(固定策略参数)
- ❌ 不支持实盘交易接口
- ❌ 不引入复杂的依赖注入或插件系统
- ❌ 不实现 Web UI 或 API 接口
## Decisions
### D1: 文件结构 - 单一入口文件 + 策略文件
**决策**:
- `backtest.py` - 包含所有主流程逻辑(参数解析、数据加载、回测执行、结果输出)
- `strategy.py` - 策略模板(指标计算函数 + 策略类)
- 可选 `strategies/` 目录 - 存放其他策略文件
**理由**:
- 用户要求简化文件数量,保持流程集中
- 单一入口文件便于理解和维护
- 策略文件独立,便于多人协作开发
**替代方案**:
- 将数据加载、结果输出拆分为独立模块 - 被用户拒绝("设计的文件太多了,需要简化")
---
### D2: 策略接口 - 两个必需函数 + 策略类
**决策**: 策略文件必须提供:
1. **`calculate_indicators(data)` 函数**
```python
def calculate_indicators(data: pd.DataFrame) -> pd.DataFrame:
"""计算策略所需的技术指标,返回添加了指标列的 DataFrame"""
```
2. **`get_strategy()` 函数**
```python
def get_strategy() -> type:
"""返回策略类Strategy 的子类)"""
```
3. **策略类定义**
```python
from backtesting import Strategy
class MyStrategy(Strategy):
def init(self):
"""注册指标到 backtesting 框架"""
pass
def next(self):
"""每个时间步的决策逻辑"""
pass
```
**理由**:
- 将指标计算与交易逻辑分离,主流程可以预处理所有数据
- `get_strategy()` 函数提供清晰的加载接口
- 遵循 `backtesting` 库的接口规范
**替代方案**:
- 将 `calculate_indicators` 作为策略类的方法 - 问题:主流程无法先计算指标,必须在 Strategy 类中注册
---
### D3: 策略动态加载 - 使用 `importlib`
**决策**:
```python
import importlib.util
from pathlib import Path

def load_strategy(strategy_file):
    """动态加载策略文件,返回 (calculate_indicators, strategy_class)"""
    module_name = Path(strategy_file).stem  # 以文件名(不含扩展名)作为模块名
    spec = importlib.util.spec_from_file_location(module_name, strategy_file)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    calculate_indicators = module.calculate_indicators
    strategy_class = module.get_strategy()
    return calculate_indicators, strategy_class
```
**理由**:
- 支持任意路径的策略文件(如 `strategy.py`, `strategies/macd.py`
- 无需预定义策略列表或配置文件
- Python 标准库,无额外依赖
**替代方案**:
- 约定式加载(所有策略放在 `strategies/` 目录) - 灵活性不足
- 配置文件映射策略名称和文件路径 - 增加维护成本
---
### D4: 数据库连接 - 简化 SQLAlchemy 连接
**决策**:
```python
import sqlalchemy
conn_str = f"postgresql://{user}:{password}@{host}/{database}"
engine = sqlalchemy.create_engine(conn_str)
df = pd.read_sql(query, engine)
engine.dispose()
```
**理由**:
- 用户要求"数据库访问保持简单,不需要太多抽象"
- SQLAlchemy 提供基础连接池和 SQL 注入防护
- 支持参数化查询(未来扩展)
**SQL 查询**(列别名加双引号PostgreSQL 会把未加引号的标识符折叠为小写,否则返回的列名不符合 backtesting 的大写要求):
```sql
SELECT
    trade_date,
    open * factor  AS "Open",
    close * factor AS "Close",
    high * factor  AS "High",
    low * factor   AS "Low",
    volume         AS "Volume",
    COALESCE(factor, 1.0) AS factor
FROM leopard_daily daily
LEFT JOIN leopard_stock stock ON stock.id = daily.stock_id
WHERE stock.code = '{code}'
  AND daily.trade_date BETWEEN '{start_date} 00:00:00'
                           AND '{end_date} 23:59:59'
ORDER BY daily.trade_date
```
**替代方案**:
- 直接使用 `psycopg2` - 需要手动处理游标和类型转换
- 引入 ORM 模型 - 过度抽象,与"保持简单"要求矛盾
---
### D5: 执行顺序 - 先计算指标,再执行回测
**决策**:
```
1. load_data_from_db() → 获取原始价格数据
2. calculate_indicators(data) → 添加指标列到 DataFrame
3. Backtest(data, strategy_class) → 执行回测
```
**理由**:
- 指标计算与回测分离,便于调试和验证
- 避免在 Strategy 类的 `init()` 中重复计算
- 支持可视化指标(如果需要)
**示例流程**:
```python
data = load_data_from_db('000001.SZ', '2024-01-01', '2025-12-31')
# data 包含: Open, High, Low, Close, Volume, factor
data = calculate_indicators(data)
# data 新增: sma10, sma30, sma60, sma120
bt = Backtest(data, SmaCross, cash=100000, commission=0.002)
stats = bt.run()
```
**替代方案**:
- 在 Strategy 类的 `init()` 中计算指标 - 导致指标逻辑分散,难以调试
---
### D6: 输出格式 - 控制台 + 可选 HTML 文件
**决策**:
**控制台输出**:
- 始终打印回测统计信息(中文格式化)
- 使用 notebook 中定义的 `INDICATOR_MAPPING` 映射
**HTML 输出**:
- 仅当指定 `--output` 参数时生成
- 使用 `backtesting` 库的 `bt.plot(filename=..., show=False)` 方法
- 生成独立的 HTML 文件,无需浏览器环境
**理由**:
- 用户要求"输出包括命令行输出和 html 文件输出,使用一个参数控制"
- 控制台输出便于快速查看HTML 文件便于分享和详细分析
- `show=False` 确保在无头环境中也能生成文件
**示例用法**:
```bash
# 仅控制台输出
python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategy.py
# 控制台 + HTML 文件
python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategy.py --output result.html
```
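上述控制台输出可以用如下最小示意实现(假设性草图:`INDICATOR_MAPPING` 的完整键值以 notebook 中的定义为准,此处仅列出三个示例键):

```python
# 示意print_stats 的一个最小实现INDICATOR_MAPPING 仅列出部分示例键)
INDICATOR_MAPPING = {
    'Return [%]': '总收益率 [%]',
    'Max. Drawdown [%]': '最大回撤 [%]',
    'Win Rate [%]': '胜率 [%]',
}

def print_stats(stats):
    """将回测统计结果按中文标签格式化输出,数值保留 2 位小数"""
    lines = []
    for key, label in INDICATOR_MAPPING.items():
        value = stats.get(key)
        if value is None:
            continue  # 跳过当前回测结果中不存在的指标
        text = f"{value:.2f}" if isinstance(value, float) else str(value)
        lines.append(f"{label}: {text}")
    print("\n".join(lines))
    return lines
```

backtesting 的 stats 对象是 pandas Series`.get(key)` 的用法对 Series 与普通 dict 均成立。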
**替代方案**:
- 始终生成 HTML 文件 - 增加不必要的磁盘 I/O
- 自动在浏览器打开 - 不适用于服务器环境
---
### D8: 预热天数 - 命令行参数控制
**决策**:
```python
parser.add_argument('--warmup-days', type=int, default=365,
                    help='预热天数(默认: 365约一年')
```
**执行逻辑**:
1. 用户从数据库查询的日期范围:`--start-date` 到 `--end-date`
2. 回测前,从数据中截取最后 N 天(由 `--warmup-days` 指定)
3. 截取的数据用于指标计算和回测
**理由**:
- 用户明确要求:"如果命令行参数指定了,就用参数指定的时长,否则默认预热时长为一年"
- 简化实现,不需要自动计算各策略所需的最长预热期
- 灵活性高,用户可根据需要调整预热天数
- 避免复杂化:不解析策略代码以确定最长指标周期
**示例**:
```python
# 查询 2024-01-01 到 2025-12-31 的数据2 年,约 500 条交易日记录)
data = load_data_from_db('000001.SZ', '2024-01-01', '2025-12-31')
# 默认预热 365截取最后 365 行;按交易日计约 1.5 个自然年,并非恰好 2025 年全年
data = data.iloc[-365:]
# 用户指定预热 180截取最后 180 个交易日
data = data.iloc[-180:]
```
**替代方案**:
- 自动计算策略所需的最长指标周期 - 需要解析策略代码,复杂度高
- 不截取数据,依赖策略自己处理 NaN - 但用户明确要求预热天数控制
---
### D7: 数据库凭证 - 硬编码常量
**决策**:
```python
# 数据库配置(开发环境,直接硬编码)
DB_HOST = '81.71.3.24'
DB_NAME = 'leopard_dev'
DB_USER = 'your_username'
DB_PASSWORD = 'your_password'
```
**理由**:
- 用户明确要求:"数据库凭证不使用环境变量,开发人员直接硬编码到代码里即可"
- 开发环境仅内部使用,无安全风险
- 简化实现,无需环境变量管理
- 不引入额外的配置文件或库
**替代方案**:
- 使用环境变量 - 用户明确拒绝
- 使用配置文件 - 增加维护成本,用户明确不需要
---
## Risks / Trade-offs
### R1: SQL 注入风险
**风险**: 当前查询使用字符串拼接,存在 SQL 注入风险
**缓解措施**:
- 用户要求"数据库访问保持简单",暂不实现参数化查询
- 文档中明确说明输入格式(股票代码、日期)
- 后续可在 `load_data_from_db()` 中添加输入验证
---
### R2: 策略文件加载失败
**风险**: 动态加载策略文件时,文件不存在或代码错误会导致运行时崩溃
**缓解措施**:
- 使用 `try-except` 捕获 `ImportError` 和 `AttributeError`
- 提供清晰的错误信息:"策略文件 {file} 加载失败: {error}"
- 在文档中说明策略文件的标准接口
---
### R3: 指标计算性能
**风险**: 大数据集(如 10 年日线数据)计算指标可能较慢
**缓解措施**:
- 使用 pandas 的向量化操作(已实现)
- 考虑在文档中提示:首次运行可能较慢,后续可缓存指标数据
- 当前不优化(属于非目标范围)
---
### R4: 策略接口兼容性
**风险**: 用户编写的策略文件可能不符合接口要求(缺少 `calculate_indicators` 或 `get_strategy`
**缓解措施**:
- 提供 `strategy.py` 作为标准模板
- 在 `load_strategy()` 中进行接口检查
- 运行时捕获 `AttributeError` 并提示缺失的函数
---
### R5: 图表生成失败
**风险**: Bokeh 生成 HTML 文件时可能因数据格式或依赖问题失败
**缓解措施**:
- 仅在用户指定 `--output` 参数时才尝试生成图表
- 使用 `try-except` 捕获异常,不影响统计信息输出
- 错误提示:"图表生成失败,但回测已完成: {error}"
---
### R6: 时区和日期处理
**风险**: 数据库中的日期与用户输入的日期可能存在时区差异
**缓解措施**:
- 当前 SQL 查询使用 `BETWEEN 'start_date 00:00:00' AND 'end_date 23:59:59'` 覆盖全天
- 假设数据库和用户输入使用相同的时区(本地时间)
- 文档中说明日期格式为 `YYYY-MM-DD`
---
## Resolved Decisions
1. **数据库凭证管理**: ✅ 已决定 - 直接硬编码在代码中
- 实现方式:在 backtest.py 中定义 DB_HOST, DB_NAME, DB_USER, DB_PASSWORD 常量
- 不使用环境变量、不使用配置文件
- 开发人员可直接修改代码中的凭证
- 无安全风险(仅开发环境内部使用)
2. **错误处理详细程度**: ✅ 已决定 - 仅打印到控制台,不写入日志文件
- 实现方式:所有错误信息直接使用 `print()` 输出到 stdout/stderr
- 不引入日志库logging
- 保持输出简洁,便于管道处理
3. **指标预热期**: ✅ 已决定 - 通过 `--warmup-days` 命令行参数控制
- 实现方式:默认 365 天(约 1 年),用户可指定其他值
- 不自动计算策略所需的最长指标周期
- 使用 `data.iloc[-warmup_days:]` 截取数据
4. **多策略并行**: ✅ 已决定 - 不支持一次回测运行多个策略
- 实现方式:每次命令执行只支持单个策略文件
- 如需对比策略,用户需多次执行命令
- 不实现多进程/多线程并行回测
---
## Implementation Overview
### 核心流程
```
main()
├─ parse_arguments() # 解析命令行参数
├─ load_data_from_db() # 从数据库获取价格数据
│ └─ 返回 DataFrame: [Open, High, Low, Close, Volume, factor]
├─ load_strategy() # 动态加载策略文件
│ └─ 返回: (calculate_indicators, strategy_class)
├─ calculate_indicators(data) # 计算技术指标
│ └─ 返回添加了指标列的 DataFrame
├─ Backtest(data, strategy) # 执行回测
│ └─ 返回 stats 对象
├─ print_stats(stats) # 控制台输出中文统计
└─ bt.plot(filename=..., show=False) # 可选:生成 HTML 图表
```
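上面的核心流程可以用依赖注入的方式串成一个可单测的示意(假设性草图:`run_pipeline` 及注入方式并非设计强制,仅演示各步骤的先后与图表失败不影响统计的约定):

```python
def run_pipeline(load_data, load_strategy, make_backtest, output_path=None):
    """按设计的执行顺序串联各步骤;步骤以可调用对象注入,便于脱离数据库测试"""
    data = load_data()                                      # 1. 加载价格数据
    calculate_indicators, strategy_class = load_strategy()  # 2. 加载策略
    data = calculate_indicators(data)                       # 3. 计算指标
    bt, stats = make_backtest(data, strategy_class)         # 4. 执行回测
    if output_path is not None:                             # 5. 可选生成图表
        try:
            bt.plot(filename=output_path, show=False)
        except Exception as e:
            print(f"图表生成失败,但回测已完成: {e}")
    return stats
```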
### 文件结构
```
leopard_analysis/
├── backtest.py # 主流程脚本
├── strategy.py # SMA 策略模板
├── strategies/ # 其他策略(可选)
│ ├── macd_strategy.py
│ ├── rsi_strategy.py
│ └── ...
├── requirements.txt # 依赖列表
└── README.md # 使用说明(可选)
```
### 依赖关系
```
backtest.py
├─ argparse # 命令行参数解析
├─ sqlalchemy # 数据库连接
├─ pandas # 数据处理
├─ importlib # 动态模块加载
└─ backtesting # 回测引擎
strategy.py
├─ pandas # DataFrame 操作
├─ backtesting # Strategy 基类
└─ backtesting.lib # crossover 等工具函数
```


@@ -0,0 +1,83 @@
# Proposal: Refactor Backtest Script
## Why
当前回测系统使用 Jupyter Notebook (`backtest.ipynb`) 手动执行,存在以下问题:
- 不支持自动化批量回测,无法通过命令行调用
- 策略逻辑与数据获取混在一起,难以复用和切换
- 缺乏参数化配置,每次回测需要手动修改代码
- 无法方便地对比不同策略在同一股票、不同时间段的表现
通过将回测流程重构为命令行工具,可以实现:
- 支持命令行参数化调用,便于批量执行
- 策略模块化,支持动态加载不同的策略文件
- 简化数据加载逻辑,专注于回测核心流程
- 提高代码可维护性和可扩展性
## What Changes
### 新增文件
1. **backtest.py** - 主流程脚本
- 命令行参数解析 (`--code`, `--start-date`, `--end-date`, `--strategy-file`, `--cash`, `--commission`, `--output`)
- 数据库连接与数据加载(查询复权后的价格数据)
- 动态加载策略文件(通过 `importlib`
- 执行回测(使用 `backtesting` 库)
- 结果输出:
- 控制台中文格式化统计信息
- HTML 图表文件(可选,通过 `--output` 参数控制)
2. **strategy.py** - 策略模板文件
- `calculate_indicators(data)` 函数:计算策略所需的技术指标(如 SMA、MACD
- `get_strategy()` 函数:返回策略类
- `SmaCross` 类:继承 `backtesting.Strategy`,实现交易逻辑(金叉买入、死叉卖出)
### 主要功能特性
- **动态策略加载**:通过 `--strategy-file` 参数指定任意策略文件
- **简化的数据库访问**:直接 SQL 查询获取数据,不引入额外抽象
- **指标计算策略化**:每个策略文件自己定义需要计算的指标
- **结果输出控制**:默认控制台输出,通过 `--output` 参数生成 HTML 图表
### 现有文件变更
- 无(新增文件,不修改现有 Notebook
## Capabilities
### New Capabilities
- **backtest-cli**: 命令行回测工具,支持通过参数化方式执行量化回测
- **strategy-loading**: 动态加载策略模块,支持从指定路径导入策略类和指标计算函数
- **data-fetching**: 从 PostgreSQL 数据库获取股票历史价格数据,自动处理复权
### Modified Capabilities
- 无(不涉及现有规范级别的需求变更)
## Impact
### 代码影响
- 新增 `backtest.py` 作为主入口文件
- 新增 `strategy.py` 作为策略模板
- 可选新增 `strategies/` 目录存放其他策略文件
### 依赖影响
**新增依赖**
- `sqlalchemy` - 数据库连接
- `backtesting` - 回测引擎
- `pandas`, `numpy` - 数据处理(已存在于 Notebook 中)
### API/系统影响
- 无外部 API 变更
- 数据库查询逻辑从 Notebook 迁移到 Python 脚本
- 输出从 Notebook 交互式展示改为命令行 + HTML 文件
### 用户影响
- 用户可以通过命令行执行回测,无需打开 Jupyter Notebook
- 策略开发者可以独立开发策略文件,通过约定接口集成到主流程
- 回测结果以 HTML 文件形式保存,便于分享和查看


@@ -0,0 +1,195 @@
# Spec: Backtest CLI
## ADDED Requirements
### Requirement: 命令行参数解析
回测脚本 SHALL 通过命令行参数接收用户输入,参数 SHALL 包含股票代码、时间范围、策略文件、回测参数等。
#### Scenario: 基础回测执行
- **WHEN** 用户执行 `python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategy.py`
- **THEN** 系统解析所有必需参数,无错误提示
- **THEN** 开始执行回测流程
- **THEN** 回测完成后输出统计信息到控制台
#### Scenario: 可选参数未指定
- **WHEN** 用户未指定 `--cash` 参数
- **THEN** 系统使用默认值 100000 作为初始资金
- **WHEN** 用户未指定 `--commission` 参数
- **THEN** 系统使用默认值 0.002 作为手续费率
- **WHEN** 用户未指定 `--output` 参数
- **THEN** 系统不生成 HTML 图表文件
#### Scenario: 必需参数缺失
- **WHEN** 用户未提供 `--code` 参数
- **THEN** 系统输出错误信息:"错误: 需要以下参数: --code"
- **THEN** 系统退出并返回非零状态码
- **WHEN** 用户未提供 `--start-date``--end-date` 参数
- **THEN** 系统输出对应的错误信息
- **THEN** 系统退出并返回非零状态码
#### Scenario: 自定义参数值
- **WHEN** 用户指定 `--cash 500000 --commission 0.001 --output result.html`
- **THEN** 系统使用指定的 500000 作为初始资金
- **THEN** 系统使用指定的 0.001 作为手续费率
- **THEN** 回测完成后生成 HTML 图表到 result.html
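本需求中的参数约定可以草拟为(假设性示意:参数名与默认值取自本规范,`argv` 形参为便于测试而加):

```python
import argparse

def parse_arguments(argv=None):
    """解析命令行参数;默认值与规范一致cash=100000, commission=0.002"""
    parser = argparse.ArgumentParser(description='命令行回测工具')
    parser.add_argument('--code', required=True, help='股票代码,如 000001.SZ')
    parser.add_argument('--start-date', required=True, help='开始日期 YYYY-MM-DD')
    parser.add_argument('--end-date', required=True, help='结束日期 YYYY-MM-DD')
    parser.add_argument('--strategy-file', required=True, help='策略文件路径')
    parser.add_argument('--cash', type=float, default=100000, help='初始资金')
    parser.add_argument('--commission', type=float, default=0.002, help='手续费率')
    parser.add_argument('--output', default=None, help='HTML 图表输出路径(可选)')
    return parser.parse_args(argv)
```

必需参数缺失时argparse 会自动输出错误并以非零状态码退出,与"必需参数缺失"场景一致。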
---
### Requirement: 数据库数据加载
回测脚本 SHALL 从 PostgreSQL 数据库加载指定股票的历史价格数据,并自动处理复权。
#### Scenario: 成功加载数据
- **WHEN** 用户指定有效的股票代码和时间范围
- **THEN** 系统连接数据库并执行查询
- **THEN** 返回 DataFrame包含列: [Open, High, Low, Close, Volume, factor]
- **THEN** DataFrame 的索引为 trade_date (DatetimeIndex)
- **THEN** 数据已应用复权计算price * factor
#### Scenario: 数据库连接失败
- **WHEN** 数据库连接失败(凭证错误、网络问题等)
- **THEN** 系统捕获异常并输出错误信息:"数据库连接失败: {error}"
- **THEN** 系统退出并返回非零状态码
#### Scenario: 未找到股票数据
- **WHEN** 指定的股票代码或时间范围内无数据
- **THEN** 系统抛出 ValueError: "未找到股票 {code} 在指定时间范围内的数据"
- **THEN** 主流程捕获异常并输出友好错误信息
- **THEN** 系统退出并返回非零状态码
#### Scenario: 数据验证
- **WHEN** 数据库返回的 DataFrame 为空
- **THEN** 系统提示数据为空并退出
- **WHEN** 数据库返回的 DataFrame 少于 10 条记录
- **THEN** 系统提示数据不足并退出
---
### Requirement: 策略动态加载
回测脚本 SHALL 支持动态加载指定路径的策略文件,并验证策略接口。
#### Scenario: 加载有效策略文件
- **WHEN** 用户指定 `--strategy-file strategy.py`
- **THEN** 系统通过 importlib 加载该模块
- **THEN** 系统获取模块的 `calculate_indicators` 函数
- **THEN** 系统调用模块的 `get_strategy()` 函数获取策略类
- **THEN** 系统返回 (calculate_indicators, strategy_class) 元组
#### Scenario: 策略文件不存在
- **WHEN** 用户指定的策略文件路径不存在
- **THEN** 系统捕获 FileNotFoundError
- **THEN** 输出错误信息:"策略文件 {file} 不存在"
- **THEN** 系统退出并返回非零状态码
#### Scenario: 策略接口不完整
- **WHEN** 策略文件缺少 `calculate_indicators` 函数
- **THEN** 系统捕获 AttributeError
- **THEN** 输出错误信息:"策略文件 {file} 缺少 calculate_indicators 函数"
- **THEN** 系统退出并返回非零状态码
- **WHEN** 策略文件缺少 `get_strategy` 函数
- **THEN** 系统捕获 AttributeError
- **THEN** 输出错误信息:"策略文件 {file} 缺少 get_strategy 函数"
- **THEN** 系统退出并返回非零状态码
#### Scenario: 加载子目录中的策略
- **WHEN** 用户指定 `--strategy-file strategies/macd_strategy.py`
- **THEN** 系统正确加载子目录中的策略模块
- **THEN** 系统成功获取策略类和指标计算函数
---
### Requirement: 指标计算
回测脚本 SHALL 在执行回测前调用策略的指标计算函数,将技术指标添加到数据集中。
#### Scenario: 成功计算指标
- **WHEN** 系统调用 `calculate_indicators(data)`
- **THEN** 函数接收包含 [Open, High, Low, Close, Volume, factor] 的 DataFrame
- **THEN** 函数计算策略所需的指标(如 SMA, MACD, RSI
- **THEN** 函数返回添加了指标列的 DataFrame
- **THEN** DataFrame 保留原始列,新增指标列
#### Scenario: 指标计算产生 NaN 值
- **WHEN** 滚动窗口计算导致前 N 行的指标值为 NaN
- **THEN** DataFrame 包含 NaN 值(系统不自动删除)
- **THEN** Backtest 框架在回测时会跳过 NaN 值的行
#### Scenario: 指标计算函数抛出异常
- **WHEN** `calculate_indicators(data)` 执行时抛出异常
- **THEN** 主流程捕获异常
- **THEN** 输出错误信息:"指标计算失败: {error}"
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 回测执行
回测脚本 SHALL 使用 backtesting 库执行回测,传入数据、策略和参数。
#### Scenario: 成功执行回测
- **WHEN** 系统调用 `Backtest(data, strategy_class, cash=..., commission=...).run()`
- **THEN** Backtest 初始化时调用策略类的 `init()` 方法
- **THEN** Backtest 逐个时间步调用策略类的 `next()` 方法
- **THEN** 系统返回包含回测统计信息的 stats 对象
#### Scenario: 回测参数传递
- **WHEN** 用户指定 `--cash 500000 --commission 0.001`
- **THEN** Backtest 实例化时使用 cash=500000
- **THEN** Backtest 实例化时使用 commission=0.001
- **THEN** Backtest 实例化时使用 finalize_trades=True
#### Scenario: 回测运行时错误
- **WHEN** 策略的 `next()` 方法执行时抛出异常
- **THEN** backtesting 库捕获异常
- **THEN** 系统输出错误信息和堆栈跟踪
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 结果输出
回测脚本 SHALL 将回测统计信息格式化输出到控制台,并可选生成 HTML 图表文件。
#### Scenario: 控制台输出
- **WHEN** 回测成功完成
- **THEN** 系统调用 `print_stats(stats)` 函数
- **THEN** 系统输出回测统计信息,使用中文标签
- **THEN** 输出内容包括:最终收益、总收益率、年化收益率、最大回撤、胜率等
- **THEN** 数值格式化(保留 2 位小数)
#### Scenario: 生成 HTML 图表
- **WHEN** 用户指定 `--output result.html`
- **THEN** 系统调用 `bt.plot(filename='result.html', show=False)`
- **THEN** 系统生成 HTML 文件到 result.html
- **THEN** 系统输出提示:"图表已保存到: result.html"
- **THEN** 图表包含价格曲线、资金曲线、买卖信号等
#### Scenario: 不生成 HTML 图表
- **WHEN** 用户未指定 `--output` 参数
- **THEN** 系统不调用 bt.plot() 方法
- **THEN** 系统不生成任何图表文件
- **THEN** 系统仅输出控制台统计信息
#### Scenario: 图表生成失败
- **WHEN** bt.plot() 方法执行时抛出异常
- **THEN** 系统捕获异常
- **THEN** 系统输出警告:"图表生成失败,但回测已完成: {error}"
- **THEN** 系统不影响控制台统计信息的输出
- **THEN** 系统正常退出(返回状态码 0
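"图表生成失败不影响统计输出"的约定可以示意如下(假设性草图:`save_plot` 为示意命名,`bt` 为 backtesting 的 Backtest 实例):

```python
def save_plot(bt, filename):
    """尝试生成 HTML 图表;失败仅告警,不中断流程,返回是否成功"""
    try:
        bt.plot(filename=filename, show=False)
        print(f"图表已保存到: {filename}")
        return True
    except Exception as e:
        print(f"图表生成失败,但回测已完成: {e}")
        return False
```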
---
### Requirement: 错误处理
回测脚本 SHALL 对所有可能的错误进行捕获和处理,提供友好的错误提示。
#### Scenario: 数据库错误
- **WHEN** 数据库操作抛出 sqlalchemy.exc.SQLAlchemyError
- **THEN** 系统输出错误信息:"数据库错误: {error}"
- **THEN** 系统退出并返回状态码 2
#### Scenario: 文件操作错误
- **WHEN** 图表文件保存失败(权限、磁盘空间等)
- **THEN** 系统输出错误信息:"文件操作错误: {error}"
- **THEN** 系统退出并返回状态码 3
#### Scenario: 未预期的错误
- **WHEN** 发生其他未捕获的异常
- **THEN** 系统输出错误信息:"未知错误: {error}"
- **THEN** 系统输出完整的堆栈跟踪
- **THEN** 系统退出并返回状态码 1


@@ -0,0 +1,280 @@
# Spec: Data Fetching
## ADDED Requirements
### Requirement: 数据库连接配置
系统 SHALL 通过硬编码常量管理数据库连接参数(开发环境)。
#### Scenario: 使用硬编码常量
- **WHEN** 系统在 backtest.py 中定义数据库配置
- **THEN** 系统定义 DB_HOST, DB_NAME, DB_USER, DB_PASSWORD 常量
- **THEN** DB_HOST 值 SHALL 为数据库主机地址(如 '81.71.3.24'
- **THEN** DB_NAME 值 SHALL 为数据库名称(如 'leopard_dev'
- **THEN** DB_USER 值 SHALL 为数据库用户名
- **THEN** DB_PASSWORD 值 SHALL 为数据库密码
#### Scenario: 构建连接字符串
- **WHEN** 系统创建 SQLAlchemy 连接
- **THEN** 系统使用硬编码的常量构建连接字符串
- **THEN** 连接字符串格式 SHALL 为 `postgresql://{user}:{password}@{host}/{database}`
- **THEN** 不从环境变量读取任何凭证
#### Scenario: 修改数据库凭证
- **WHEN** 开发人员需要更换数据库或凭证
- **THEN** 开发人员直接修改 backtest.py 中的常量值
- **THEN** 修改后脚本使用新凭证连接数据库
---
### Requirement: 数据库连接建立
系统 SHALL 使用 SQLAlchemy 创建 PostgreSQL 数据库连接。
#### Scenario: 成功建立连接
- **WHEN** 凭证正确且数据库可访问
- **THEN** 系统使用 `sqlalchemy.create_engine(conn_str)` 创建引擎
- **THEN** 连接字符串格式 SHALL 为 `postgresql://{user}:{password}@{host}/{database}`
- **THEN** 系统成功创建引擎对象
- **THEN** 系统可用于执行查询
#### Scenario: 连接字符串构建
- **WHEN** 系统构建 PostgreSQL 连接字符串
- **THEN** 连接字符串 SHALL 正确编码特殊字符(密码中的 @, : 等)
- **THEN** 连接字符串 SHALL 使用标准 URI 格式
- **THEN** 连接字符串 SHALL 不包含额外选项(仅基础连接参数)
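特殊字符的编码要求可以用标准库 `urllib.parse.quote_plus` 示意(假设性草图:`build_conn_str` 为示意命名):

```python
from urllib.parse import quote_plus

def build_conn_str(user, password, host, database):
    """构建 PostgreSQL 连接字符串;对用户名/密码做 URL 编码,
    避免密码中的 @ : / 等特殊字符破坏 URI"""
    return f"postgresql://{quote_plus(user)}:{quote_plus(password)}@{host}/{database}"
```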
#### Scenario: 数据库连接失败
- **WHEN** 凭证错误或数据库不可达
- **THEN** SQLAlchemy 抛出 `sqlalchemy.exc.OperationalError`
- **THEN** 主流程捕获异常
- **THEN** 系统输出错误信息:"数据库连接失败: {error}"
- **THEN** 系统退出并返回状态码 2
#### Scenario: 连接池管理
- **WHEN** 系统创建引擎对象
- **THEN** SQLAlchemy SHALL 自动管理连接池
- **THEN** 查询后连接 SHALL 自动返回池中
- **THEN** 系统 SHALL 在查询完成后调用 `engine.dispose()` 清理
---
### Requirement: SQL 查询构建
系统 SHALL 构建参数化的 SQL 查询以获取股票历史数据。
#### Scenario: 基础查询结构
- **WHEN** 系统构建查询
- **THEN** 查询 SHALL 选择 trade_date, Open, High, Low, Close, Volume, factor
- **THEN** 查询 SHALL 连接 leopard_daily 和 leopard_stock 表
- **THEN** 查询 SHALL 按 stock.code 过滤
- **THEN** 查询 SHALL 按 trade_date 范围过滤
- **THEN** 查询 SHALL 按 trade_date 升序排序
#### Scenario: 复权价格计算
- **WHEN** 系统计算复权价格
- **THEN** Open SHALL 计算为 `open * factor`
- **THEN** Close SHALL 计算为 `close * factor`
- **THEN** High SHALL 计算为 `high * factor`
- **THEN** Low SHALL 计算为 `low * factor`
- **THEN** Volume SHALL 直接使用原始值(不复权)
- **THEN** factor SHALL 使用 `COALESCE(factor, 1.0)` 处理 NULL 值
#### Scenario: 参数化股票代码
- **WHEN** 用户指定股票代码(如 '000001.SZ'
- **THEN** 查询 WHERE 子句 SHALL 使用 `stock.code = '{code}'`
- **THEN** 代码 SHALL 精确匹配(不使用 LIKE
- **THEN** 查询 SHALL 返回匹配股票的所有日线数据
#### Scenario: 参数化日期范围
- **WHEN** 用户指定开始日期 '2024-01-01' 和结束日期 '2025-12-31'
- **THEN** 查询 WHERE 子句 SHALL 使用 `BETWEEN '{start_date} 00:00:00' AND '{end_date} 23:59:59'`
- **THEN** 00:00:00 和 23:59:59 SHALL 覆盖全天
- **THEN** 日期格式 SHALL 为 YYYY-MM-DD HH:MM:SS
#### Scenario: 完整 SQL 查询
- **WHEN** 系统执行数据加载
- **THEN** 查询 SHALL 为:
```sql
SELECT
    trade_date,
    open * factor  AS "Open",
    close * factor AS "Close",
    high * factor  AS "High",
    low * factor   AS "Low",
    volume         AS "Volume",
    COALESCE(factor, 1.0) AS factor
FROM leopard_daily daily
LEFT JOIN leopard_stock stock ON stock.id = daily.stock_id
WHERE stock.code = '{code}'
  AND daily.trade_date BETWEEN '{start_date} 00:00:00'
                           AND '{end_date} 23:59:59'
ORDER BY daily.trade_date
```
(列别名加双引号以保留大小写PostgreSQL 会把未加引号的标识符折叠为小写,导致列名不符合 backtesting 要求)
---
### Requirement: 数据查询执行
系统 SHALL 使用 pandas 的 `read_sql` 函数执行 SQL 查询并返回 DataFrame。
#### Scenario: 成功执行查询
- **WHEN** SQL 查询有效且数据存在
- **THEN** 系统调用 `pd.read_sql(query, engine)`
- **THEN** 系统返回 DataFrame 对象
- **THEN** DataFrame SHALL 包含查询结果的所有列
- **THEN** DataFrame 行数 SHALL 匹配数据库返回的记录数
#### Scenario: 数据类型处理
- **WHEN** pandas 读取 SQL 结果
- **THEN** trade_date SHALL 自动转换为 datetime 类型
- **THEN** Open, High, Low, Close, Volume SHALL 为 float 类型
- **THEN** factor SHALL 为 float 类型
- **THEN** 系统不需要手动类型转换(除日期索引设置)
#### Scenario: 查询返回空结果
- **WHEN** 指定股票代码或日期范围无数据
- **THEN** `read_sql` 返回空 DataFrame0 行)
- **THEN** 系统检查 `len(df) == 0`
- **THEN** 系统抛出 ValueError: "未找到股票 {code} 在指定时间范围内的数据"
#### Scenario: SQL 语法错误
- **WHEN** SQL 查询包含语法错误
- **THEN** SQLAlchemy 抛出 `sqlalchemy.exc.ProgrammingError`
- **THEN** 主流程捕获异常
- **THEN** 系统输出错误信息:"SQL 查询错误: {error}"
- **THEN** 系统退出并返回状态码 2
---
### Requirement: 数据格式转换
系统 SHALL 将查询结果转换为 backtesting 库要求的格式。
#### Scenario: 设置日期索引
- **WHEN** DataFrame 加载完成
- **THEN** 系统调用 `df.set_index('trade_date', inplace=True)`
- **THEN** DataFrame 的索引 SHALL 为 DatetimeIndex
- **THEN** 索引 SHALL 不再是数值索引
- **THEN** backtesting 库 SHALL 能正确处理日期范围
#### Scenario: 列名格式化
- **WHEN** DataFrame 加载完成
- **THEN** 列名 SHALL 为 ['Open', 'High', 'Low', 'Close', 'Volume', 'factor']
- **THEN** 列名 SHALL 遵循 backtesting 库要求(首字母大写)
- **THEN** 列名 SHALL 与 SQL 查询中的别名一致
#### Scenario: 数据验证
- **WHEN** 系统准备返回 DataFrame
- **THEN** 系统验证 DataFrame 包含必需列
- **THEN** 系统验证 'Open', 'High', 'Low', 'Close', 'Volume' 列存在
- **THEN** 系统验证索引为 DatetimeIndex
- **WHEN** 验证失败
- **THEN** 系统抛出 ValueError: "数据格式不符合要求"
---
### Requirement: 数据清理
系统 SHALL 清理数据以确保回测质量。
#### Scenario: 删除 NULL 值行
- **WHEN** DataFrame 包含 NULL 或 NaN 值
- **THEN** 系统调用 `df.dropna()` 删除
- **THEN** 任何包含 NaN 的行 SHALL 被删除
- **THEN** 返回的 DataFrame SHALL 不包含 NULL 值
#### Scenario: 数据完整性检查
- **WHEN** DataFrame 加载完成
- **THEN** 系统检查 trade_date 连续性
- **THEN** 系统检查无重复日期
- **WHEN** 发现异常
- **THEN** 系统输出警告:"数据存在异常: {detail}"
#### Scenario: 最小数据量验证
- **WHEN** DataFrame 行数少于 10
- **THEN** 系统输出错误:"数据不足,至少需要 10 天数据"
- **THEN** 系统抛出 ValueError
- **THEN** 主流程捕获并退出
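上面三个清理步骤可以合并为一个小示意(假设性草图:`clean_data` 及其签名并非规范强制,仅演示 dropna、重复日期告警与最小行数校验

```python
import pandas as pd

def clean_data(df, min_rows=10):
    """按规范清理:删除含 NaN 的行、去除重复日期并告警、校验最小数据量"""
    df = df.dropna()
    if df.index.has_duplicates:
        print("数据存在异常: 发现重复日期")
        df = df[~df.index.duplicated(keep='first')]
    if len(df) < min_rows:
        raise ValueError(f"数据不足,至少需要 {min_rows} 天数据")
    return df
```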
---
### Requirement: 资源管理
系统 SHALL 正确管理数据库连接和内存资源。
#### Scenario: 引擎创建和清理
- **WHEN** 系统开始数据加载
- **THEN** 系统创建 SQLAlchemy 引擎对象
- **THEN** 系统使用引擎执行查询
- **WHEN** 查询完成
- **THEN** 系统调用 `engine.dispose()` 关闭连接池
- **THEN** 系统释放所有数据库连接
#### Scenario: 异常情况下的资源清理
- **WHEN** 查询过程中抛出异常
- **THEN** 系统在 finally 块中调用 `engine.dispose()`
- **THEN** 所有连接 SHALL 被正确关闭
- **THEN** 系统不会泄漏数据库连接
---
### Requirement: 错误处理和日志
系统 SHALL 提供清晰的错误信息和调试支持。
#### Scenario: 连接错误信息
- **WHEN** 数据库连接失败
- **THEN** 错误信息 SHALL 包含数据库主机和端口
- **THEN** 错误信息 SHALL 区分网络错误和认证错误
- **THEN** 系统提示用户检查凭证和网络连接
#### Scenario: 查询错误信息
- **WHEN** SQL 查询失败
- **THEN** 错误信息 SHALL 包含失败的 SQL 语句
- **THEN** 错误信息 SHALL 包含数据库返回的错误详情
- **THEN** 系统提示用户检查表结构和数据
#### Scenario: 数据格式错误信息
- **WHEN** 返回的 DataFrame 不符合要求
- **THEN** 错误信息 SHALL 列出缺失的列
- **THEN** 错误信息 SHALL 提示期望的格式
- **THEN** 系统建议用户检查数据库表结构
---
### Requirement: 函数接口
`load_data_from_db` 函数 SHALL 提供清晰的调用接口。
#### Scenario: 函数签名
- **WHEN** 主流程调用 `load_data_from_db(code, start_date, end_date)`
- **THEN** 函数接收三个字符串参数
- **THEN** `code` 为股票代码(如 '000001.SZ'
- **THEN** `start_date` 为开始日期(如 '2024-01-01'
- **THEN** `end_date` 为结束日期(如 '2025-12-31'
#### Scenario: 返回值
- **WHEN** 数据加载成功
- **THEN** 函数返回 pandas.DataFrame
- **THEN** DataFrame 索引为 DatetimeIndextrade_date
- **THEN** DataFrame 包含 ['Open', 'High', 'Low', 'Close', 'Volume', 'factor'] 列
#### Scenario: 异常抛出
- **WHEN** 数据加载失败
- **THEN** 函数 SHALL 抛出异常(不捕获)
- **THEN** 异常类型 SHALL 为 ValueError业务逻辑错误
- **THEN** 主流程负责捕获和处理异常
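该函数接口可以拆成两个可单测的小块示意(假设性拆分:`build_query` / `format_data` 为示意命名;实际 `load_data_from_db(code, start_date, end_date)` 内部执行 `pd.read_sql(build_query(...), engine)`,并在 finally 中调用 `engine.dispose()`

```python
import pandas as pd

def build_query(code, start_date, end_date):
    """构建查询语句(开发环境按设计使用字符串拼接;别名加双引号保留大小写)"""
    return f'''
        SELECT trade_date,
               open * factor  AS "Open",
               high * factor  AS "High",
               low * factor   AS "Low",
               close * factor AS "Close",
               volume         AS "Volume",
               COALESCE(factor, 1.0) AS factor
        FROM leopard_daily daily
        LEFT JOIN leopard_stock stock ON stock.id = daily.stock_id
        WHERE stock.code = '{code}'
          AND daily.trade_date BETWEEN '{start_date} 00:00:00' AND '{end_date} 23:59:59'
        ORDER BY daily.trade_date
    '''

def format_data(df, code):
    """转换为 backtesting 要求的格式:空数据校验、日期索引、必需列校验"""
    if len(df) == 0:
        raise ValueError(f"未找到股票 {code} 在指定时间范围内的数据")
    df = df.set_index('trade_date')
    df.index = pd.to_datetime(df.index)
    missing = [c for c in ('Open', 'High', 'Low', 'Close', 'Volume') if c not in df.columns]
    if missing:
        raise ValueError(f"数据格式不符合要求,缺少列: {missing}")
    return df
```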
---
### Requirement: 性能考虑
系统 SHALL 优化数据加载性能以支持大数据集。
#### Scenario: 使用 pandas 向量化操作
- **WHEN** 执行复权计算
- **THEN** 计算 SHALL 使用 pandas 向量化操作
- **THEN** 不使用循环逐行计算
- **THEN** 10 年数据(约 2500 行) SHALL 在 1 秒内加载
#### Scenario: 索引优化
- **WHEN** 设置 DataFrame 索引
- **THEN** `set_index()` 操作 SHALL 高效(使用底层数组拷贝)
- **THEN** 日期索引 SHALL 支持快速范围查询
#### Scenario: 内存管理
- **WHEN** 加载大数据集
- **THEN** 系统 SHALL 及时调用 `engine.dispose()` 释放连接
- **THEN** DataFrame SHALL 使用 pandas 内部优化存储
- **THEN** 内存占用 SHALL 合理10 年数据约几 MB


@@ -0,0 +1,225 @@
# Spec: Strategy Loading
## ADDED Requirements
### Requirement: 策略文件接口
策略文件 SHALL 提供两个必需的接口:指标计算函数和策略类获取函数。
#### Scenario: 标准策略文件结构
- **WHEN** 用户创建策略文件
- **THEN** 文件 SHALL 包含 `calculate_indicators(data)` 函数
- **THEN** 文件 SHALL 包含 `get_strategy()` 函数
- **THEN** 文件 SHALL 包含一个继承 `backtesting.Strategy` 的类
- **THEN** 所有三个组件 SHALL 在同一文件中
#### Scenario: calculate_indicators 函数签名
- **WHEN** 主流程调用 `calculate_indicators(data)`
- **THEN** 函数接收一个参数data (pandas.DataFrame)
- **THEN** 函数返回一个 pandas.DataFrame
- **THEN** 返回的 DataFrame SHALL 包含原始列和新增的指标列
- **THEN** 函数 SHALL 修改输入的 DataFrame不创建副本
#### Scenario: get_strategy 函数签名
- **WHEN** 主流程调用 `get_strategy()`
- **THEN** 函数不接收参数
- **THEN** 函数返回一个类对象
- **THEN** 返回的类 SHALL 继承自 `backtesting.Strategy`
---
### Requirement: 指标计算函数
`calculate_indicators` 函数 SHALL 计算策略所需的技术指标,并将结果添加到 DataFrame 中。
#### Scenario: SMA 指标计算
- **WHEN** 策略需要简单移动平均线指标
- **THEN** 函数使用 `data['Close'].rolling(window=N).mean()` 计算
- **THEN** 函数将结果存储为 `data['smaN']`
- **THEN** N 为具体的周期(如 10, 30, 60, 120
#### Scenario: MACD 指标计算
- **WHEN** 策略需要 MACD 指标
- **THEN** 函数使用 `data['Close'].ewm(span=12).mean()` 计算 EMA12
- **THEN** 函数使用 `data['Close'].ewm(span=26).mean()` 计算 EMA26
- **THEN** 函数计算 MACD = EMA12 - EMA26
- **THEN** 函数计算 Signal = MACD.ewm(span=9).mean()
- **THEN** 函数将结果存储为 `data['macd']`, `data['macd_signal']`, `data['macd_hist']`
#### Scenario: RSI 指标计算
- **WHEN** 策略需要 RSI 指标
- **THEN** 函数计算价格变化 delta = data['Close'].diff()
- **THEN** 函数计算 gain = delta.where(delta > 0, 0)
- **THEN** 函数计算 loss = -delta.where(delta < 0, 0)
- **THEN** 函数计算平均收益和平均损失
- **THEN** 函数计算 RS = average_gain / average_loss
- **THEN** 函数计算 RSI = 100 - (100 / (1 + RS))
- **THEN** 函数将结果存储为 `data['rsi']`
#### Scenario: 多指标计算
- **WHEN** 策略需要多个技术指标
- **THEN** 函数按顺序计算每个指标
- **THEN** 函数将所有指标列添加到 DataFrame
- **THEN** DataFrame 最终包含原始列 + 所有指标列
- **THEN** 计算顺序 SHALL 遵循指标间的依赖关系(如 MACD 依赖 EMA
#### Scenario: 指标列命名约定
- **WHEN** 函数添加指标列到 DataFrame
- **THEN** 列名 SHALL 使用小写和下划线(如 `sma10`, `macd_signal`
- **THEN** 列名 SHALL 与策略类的 `init()` 方法中引用的名称一致
- **THEN** 列名 SHALL 避免与原始列冲突
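按本需求的公式,`calculate_indicators` 可以示意如下(假设性草图:周期参数取规范示例值RSI 的平均收益/损失此处用简单滚动均值,规范未指明是否采用 Wilder 平滑):

```python
import pandas as pd

def calculate_indicators(data):
    """按规范计算 SMA/MACD/RSI 并以小写列名写回(直接修改传入的 DataFrame"""
    close = data['Close']
    for n in (10, 30):
        data[f'sma{n}'] = close.rolling(window=n).mean()
    ema12 = close.ewm(span=12).mean()
    ema26 = close.ewm(span=26).mean()
    data['macd'] = ema12 - ema26
    data['macd_signal'] = data['macd'].ewm(span=9).mean()
    data['macd_hist'] = data['macd'] - data['macd_signal']
    delta = close.diff()
    gain = delta.where(delta > 0, 0.0)
    loss = -delta.where(delta < 0, 0.0)
    rs = gain.rolling(window=14).mean() / loss.rolling(window=14).mean()
    data['rsi'] = 100 - 100 / (1 + rs)
    return data
```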
---
### Requirement: 策略类定义
策略类 SHALL 继承 `backtesting.Strategy`,并实现 `init()``next()` 方法。
#### Scenario: 策略类继承
- **WHEN** 用户定义策略类
- **THEN** 类 SHALL 显式继承 `backtesting.Strategy`
- **THEN** 类 SHALL 定义类属性作为可配置参数
- **THEN** 类名 SHALL 使用大驼峰命名(如 `SmaCross`, `MacdStrategy`
#### Scenario: init 方法实现
- **WHEN** Backtest 框架初始化策略时
- **THEN** 系统调用策略类的 `init()` 方法
- **THEN** `init()` 方法 SHALL 使用 `self.I()` 注册指标
- **THEN** `self.I(lambda x: x, self.data.column_name)` SHALL 引用 DataFrame 中的指标列
- **THEN** `init()` 方法 SHALL 不执行数据计算
#### Scenario: next 方法实现 - 金叉买入
- **WHEN** 短期均线上穿长期均线(金叉)
- **THEN** `next()` 方法 SHALL 调用 `self.position.close()` 平仓
- **THEN** `next()` 方法 SHALL 调用 `self.buy()` 开多仓
- **THEN** `next()` 方法 SHALL 使用 `crossover()` 函数检测交叉
#### Scenario: next 方法实现 - 死叉卖出
- **WHEN** 短期均线下穿长期均线(死叉)
- **THEN** `next()` 方法 SHALL 调用 `self.position.close()` 平仓
- **THEN** `next()` 方法 SHALL 调用 `self.sell()` 开空仓
- **THEN** `next()` 方法 SHALL 使用 `crossover()` 函数检测交叉
#### Scenario: next 方法实现 - 避免重复开仓
- **WHEN** 策略已持有多仓,且买入信号触发
- **THEN** `next()` 方法 SHALL 先调用 `self.position.close()`
- **THEN** `next()` 方法 SHALL 再调用 `self.buy()`
- **THEN** 系统 SHALL 自动处理仓位管理(不重复开仓)
#### Scenario: 可配置策略参数
- **WHEN** 策略类定义类属性
- **THEN** 类属性 SHALL 作为策略参数(如 `short_period = 10`
- **THEN** Backtest 框架 SHALL 自动访问这些属性
- **THEN** 参数 SHALL 可通过 Backtest 构造函数覆盖
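金叉/死叉检测与 `next()` 的动作选择可以脱离 backtesting 框架独立示意(假设性草图:`crossed_over` 近似 `backtesting.lib.crossover` "比较最近两个取值"的语义,`next_action` 为示意命名,动作字符串对应规范中的平仓/开仓步骤):

```python
def crossed_over(a, b):
    """判断序列 a 是否刚上穿序列 b比较最近两个取值"""
    return a[-2] <= b[-2] and a[-1] > b[-1]

def next_action(sma_short, sma_long, has_position):
    """金叉买入、死叉卖出;持仓时先平仓再反向开仓(对应规范中 next() 的步骤)"""
    if crossed_over(sma_short, sma_long):
        return 'close_and_buy' if has_position else 'buy'
    if crossed_over(sma_long, sma_short):
        return 'close_and_sell' if has_position else 'sell'
    return 'hold'
```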
---
### Requirement: 策略类指标引用
策略类的 `init()` 方法 SHALL 正确引用 DataFrame 中计算好的指标列。
#### Scenario: 引用 SMA 指标
- **WHEN** DataFrame 包含 `sma10``sma30`
- **THEN** `init()` 方法注册 `self.sma_short = self.I(lambda x: x, self.data.sma10)`
- **THEN** `init()` 方法注册 `self.sma_long = self.I(lambda x: x, self.data.sma30)`
- **THEN** `next()` 方法 SHALL 通过 `self.data.sma10``self.data.sma30` 访问指标
#### Scenario: 引用 MACD 指标
- **WHEN** DataFrame 包含 `macd``macd_signal`
- **THEN** `init()` 方法注册 `self.macd = self.I(lambda x: x, self.data.macd)`
- **THEN** `init()` 方法注册 `self.signal = self.I(lambda x: x, self.data.macd_signal)`
- **THEN** `next()` 方法 SHALL 通过 `self.data.macd``self.data.macd_signal` 访问指标
#### Scenario: 引用 RSI 指标
- **WHEN** DataFrame 包含 `rsi`
- **THEN** `init()` 方法注册 `self.rsi = self.I(lambda x: x, self.data.rsi)`
- **THEN** `next()` 方法 SHALL 通过 `self.data.rsi` 访问指标
- **THEN** 策略逻辑 SHALL 使用 RSI 阈值生成信号(如 RSI > 70 超买)
#### Scenario: 指标列不存在
- **WHEN** 策略类引用的列名不存在于 DataFrame
- **THEN** Backtest 框架抛出 KeyError
- **THEN** 主流程捕获异常并输出错误信息:"指标列 {column} 不存在"
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 动态加载机制
主流程 SHALL 使用 importlib 动态加载策略文件模块。
#### Scenario: 加载顶层策略文件
- **WHEN** 用户指定 `--strategy-file strategy.py`
- **THEN** 系统使用 `spec_from_file_location('strategy', 'strategy.py')` 创建规范
- **THEN** 系统使用 `module_from_spec(spec)` 创建模块对象
- **THEN** 系统使用 `spec.loader.exec_module(module)` 执行模块
- **THEN** 系统成功获取 `module.calculate_indicators``module.get_strategy`
#### Scenario: 加载子目录策略文件
- **WHEN** 用户指定 `--strategy-file strategies/macd_strategy.py`
- **THEN** 系统使用 `spec_from_file_location('strategies.macd_strategy', 'strategies/macd_strategy.py')`
- **THEN** 模块名使用点号分隔(反映目录结构)
- **THEN** 系统成功加载子目录中的策略模块
#### Scenario: 模块命名空间隔离
- **WHEN** 系统动态加载多个策略文件
- **THEN** 每个策略模块 SHALL 有独立的命名空间
- **THEN** 模块间 SHALL 不共享全局变量
- **THEN** 系统通过 `getattr(module, name)` 明确访问函数和类
#### Scenario: 策略文件导入错误
- **WHEN** 策略文件包含语法错误或导入错误
- **THEN** `exec_module()` 抛出 ImportError 或 SyntaxError
- **THEN** 主流程捕获异常
- **THEN** 系统输出错误信息:"策略文件 {file} 加载失败: {error}"
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 策略接口验证
主流程 SHALL 验证策略文件是否符合接口要求。
#### Scenario: 验证 calculate_indicators 存在
- **WHEN** 系统加载策略模块
- **THEN** 系统使用 `hasattr(module, 'calculate_indicators')` 检查函数
- **WHEN** 函数不存在
- **THEN** 系统抛出 AttributeError
- **THEN** 主流程捕获并输出:"策略文件 {file} 缺少 calculate_indicators 函数"
#### Scenario: 验证 get_strategy 存在
- **WHEN** 系统加载策略模块
- **THEN** 系统使用 `hasattr(module, 'get_strategy')` 检查函数
- **WHEN** 函数不存在
- **THEN** 系统抛出 AttributeError
- **THEN** 主流程捕获并输出:"策略文件 {file} 缺少 get_strategy 函数"
#### Scenario: 验证 get_strategy 返回类
- **WHEN** 系统调用 `get_strategy()`
- **THEN** 系统使用 `isinstance(returned, type)` 检查返回值
- **WHEN** 返回值不是类
- **THEN** 系统抛出 TypeError
- **THEN** 主流程捕获并输出:"get_strategy() 必须返回一个类"
#### Scenario: 验证策略类继承
- **WHEN** 系统获取策略类
- **THEN** 系统使用 `issubclass(strategy_class, backtesting.Strategy)` 检查继承
- **WHEN** 策略类未继承 `backtesting.Strategy`
- **THEN** 系统抛出 TypeError
- **THEN** 主流程捕获并输出:"策略类必须继承 backtesting.Strategy"
---
### Requirement: 策略文件示例
系统 SHALL 提供策略模板文件作为开发者参考。
#### Scenario: 提供策略模板
- **WHEN** 用户查看 strategy.py 文件
- **THEN** 文件 SHALL 包含完整的策略示例SMA 双均线交叉)
- **THEN** 文件 SHALL 包含清晰的注释说明每个接口的用途
- **THEN** 文件 SHALL 包含代码示例指标计算函数、get_strategy、策略类
#### Scenario: 策略文件文档
- **WHEN** 策略文件开头有文档字符串
- **THEN** 文档 SHALL 描述策略逻辑
- **THEN** 文档 SHALL 列出需要的指标
- **THEN** 文档 SHALL 说明参数含义(如 `short_period`, `long_period`
#### Scenario: 策略参数说明
- **WHEN** 策略类定义类属性
- **THEN** 每个属性 SHALL 有注释说明(如 `short_period = 10  # 短期均线周期`
- **THEN** 参数 SHALL 使用有意义的名称(不是 param1, param2

View File

@@ -0,0 +1,366 @@
# Tasks: Refactor Backtest Script
## 1. 项目设置和依赖
- [x] 1.1 创建 requirements.txt 文件,列出所有必需的 Python 包pandas, numpy, backtesting, sqlalchemy
- [ ] 1.2 安装项目依赖pip install -r requirements.txt
- [x] 1.3 配置数据库凭证(在 backtest.py 中硬编码)
- 设置 DB_HOST = '81.71.3.24'
- 设置 DB_NAME = 'leopard_dev'
- 设置 DB_USER = 'your_username'
- 设置 DB_PASSWORD = 'your_password'
- 根据实际开发环境修改这些值
---
## 3. 策略模板实现
- [x] 3.1 创建 strategy.py 文件,包含策略模板和示例
- [x] 3.2 实现 calculate_indicators(data) 函数
- 计算 SMA10, SMA30, SMA60, SMA120 指标
- 使用 data['Close'].rolling(window=N).mean() 方法
- 将结果添加到 DataFramedata['sma10'] 等)
- 返回添加了指标列的 DataFrame
- [x] 3.3 实现 get_strategy() 函数
- 返回 SmaCross 类
- 添加函数文档字符串说明用途
- [x] 3.4 实现 SmaCross 策略类
- 继承 backtesting.Strategy
- 定义类属性short_period = 10, long_period = 30
- 实现 init() 方法:使用 self.I() 注册 sma10 和 sma30 指标
- 实现 next() 方法:使用 crossover() 检测金叉和死叉,执行买卖操作
- [x] 3.5 添加详细的代码注释和文档字符串
- 文件开头描述策略逻辑
- 每个函数添加参数和返回值说明
- 策略类参数添加注释(如 short_period 的含义)
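任务 3.2 描述的指标计算可按如下草图实现(假设运行环境已安装 pandas策略类部分因依赖 backtesting 库,此处从略):

```python
import pandas as pd


def calculate_indicators(data: pd.DataFrame) -> pd.DataFrame:
    """用滚动均值计算 SMA10/30/60/120 并写回 DataFrame示意。"""
    for n in (10, 30, 60, 120):
        data[f"sma{n}"] = data["Close"].rolling(window=n).mean()
    return data
```

前 N-1 行的 SMA 值为 NaN这正是后续预热期处理要解决的问题。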
## 4. 策略动态加载功能
- [x] 4.1 在 backtest.py 中实现 load_strategy(strategy_file) 函数
- 使用 importlib.util.spec_from_file_location() 加载模块
- 使用 importlib.util.module_from_spec() 创建模块对象
- 使用 spec.loader.exec_module() 执行模块
- [x] 4.2 实现接口验证逻辑
- 检查模块是否有 calculate_indicators 属性hasattr 检查)
- 检查模块是否有 get_strategy 属性
- 验证 get_strategy() 返回的是类对象isinstance 检查)
- 验证策略类继承自 backtesting.Strategyissubclass 检查)
- [x] 4.3 实现异常处理
- 捕获 FileNotFoundError策略文件不存在
- 捕获 ImportError模块导入失败
- 捕获 AttributeError接口不完整
- 输出清晰的错误信息:"策略文件 {file} 加载失败: {error}"
- [x] 4.4 返回策略组件
- 返回元组:(calculate_indicators 函数, strategy_class)
## 5. 命令行参数解析
- [x] 5.1 实现 parse_arguments() 函数
- 使用 argparse.ArgumentParser 创建解析器
- 添加 --code 参数必需help: 股票代码)
- 添加 --start-date 参数必需help: 回测开始日期)
- 添加 --end-date 参数必需help: 回测结束日期)
- 添加 --strategy-file 参数必需help: 策略文件路径)
- 添加 --cash 参数可选default=100000help: 初始资金)
- 添加 --commission 参数可选default=0.002help: 手续费率)
- 添加 --output 参数可选help: HTML 输出文件路径)
- 添加 --warmup-days 参数可选default=365help: 预热天数,默认一年)
- [x] 5.2 实现参数验证
- 检查日期格式YYYY-MM-DD使用 datetime.strptime() 验证
- 检查策略文件是否存在os.path.isfile()
- 验证数值参数为正数cash, commission
- [x] 5.3 添加友好的错误提示
- 参数错误时显示帮助信息
- 日期格式错误时提示正确格式
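任务 5.1/5.2 的参数解析与校验可草拟如下(为便于独立运行,省略了 `os.path.isfile` 的策略文件存在性检查):

```python
import argparse
from datetime import datetime


def parse_arguments(argv=None):
    """解析并校验命令行参数(示意实现)。"""
    parser = argparse.ArgumentParser(description="股票回测工具")
    parser.add_argument("--code", required=True, help="股票代码")
    parser.add_argument("--start-date", required=True, help="回测开始日期 (YYYY-MM-DD)")
    parser.add_argument("--end-date", required=True, help="回测结束日期 (YYYY-MM-DD)")
    parser.add_argument("--strategy-file", required=True, help="策略文件路径")
    parser.add_argument("--cash", type=float, default=100000, help="初始资金")
    parser.add_argument("--commission", type=float, default=0.002, help="手续费率")
    parser.add_argument("--output", help="HTML 输出文件路径")
    parser.add_argument("--warmup-days", type=int, default=365, help="预热天数,默认一年")
    args = parser.parse_args(argv)
    # 日期格式校验:格式错误时提示正确格式并退出
    for d in (args.start_date, args.end_date):
        try:
            datetime.strptime(d, "%Y-%m-%d")
        except ValueError:
            parser.error(f"日期格式错误: {d},正确格式为 YYYY-MM-DD")
    if args.cash <= 0 or args.commission <= 0:
        parser.error("cash 与 commission 必须为正数")
    return args
```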
## 6. 结果输出功能
- [x] 6.1 实现 print_stats(stats) 函数
- 创建 INDICATOR_MAPPING 字典(英文键 → 中文标签)
- 遍历 stats 对象的键值对
- 使用中文标签格式化输出
- [x] 6.2 实现格式化逻辑
- 实现 format_value(value, cn_name, key) 辅助函数
- 百分比和比率类值保留 2 位小数
- 金额类值保留 2 位小数
- 次数类值取整
- 其他值保留 4 位小数
- [x] 6.3 添加输出格式化
- 输出标题:"回测结果"(使用 "=" * 60 分隔)
- 每个指标独占一行
- 确保中英文对齐美观
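任务 6.2 的 `format_value` 辅助函数可按如下示意实现(键名判断规则为假设,实际应以 backtesting 返回的 stats 键名为准):

```python
def format_value(value, key: str) -> str:
    """按指标类型格式化数值(示意)。"""
    if not isinstance(value, (int, float)):
        return str(value)
    if key == "# Trades":               # 次数类:取整
        return str(int(value))
    if "[%]" in key or "Ratio" in key:  # 百分比 / 比率:保留 2 位小数
        return f"{value:.2f}"
    if key.startswith("Equity"):        # 金额类:保留 2 位小数
        return f"{value:.2f}"
    return f"{value:.4f}"               # 其他:保留 4 位小数
```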
## 7. 主流程编排
- [x] 7.1 实现 main() 函数,编排完整流程
- 调用 parse_arguments() 解析参数
- 调用 load_data_from_db() 加载数据
- 调用 load_strategy() 加载策略
- 调用 calculate_indicators() 计算指标
- 创建 Backtest 对象并执行
- 调用 print_stats() 输出结果
- [x] 7.2 添加进度提示信息
- 数据加载前:输出 "加载股票数据: {code} ({start_date} ~ {end_date})"
- 数据加载后:输出 "数据加载完成,共 {N} 条记录"
- 策略加载前:输出 "加载策略: {strategy_file}"
- 指标计算后:输出 "指标计算完成"
- 回测开始:输出 "开始回测..."
- 回测完成:输出 "回测完成!"
- [x] 7.3 实现回测执行
- 使用 Backtest(data, strategy_class, cash=args.cash, commission=args.commission, finalize_trades=True)
- 调用 bt.run() 执行回测
- 保存返回的 stats 对象
## 8. HTML 图表生成
- [x] 8.1 实现可选的图表生成逻辑
- 检查 args.output 参数是否指定
- 仅当指定时才调用 bt.plot()
- [x] 8.2 生成 HTML 图表文件
- 使用 bt.plot(filename=args.output, show=False) 生成文件
- show=False 确保在无头环境中也能生成
- 输出提示:"图表已保存到: {filepath}"
- [x] 8.3 添加异常处理
- 捕获图表生成异常
- 输出警告:"图表生成失败,但回测已完成: {error}"
- 不影响统计信息的正常输出
- 确保主流程正常退出(状态码 0
## 9. 全局错误处理
- [x] 9.1 在 main() 函数外层添加 try-except
- 捕获所有未预期的异常
- 输出错误信息和堆栈跟踪traceback.print_exc()
- 使用非零状态码退出
- [x] 9.2 实现特定错误的状态码映射
- 数据库错误:状态码 2
- 文件操作错误:状态码 3
- 参数错误:状态码 4
- 其他错误:状态码 1
- [x] 9.3 添加 `if __name__ == '__main__':` 入口
- 调用 main() 函数
- 确保脚本可直接执行和作为模块导入
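任务 9.2 的状态码映射可用一个辅助函数示意(`exit_code_for` 为假设命名;数据库异常的具体类型取决于所用驱动,这里以 ConnectionError 代表):

```python
def exit_code_for(exc: BaseException) -> int:
    """将异常类型映射为进程退出状态码(示意)。
    注意 ConnectionError 是 OSError 的子类,必须先判断。"""
    if isinstance(exc, ConnectionError):  # 数据库错误
        return 2
    if isinstance(exc, OSError):          # 文件操作错误
        return 3
    if isinstance(exc, ValueError):       # 参数错误
        return 4
    return 1                              # 其他错误
```

main() 外层的 try-except 捕获异常后,可调用 `sys.exit(exit_code_for(exc))` 退出。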
---
## 10. 文档和示例
- [ ] 10.1 创建 README.md 文档(可选)
- 说明项目用途和功能
- 提供安装步骤pip install -r requirements.txt
- 提供使用示例(基础用法、自定义参数、不同策略)
- 说明策略文件接口规范
- 说明环境变量配置DB_USER, DB_PASSWORD
- [ ] 10.2 添加内联文档到 backtest.py
- 文件开头添加模块文档字符串
- 说明命令行参数和用法
- 提供使用示例
- [ ] 10.3 添加使用示例到 README
```bash
# 基础用法
python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategy.py
# 自定义参数
python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategy.py --cash 500000 --commission 0.001 --output result.html
```
---
## 11. 测试和验证
- [ ] 11.1 测试基础回测流程
- 执行 `python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategy.py`
- 验证数据加载成功
- 验证策略加载成功
- 验证回测执行成功
- 验证统计信息输出正确
- [ ] 11.2 测试 HTML 图表生成
- 执行带 `--output` 参数的命令
- 验证 HTML 文件成功生成
- 验证图表内容正确(价格曲线、资金曲线等)
- [ ] 11.3 测试错误处理
- 测试无效股票代码(应提示未找到数据)
- 测试无效日期格式(应提示格式错误)
- 测试策略文件不存在(应提示文件不存在)
- 测试数据库连接失败(应提示连接错误)
- 测试策略接口不完整(应提示缺少函数)
- [ ] 11.4 测试不同策略
- 创建 strategies/macd_strategy.py
- 使用新策略执行回测
- 验证动态加载功能正常
- [ ] 11.5 验证输出格式
- 检查控制台输出使用中文标签
- 检查数值格式化正确(小数位数)
- 检查 HTML 文件可正常打开
---
## 12. 代码质量检查
- [ ] 12.1 运行代码检查工具(可选)
- 使用 pylint 或 flake8 检查代码风格
- 修复警告和错误
- [ ] 12.2 验证依赖版本兼容性
- 检查 backtesting 库版本兼容性
- 检查 pandas 和 numpy 版本要求
- [ ] 12.3 最终代码审查
- 对照设计文档检查实现是否完整
- 对照规范文档检查所有场景是否覆盖
- 确保代码遵循设计决策

View File

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-01-27

View File

@@ -0,0 +1,203 @@
## Context
当前项目基于`backtesting`库构建量化回测框架现有策略为SMA双均线交叉策略`strategies/sma_strategy.py`。用户需要新增基于MACD的趋势跟踪策略适配A股市场特性。
**当前状态**:
- 回测框架已就绪(`backtest.py`支持动态加载策略)
- 现有SMA策略作为参考模板
- 策略文件需要遵循固定模式:`calculate_indicators()`、`get_strategy()`、Strategy类
- 无风险管理要求,无需实现止损、仓位管理等复杂逻辑
**依赖环境**:
- Python 3.x
- pandas (已安装)
- backtesting库已安装
- ta-lib依赖已手动安装完成
## Goals / Non-Goals
**Goals:**
- 创建`strategies/macd_strategy.py`实现MACD趋势跟踪策略
- 使用ta-lib库简化MACD和EMA200指标计算
- 实现MACD金叉/死叉 + EMA200趋势过滤的交易信号
- 保持策略文件独立性,无需修改`backtest.py`
- 支持通过`--strategy-file`参数加载新策略
**Non-Goals:**
- 不实现风险管理功能(止损、止盈、仓位管理)
- 不支持多股票组合回测
- 不修改现有SMA策略
- 不实现命令行参数配置(所有参数固定在策略文件中)
## Decisions
### D1: 指标计算库选择
**决策**: 使用`ta-lib`而非原生pandas或pandas-ta
**理由**:
- ta-lib是技术分析领域的事实标准性能优异
- C语言实现计算速度快适合大量指标计算
- API简洁直观广泛用于量化交易系统
- 文档完善,社区支持广泛
- 与pandas集成良好可直接传入Series
**考虑的替代方案**:
- **原生pandas**: 实现简单但需手写EMA计算代码冗长
- **pandas-ta**: API设计现代但性能不如ta-lib且安装依赖较多
### D2: MACD参数配置
**决策**: 使用`(10, 20, 9)`参数组合(平衡型)
**理由**:
- 快线10比标准12更敏感适应A股较高波动性
- 慢线20比标准26更快响应同时保持趋势跟踪稳定性
- 信号线9保持标准避免信号过于频繁
- 该组合在多数A股市场环境下回测表现稳定
- 10-20的组合接近斐波那契数列在技术分析流派中认可度较高
**参数优化依据**:
- A股波动率高需要相对敏感的快线参数
- T+1交易规则避免过于激进的参数减少假信号
- 散户追涨杀跌结合趋势过滤EMA200避免逆势交易
- 平衡策略:兼顾信号及时性和稳定性
### D3: 趋势过滤器选择
**决策**: 使用EMA200作为趋势确认
**理由**:
- 200日均线被广泛认可为牛熊分界线
- EMA比SMA更平滑减少假突破
- 与MACD配合MACD捕捉动量转折EMA200确认趋势方向
- 机构投资者常用大资金使用200日线作为战略配置参考
- 在A股市场验证结合EMA200可显著减少震荡市中的假信号
**交易逻辑**:
- **买入条件**: MACD金叉 AND 价格 > EMA200
- **卖出条件**: MACD死叉 OR 价格 < EMA200
### D4: 策略行为模式
**决策**: EMA200双向过滤跌破EMA200强制卖出
**理由**:
- 避免在趋势转向后继续持有
- EMA200跌破通常预示趋势反转及时止损保护利润
- 比仅入场过滤更严格,但风险控制更好
**替代方案(未采用)**:
- **仅入场过滤**: EMA200仅用于确认买入卖出仅依赖MACD死叉
- 优点: 交易次数更多,可能捕捉更多小波段
- 缺点: 在趋势反转时可能持有过久,回撤较大
- **动态参数**: 根据市场波动率动态调整MACD参数
- 优点: 适应不同市场环境
- 缺点: 实现复杂,超出当前需求范围
### D5: 策略文件结构
**决策**: 严格遵循现有`strategy.py`模式
**理由**:
- 保持代码一致性,便于维护
- 无需修改`backtest.py`(已验证可动态加载)
- 其他策略可参考相同模式开发
**文件模式**:
```python
# 必需导入(示意)
from backtesting import Strategy

# 必需函数
def calculate_indicators(data):
    """计算所需指标返回DataFrame"""
    pass

def get_strategy():
    """返回策略类"""
    pass

# 必需类
class MacdTrendStrategy(Strategy):
    """策略类"""

    # 可配置参数(固定)
    fast_period = 10
    slow_period = 20
    signal_period = 9

    def init(self):
        """注册指标到backtesting框架"""
        pass

    def next(self):
        """每个时间步的决策逻辑"""
        pass
```
### D6: 指标计算时机
**决策**: 在`calculate_indicators()`中计算所有指标
**理由**:
- 指标计算与策略逻辑分离,代码清晰
- backtesting框架在加载策略前调用`calculate_indicators()`
- 数据预处理在策略初始化前完成,提高性能
- 便于回测时查看完整指标数据
**替代方案(未采用)**:
- 在Strategy.init()中动态计算指标
- 优点: 数据与策略逻辑更紧密
- 缺点: 回测时无法提前查看指标,调试困难
## Risks / Trade-offs
### R1: ta-lib安装依赖
**风险**: 用户环境可能未安装ta-lib其原生C库需单独编译安装
**缓解**:
- ta-lib已手动安装无需在依赖管理中重复添加
- 提供清晰的错误提示如遇ModuleNotFoundError提示安装方法
### R2: 参数固定性
**风险**: 无法通过命令行调整参数,灵活性降低
**缓解**:
- 参数基于A股市场研究具有通用性
- 如需调整,可直接修改策略文件参数值
- 在代码注释中明确参数含义和调整建议
### R3: 无风险控制机制
**风险**: 在强趋势反转时可能出现较大回撤
**缓解**:
- EMA200趋势过滤已提供一定保护
- 如未来需要风险控制,可在`next()`方法中添加止损逻辑
- 当前设计满足"不考虑风险管理"的需求
### R4: 震荡市假信号
**风险**: MACD在横盘震荡市中易产生频繁假信号
**缓解**:
- EMA200趋势过滤可减少震荡市中的交易频率
- 选择相对保守的参数10-20而非8-17避免过于敏感
- 研究表明,零轴过滤和趋势过滤可显著降低震荡市损失
### R5: 策略滞后性
**风险**: 基于EMA的指标天然滞后可能错过趋势初期
**缓解**:
- 平衡型参数10-20-9在及时性和稳定性间取得平衡
- 滞后性是趋势指标的固有特性,无法完全消除
- 如需更及时信号可考虑更小参数组合8-17-7
## Migration Plan
无需迁移步骤,新策略文件完全独立,不影响现有功能。
## Open Questions
无 - 所有设计决策已明确。

View File

@@ -0,0 +1,34 @@
## Why
当前项目仅包含SMA双均线交叉策略`strategies/sma_strategy.py`需要引入基于MACD的趋势跟踪策略。MACD作为经典动量指标结合EMA200趋势过滤在A股市场表现优异能更准确地捕捉趋势启动点和反转信号。
## What Changes
- 创建 `strategies/macd_strategy.py` - 新增MACD趋势跟踪策略文件
- 实现MACD指标计算 - 使用ta-lib库计算MACD(10,20,9)指标和EMA200趋势线
- 实现策略交易逻辑 - MACD金叉/死叉信号 + EMA200趋势确认
- 保持策略文件独立性 - 按照现有`strategy.py`模式实现calculate_indicators、get_strategy、Strategy类
- 创建strategies目录 - 用于统一管理所有策略脚本
## Capabilities
### New Capabilities
- `macd-trading`: MACD趋势跟踪策略包含MACD指标计算、EMA200趋势过滤、以及基于金叉/死叉的交易信号生成
### Modified Capabilities
## Impact
**依赖变化**:
- ta-lib已手动安装用于技术指标计算
**代码影响**:
- 不需要修改现有代码(`backtest.py`无需改动,策略文件模式保持一致)
- 策略目录扩展至2个策略文件
- 可通过`--strategy-file`参数切换使用SMA或MACD策略
**系统影响**:
- 回测框架保持不变
- 现有SMA策略完全不受影响
- 可通过backtest.py的标准接口加载MACD策略

View File

@@ -0,0 +1,134 @@
## ADDED Requirements
### Requirement: MACD趋势跟踪策略
系统应提供基于MACD指标的趋势跟踪交易策略包括MACD计算、EMA200趋势过滤、以及基于金叉/死叉的交易信号生成。
#### Scenario: 策略文件加载
- **WHEN** 用户在命令行指定`--strategy-file strategies/macd_strategy.py`
- **THEN** backtest.py成功加载策略文件并执行回测
- **AND** 策略类正确注册所有技术指标到backtesting框架
- **AND** 策略逻辑根据MACD金叉/死叉和EMA200位置生成交易信号
#### Scenario: MACD指标计算
- **WHEN** 调用`calculate_indicators(data)`函数,传入包含[Open, High, Low, Close, Volume, factor]的DataFrame
- **THEN** 函数使用ta-lib计算以下指标并添加到DataFrame
- MACD线DIF: 10日EMA - 20日EMA
- MACD信号线DEA: MACD线DIF的9日EMA
- MACD柱状图Histogram: MACD线 - 信号线
- EMA200: 200日指数移动平均线
- **AND** 返回包含原始数据和所有新增指标的DataFrame
- **AND** 指标列名采用`MACD_10_20_9`、`MACDs_10_20_9`、`MACDh_10_20_9`约定(与`init()`中注册的名称一致talib.MACD本身返回数组列名需自行指定
#### Scenario: 策略初始化
- **WHEN** backtesting框架初始化MacdTrendStrategy策略类
- **THEN** 调用`init()`方法
- **AND** 在`init()`中通过`self.I()`注册以下指标到backtesting框架
- MACD线`self.data.MACD_10_20_9`
- MACD信号线`self.data.MACDs_10_20_9`
- EMA200`self.data.EMA_200`
- **AND** 所有参数fast_period=10、slow_period=20、signal_period=9在策略类中定义为类变量
- **AND** 注册的指标可直接在`next()`方法中访问
#### Scenario: MACD金叉买入信号
- **WHEN** 策略检测到MACD线上穿信号线金叉
- **AND** 当前价格高于EMA200趋势线确认上升趋势
- **AND** 当前无持仓或持仓方向与买入信号相反
- **THEN** 策略平掉现有仓位(如有)
- **AND** 策略开多仓(`self.buy()`
- **AND** 在趋势市场下捕捉上涨机会
#### Scenario: EMA200跌破卖出信号
- **WHEN** 策略检测到当前价格跌破EMA200趋势线
- **AND** 当前持有多仓
- **THEN** 策略平掉多仓(`self.position.close()`
- **AND** 不开空仓(仅平仓,避免逆势交易)
- **AND** 在趋势转向时及时止损保护利润
#### Scenario: MACD死叉卖出信号
- **WHEN** 策略检测到MACD线下穿信号线死叉
- **AND** 当前持有多仓
- **THEN** 策略平掉多仓(`self.position.close()`
- **AND** 不开空仓
- **AND** 在动量减弱时退出持仓
#### Scenario: EMA200下方不开仓
- **WHEN** 当前价格低于EMA200趋势线
- **AND** 检测到MACD金叉信号
- **THEN** 策略不执行买入操作
- **AND** 避免在下跌趋势中逆势交易
- **AND** 等待价格回到EMA200上方再考虑入场
#### Scenario: 空仓状态处理
- **WHEN** 策略当前无持仓
- **AND** 检测到卖出信号MACD死叉或EMA200跌破
- **THEN** 策略跳过卖出信号
- **AND** 避免重复平仓导致错误
#### Scenario: 震荡市场过滤
- **WHEN** 市场处于震荡状态价格围绕EMA200波动
- **AND** MACD产生频繁的假金叉/死叉信号
- **THEN** EMA200趋势过滤减少交易频率
- **AND** 避免在无明确趋势时频繁交易
- **AND** 等待趋势明确后再入场
#### Scenario: 趋势市场顺势交易
- **WHEN** 市场处于明确上升趋势价格持续在EMA200上方
- **AND** MACD金叉确认动量增强
- **THEN** 策略及时入场捕捉上涨机会
- **AND** 顺势交易提高胜率
- **AND** EMA200确保不在下跌趋势中买入
#### Scenario: 参数配置
- **WHEN** 用户查看策略代码
- **THEN** 策略参数清晰定义为类变量:
- `fast_period = 10`MACD快线周期
- `slow_period = 20`MACD慢线周期
- `signal_period = 9`MACD信号线周期
- **AND** 参数无需通过命令行传递
- **AND** 参数可直接在代码中修改以适配不同市场环境
#### Scenario: 依赖管理
- **WHEN** 安装项目依赖
- **THEN** ta-lib库已被正确安装手动安装
- **AND** `uv run python -c "import talib"`成功执行
- **AND** 策略文件可正常运行
- **AND** 如ta-lib未安装给出明确错误提示
#### Scenario: 回测兼容性
- **WHEN** 使用现有backtest.py框架
- **THEN** 框架通过`load_strategy()`函数成功加载macd_strategy.py
- **AND** 调用`calculate_indicators()`预处理数据
- **AND** 初始化策略类并执行回测
- **AND** 回测流程与SMA策略完全一致
#### Scenario: 指标数据完整性
- **WHEN** backtesting调用`calculate_indicators(data)`
- **THEN** 返回的DataFrame包含所有必需列
- 原始列:[Open, High, Low, Close, Volume, factor]
- MACD指标列[MACD_10_20_9, MACDh_10_20_9, MACDs_10_20_9]
- EMA趋势线列[EMA_200]
- **AND** 无NaN值除预热期外
- **AND** 指标数据可用于策略决策和图表展示
#### Scenario: 预热期处理
- **WHEN** 数据长度不足以计算完整指标前200天
- **THEN** 指标值为NaN
- **AND** backtesting框架会自动跳过预热期
- **AND** 策略逻辑在有足够数据后才执行
- **AND** 避免因数据不足导致的错误信号

View File

@@ -0,0 +1,81 @@
## 1. 环境准备
- [x] 1.1 安装ta-lib依赖包已完成手动安装
- [x] 1.2 验证ta-lib安装成功`uv run python -c "import talib"`无报错)
## 2. 目录结构
- [x] 2.1 确认strategies目录存在如不存在则创建
- [x] 2.2 移动现有strategy.py到strategies/sma_strategy.py
- [x] 2.3 验证文件移动成功且可正常导入
## 3. MACD策略文件创建
- [x] 3.1 创建strategies/macd_strategy.py文件
- [x] 3.2 添加文件头部文档(策略说明、作者、日期)
- [x] 3.3 添加必要的导入语句pandas、backtesting、talib、crossover
- [x] 3.4 定义calculate_indicators()函数签名
- [x] 3.5 定义get_strategy()函数
- [x] 3.6 定义MacdTrendStrategy类框架
## 4. 指标计算实现
- [x] 4.1 在calculate_indicators()中使用ta-lib计算MACD指标
- [x] 4.1.1 调用`talib.MACD(data['Close'], fastperiod=10, slowperiod=20, signalperiod=9)`
- [x] 4.1.2 验证MACD返回3列MACD线、信号线、柱状图
- [x] 4.1.3 计算EMA200趋势线`talib.EMA(data['Close'], timeperiod=200)`
- [x] 4.1.4 返回包含所有指标的完整DataFrame
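MACD(10, 20, 9)的计算原理可用纯Python草图说明与talib.MACD的输出在EMA种子与预热期NaN处理上略有差异仅展示公式本身

```python
def ema(values, period):
    """指数移动平均k = 2/(period+1)。首值用第一个数据点初始化,
    与talib的SMA种子做法略有不同仅作原理示意。"""
    k = 2 / (period + 1)
    out = [float(values[0])]
    for v in values[1:]:
        out.append(v * k + out[-1] * (1 - k))
    return out


def macd(close, fast=10, slow=20, signal=9):
    """返回 (DIF, DEA, 柱状图)对应talib.MACD的三个输出数组。"""
    dif = [f - s for f, s in zip(ema(close, fast), ema(close, slow))]
    dea = ema(dif, signal)          # DEA = DIF的signal日EMA
    hist = [d - e for d, e in zip(dif, dea)]
    return dif, dea, hist
```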
## 5. 策略类实现
- [x] 5.1 在MacdTrendStrategy类中定义可配置参数
- [x] 5.1.1 fast_period = 10
- [x] 5.1.2 slow_period = 20
- [x] 5.1.3 signal_period = 9
- [x] 5.2 实现init()方法
- [x] 5.2.1 使用self.I()注册MACD线self.data.MACD_10_20_9
- [x] 5.2.2 使用self.I()注册MACD信号线self.data.MACDs_10_20_9
- [x] 5.2.3 使用self.I()注册EMA200self.data.EMA_200
- [x] 5.2.4 验证所有指标正确注册
- [x] 5.3 实现next()方法交易逻辑
- [x] 5.3.1 导入crossover函数用于检测金叉/死叉
- [x] 5.3.2 实现买入条件crossover(MACD, Signal) AND Close > EMA200
- [x] 5.3.3 实现卖出条件crossover(Signal, MACD) OR Close < EMA200
- [x] 5.3.4 处理空仓状态(避免重复平仓)
- [x] 5.3.5 确保开仓前先平掉现有仓位
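任务5.3的买卖条件可用如下布尔逻辑草图表达`crossover`为与`backtesting.lib.crossover`语义近似的手写版,`next_action`为假设命名的纯函数,便于脱离框架验证信号逻辑):

```python
def crossover(a, b) -> bool:
    """a是否在最近一根K线上穿b前一值 a<=b当前值 a>b。"""
    return a[-2] <= b[-2] and a[-1] > b[-1]


def next_action(macd_line, signal_line, close, ema200, has_position: bool):
    """返回 'buy'、'close' 或 None长线策略不卖空示意。"""
    price = close[-1]
    # 死叉或跌破EMA200且持仓平仓空仓时跳过卖出信号
    if has_position and (crossover(signal_line, macd_line) or price < ema200[-1]):
        return "close"
    # 金叉且价格在EMA200上方开多EMA200下方不开仓
    if not has_position and crossover(macd_line, signal_line) and price > ema200[-1]:
        return "buy"
    return None
```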
## 6. 代码验证
- [x] 6.1 检查Python语法正确性无语法错误
- [x] 6.2 验证导入语句正确(所有依赖正确导入)
- [x] 6.3 检查类继承自Strategy
- [x] 6.4 检查策略文件结构符合SMA策略模式
## 7. 回测兼容性验证
- [x] 7.1 使用backtest.py加载macd_strategy.py`uv run python backtest.py --strategy-file strategies/macd_strategy.py`
- [x] 7.2 验证策略文件成功加载无报错
- [x] 7.3 执行简单回测(如测试股票、测试日期范围)
- [x] 7.4 验证回测结果输出正常
## 8. 文档和注释
- [x] 8.1 在文件头部添加清晰的策略说明文档
- [x] 8.2 在关键逻辑处添加代码注释
- [x] 8.3 说明MACD参数选择理由10-20-9组合
- [x] 8.4 说明EMA200趋势过滤原理
- [x] 8.5 说明买入/卖出信号条件
## 9. 可选验证任务
- [ ] 9.1 对比MACD策略与SMA策略的回测结果
- [ ] 9.2 测试不同参数组合的性能如8-17-7、12-26-9
- [ ] 9.3 验证EMA200过滤对回撤的影响
- [ ] 9.4 测试不同市场环境(牛市、熊市、震荡市)下的表现
## 10. 完成
- [x] 10.1 所有核心功能实现完成
- [x] 10.2 代码质量符合Python最佳实践
- [x] 10.3 策略可被backtest.py正常加载和执行
- [x] 10.4 回测结果符合预期(策略逻辑正确执行)

View File

@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-01-28

View File

@@ -0,0 +1,287 @@
## Context
**Current State:**
`backtest.py` (284 lines) 是单一文件,包含:
- 命令行参数解析
- 数据库连接和数据加载
- 策略动态加载和验证
- 回测执行逻辑
- 结果格式化输出
- 图表生成
**Constraints:**
- 需要保持现有功能完整性(数据加载、策略加载、回测执行、结果展示)
- 需要支持多股票回测(串行执行)
- 不考虑并发实现(保持简单)
- 错误处理采用立即失败策略
- 数据库配置明文存储,不考虑环境变量
**Stakeholders:**
- 开发者:需要清晰的模块划分和可复用的接口
- 终端用户:需要友好的 CLI 输出(进度条、表格化结果)
## Goals / Non-Goals
**Goals:**
1. 分离核心逻辑与 CLI 界面,提升代码复用性
2. 提供标准化函数接口,供其他模块调用回测功能
3. 支持多股票批量回测(串行执行)
4. 集中管理配置(数据库、参数、配色)
5. 优化 CLI 输出体验tabulate 表格化、tqdm 进度条)
**Non-Goals:**
1. 并行执行多股票回测(性能优化非目标)
2. 环境变量管理配置(配置明文存储即可)
3. 复杂的聚合统计(仅单股票结果拼接)
4. 图表文件合并(每个股票生成独立 HTML
5. 配置文件热重载(启动时加载一次)
## Decisions
### Decision 1: 三层模块划分
**选择:** 分离为 `config.py`、`backtest_core.py`、`backtest_command.py` 三个文件
**理由:**
- **config.py**:集中管理所有配置,避免硬编码分散
- **backtest_core.py**:纯粹的业务逻辑,提供可复用的函数接口
- **backtest_command.py**CLI 界面,负责参数解析和结果展示
**替代方案:**
- 方案 A保留单一文件但改进内部结构函数分离
- 拒绝理由仍无法复用CLI 和业务逻辑耦合
- 方案 B使用类封装`BacktestEngine` 类)
- 拒绝理由:增加复杂度,函数接口已足够
### Decision 2: BacktestResult 数据类
**选择:** 使用 `dataclasses.dataclass` 定义 `BacktestResult`
**理由:**
- 结构化返回结果,便于序列化和导出
- 类型提示支持,提升代码可读性
- 自动生成 `__init__``__repr__` 等方法,减少样板代码
**替代方案:**
- 方案 A直接返回原始 `stats` 对象backtesting 库返回)
- 拒绝理由:依赖 backtesting 库内部结构,耦合度高
- 方案 B返回字典
- 拒绝理由:缺乏类型提示,容易拼写错误
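`BacktestResult` 数据类的一个最小草图如下(仅列出部分字段,完整字段见 spec`to_dict()` 通过 `dataclasses.asdict` 实现,并非 dataclass 自动生成):

```python
from dataclasses import dataclass, asdict


@dataclass
class BacktestResult:
    """结构化回测结果(字段为示意性子集)。"""
    code: str
    start_date: str
    end_date: str
    return_pct: float
    win_rate: float
    max_drawdown: float
    trades: int
    sqn: float

    def to_dict(self) -> dict:
        # dataclass 不会自动生成 to_dict这里显式包装 asdict
        return asdict(self)
```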
### Decision 3: 批量回测策略
**选择:** 串行执行(`for` 循环),立即失败
**理由:**
- 简单可靠,易于调试
- 错误处理清晰(第一个失败就停止)
- 避免并发带来的资源竞争和复杂度
**替代方案:**
- 方案 A并行执行ThreadPoolExecutor
- 拒绝理由:性能非目标,并发增加复杂度
- 方案 B继续执行其他股票最后统一报告错误
- 拒绝理由:用户需求是立即失败
### Decision 4: CLI 参数设计
**选择:** `--codes` 多值参数(`nargs='+'``--output-dir` 目录参数
**理由:**
- `--codes` 支持传入多个股票代码,如 `--codes 000001.SZ 600000.SH`
- `--output-dir` 为每个股票生成 `{code}.html`,如 `output/000001.SZ.html`
- 保持原有参数(`--start-date`、`--end-date`、`--strategy-file`、`--cash`、`--commission`、`--warmup-days`
**替代方案:**
- 方案 A`--code` 逗号分隔(如 `--code 000001.SZ,600000.SH`
- 拒绝理由:需要额外解析逻辑,不直观
- 方案 B`--code` 多次调用(如 `--code 000001.SZ --code 600000.SH`
- 拒绝理由argparse 的 `nargs='+'` 更符合习惯
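`nargs='+'` 的行为可以用一个最小示例验证:

```python
import argparse

parser = argparse.ArgumentParser()
# nargs='+' 接受一个或多个值,解析结果始终是列表
parser.add_argument("--codes", nargs="+", required=True, help="一个或多个股票代码")
parser.add_argument("--output-dir", help="图表输出目录")

multi = parser.parse_args(["--codes", "000001.SZ", "600000.SH"])
single = parser.parse_args(["--codes", "000001.SZ"])
# multi.codes == ['000001.SZ', '600000.SH']single.codes == ['000001.SZ']
```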
### Decision 5: 输出优化库
**选择:** 使用 `tabulate` 表格化批量结果,使用 `tqdm` 显示进度条
**理由:**
- **tabulate**提供美观的表格输出支持多种格式grid、simple 等)
- **tqdm**:提供实时进度条,提升用户体验
- 两个库都是轻量级,不引入复杂依赖
**替代方案:**
- 方案 A手动格式化表格字符串拼接
- 拒绝理由:代码冗余,格式不够美观
- 方案 B不使用进度条仅输出完成提示
- 拒绝理由:多股票回测耗时较长,用户需要进度反馈
### Decision 6: 结果展示策略
**选择:** 单股票使用详细格式(现有),多股票使用表格格式(新增)
**理由:**
- 单股票:保持原有的详细输出(每个指标单独一行)
- 多股票:使用 `tabulate` 表格横向对比,节省垂直空间
**替代方案:**
- 方案 A所有情况都使用详细格式拼接
- 拒绝理由:多股票时输出过长,难以阅读
- 方案 B所有情况都使用表格格式
- 拒绝理由:单股票时表格优势不明显,详细格式更清晰
### Decision 7: 配置管理方式
**选择:** 明文常量存储在 `config.py`
**理由:**
- 满足用户需求(不考虑信息安全)
- 避免引入 `python-dotenv` 依赖
- 代码简洁,修改直接
**替代方案:**
- 方案 A环境变量`os.getenv`
- 拒绝理由:用户明确不需要
- 方案 B配置文件JSON/YAML
- 拒绝理由:增加文件管理和解析复杂度
### Decision 8: 数据访问接口
**选择:** `load_data_from_db(code, start_date, end_date)` 函数签名保持不变
**理由:**
- 现有接口已满足需求(单次查询一个股票)
- 迁移成本低,直接复制到 `backtest_core.py`
**替代方案:**
- 方案 A批量查询`load_data_from_db(codes, start_date, end_date)`
- 拒绝理由:需要修改 SQL 为 `IN` 子句,且结果聚合复杂
- 方案 B连接池复用全局 engine 对象)
- 拒绝理由:每次创建引擎的开销可接受(串行执行)
### Decision 9: 策略加载接口
**选择:** `load_strategy(strategy_file)` 返回 `(calculate_indicators, strategy_class)` 元组
**理由:**
- 保持现有接口,迁移成本低
- 函数返回两个值符合 Python 惯例
**替代方案:**
- 方案 A返回类对象策略类自带指标计算方法
- 拒绝理由:现有策略文件结构分离了两者,修改成本高
- 方案 B返回命名空间对象封装两个属性
- 拒绝理由:增加复杂度,元组足够
### Decision 10: 错误处理策略
**选择:** 立即失败(不捕获部分错误继续执行)
**理由:**
- 符合用户需求
- 简化错误追踪(第一个错误直接暴露)
- 避免"部分成功"的歧义状态
**替代方案:**
- 方案 A捕获错误但继续执行最后统一报告
- 拒绝理由:用户明确要求立即失败
## Risks / Trade-offs
### Risk 1: CLI 命令变化导致用户习惯中断
**风险:** 用户习惯使用 `python backtest.py`,需要切换到 `uv run python backtest_command.py`
**缓解:**
- 在项目根目录创建软链接 `backtest.py -> backtest_command.py`(可选)
- 或在 README 中明确说明新的使用方式
- 提供迁移指南(参数变化说明)
### Risk 2: 多股票串行执行耗时较长
**风险:** 10 个股票可能需要 10 倍时间(每个 30 秒 → 总计 5 分钟)
**缓解:**
- 使用 `tqdm` 进度条提供实时反馈
- 在 README 中说明性能限制
- 未来可扩展为并行执行(非当前目标)
### Risk 3: BacktestResult 字段可能与 backtesting 库不兼容
**风险:** backtesting 库升级后stats 对象的键名可能变化
**缓解:**
- 使用 `.get(key, default)` 方法访问,避免 KeyError
- 提供默认值0 或空字符串)
- 在文档中说明依赖的 backtesting 版本
### Risk 4: tabulate/tqdm 依赖未安装
**风险:** 用户运行时缺少依赖,导致 ImportError
**缓解:**
- 使用 `uv add` 明确添加依赖到 pyproject.toml
- 在 README 中说明安装步骤
- 错误信息中提示安装命令(`uv add tabulate tqdm`
### Risk 5: 策略文件路径处理不一致
**风险:** 策略文件路径可能是相对路径或绝对路径,导致加载失败
**缓解:**
- 使用 `os.path.abspath()` 转换为绝对路径
- 在错误信息中提示用户检查路径
- 测试相对路径和绝对路径两种情况
### Risk 6: 图表输出目录不存在
**风险:** 用户指定的 `--output-dir` 不存在,导致保存失败
**缓解:**
- 使用 `os.makedirs(output_dir, exist_ok=True)` 自动创建
- 在错误信息中提示用户检查目录权限
### Risk 7: 内存占用(多股票同时加载数据)
**风险:** 如果同时加载多个股票数据,内存占用可能较高
**缓解:**
- 串行执行确保一次只加载一个股票的数据
- 单个股票的数据量可控10 年约几 MB
- 未来可考虑流式处理(非当前目标)
## Migration Plan
### Step 1: 创建 config.py
1.`backtest.py` 提取数据库配置
2. 添加默认回测参数
3. 添加图表配色配置
4. 测试导入无错误
### Step 2: 创建 backtest_core.py
1. 迁移 `load_data_from_db()` 函数(导入 config
2. 迁移 `load_strategy()` 函数
3. 迁移 `apply_color_scheme()` 函数(使用 config 配置)
4. 定义 `BacktestResult` 数据类
5. 实现 `run_backtest()` 函数
6. 实现 `run_batch_backtest()` 函数
7. 单元测试核心函数
### Step 3: 创建 backtest_command.py
1. 实现 `parse_arguments()` 函数(支持 `--codes`
2. 实现 `format_single_result()` 函数(详细格式)
3. 实现 `format_batch_results()` 函数(使用 tabulate
4. 实现 `main()` 函数(调用 `run_batch_backtest()`
5. 测试单股票回测
6. 测试多股票回测
### Step 4: 更新依赖
1. 运行 `uv add tabulate` 添加依赖
2. 运行 `uv add tqdm` 添加依赖
3. 运行 `uv sync` 同步依赖
### Step 5: 删除 backtest.py
1. 确认新功能完整(单股票、多股票、图表输出)
2. 确认错误处理正确(立即失败)
3. 删除 `backtest.py` 文件
4. 更新 README 说明新的使用方式
### Rollback Strategy
如果迁移过程中发现问题:
1. 保留 `backtest.py` 直到 `backtest_command.py` 完全可用
2. 使用 `git` 版本控制,可随时回退
3. 逐步迁移(先核心函数,后 CLI确保每步可验证
## Open Questions
1. **BacktestResult 字段完整性:** 是否需要包含所有 backtesting.stats 的键,或仅包含当前用到的字段?
- 倾向:仅包含当前用到的字段(未来可扩展)
2. **表格格式选择:** tabulate 支持多种格式grid、simple、pipe、html多股票结果使用哪种
   - 倾向grid美观的边框格式
3. **进度条粒度:** tqdm 进度条应该显示每个股票的回测进度,还是仅显示批量回测的总进度?
- 倾向:仅显示批量回测的总进度(股票 N/M
4. **图表输出目录结构:** 多股票图表是平铺在 `output/` 下,还是按日期/策略分组?
- 倾向:平铺在 `output/` 下(简单)

View File

@@ -0,0 +1,54 @@
## Why
当前 `backtest.py` 存在职责混杂的问题:命令行参数解析、核心回测逻辑、数据访问、结果展示都耦合在单一文件中,导致:
- 难以在其他模块中复用回测功能
- 无法进行单元测试
- 仅支持单股票回测,无法批量处理
需要重构为分层架构,将核心逻辑与 CLI 界面分离,提升代码复用性和可维护性。
## What Changes
- **创建 `config.py`**:集中管理数据库配置、默认回测参数、图表配色
- **创建 `backtest_core.py`**:核心回测引擎
- 提供标准化接口 `run_backtest()`(单股票)
- 提供批量接口 `run_batch_backtest()`(多股票,串行执行)
- 封装数据访问和策略加载逻辑
- 返回结构化结果对象 `BacktestResult`
- **创建 `backtest_command.py`**:命令行界面
- 支持多股票代码参数 `--codes`(接受多个值)
- 使用 `tabulate` 优化批量结果的表格展示
- 使用 `tqdm` 显示批量回测进度条
- 保留原有的单股票详细输出格式
- **删除 `backtest.py`**:不再需要,功能已迁移
- **依赖更新**:添加 `tabulate` 与 `tqdm` 到项目依赖
## Capabilities
### New Capabilities
- `batch-backtest`: 批量回测功能,支持传入多个股票代码进行串行回测,并提供进度条和表格化结果展示
### Modified Capabilities
- 无(其他均为实现重构,不改变 spec 级别行为)
## Impact
- **代码影响**
- 删除 `backtest.py`284 行)
- 新增 `config.py`(约 30 行)
- 新增 `backtest_core.py`(约 250 行)
- 新增 `backtest_command.py`(约 150 行)
- **API 变化**
- 新增 `run_backtest(code, start_date, end_date, strategy_file, ...)` 函数
- 新增 `run_batch_backtest(codes, start_date, end_date, strategy_file, ...)` 函数
- 新增 `BacktestResult` 数据类
- **命令行变化**
- 单参数 `--code` 改为多值参数 `--codes`
- 新增 `--output-dir` 参数,为每个股票生成独立 HTML 图表
- 批量回测时显示进度条和表格化结果
- **依赖变化**
- 新增 `tabulate`(表格格式化)
- 新增 `tqdm`(进度条显示)
- **兼容性**
- **BREAKING**: 删除原有 `backtest.py`,命令行使用方式从 `python backtest.py` 改为 `uv run python backtest_command.py`
- 参数名称从 `--code` 改为 `--codes`

View File

@@ -0,0 +1,310 @@
# Spec: Batch Backtest
## ADDED Requirements
### Requirement: 多股票回测参数
系统 SHALL 支持通过命令行参数传入多个股票代码进行批量回测。
#### Scenario: 传入多个股票代码
- **WHEN** 用户执行 `python backtest_command.py --codes 000001.SZ 600000.SH --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategies/macd_strategy.py`
- **THEN** 系统解析所有股票代码到列表 `['000001.SZ', '600000.SH']`
- **THEN** 系统按顺序依次执行每个股票的回测
- **THEN** 系统为每个股票生成独立的回测结果
#### Scenario: 传入单个股票代码
- **WHEN** 用户执行 `python backtest_command.py --codes 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategies/macd_strategy.py`
- **THEN** 系统解析为单个股票代码列表 `['000001.SZ']`
- **THEN** 系统执行单个股票回测
- **THEN** 系统输出详细格式的回测结果
#### Scenario: 缺少 --codes 参数
- **WHEN** 用户未提供 `--codes` 参数
- **THEN** 系统输出错误信息:"错误: 需要以下参数: --codes"
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 批量回测执行
系统 SHALL 串行执行多个股票的回测,每次加载一个股票的数据并执行回测。
#### Scenario: 成功执行多个股票回测
- **WHEN** 用户传入 N 个股票代码
- **THEN** 系统循环 N 次,每次加载一个股票的数据
- **THEN** 系统每次执行完整的回测流程(数据加载、指标计算、回测执行)
- **THEN** 系统每次执行完成后生成 `BacktestResult` 对象
- **THEN** 系统返回包含 N 个 `BacktestResult` 的列表
#### Scenario: 每个股票独立预热期
- **WHEN** 系统执行第 i 个股票的回测
- **THEN** 系统使用 `start_date - warmup_days` 计算该股票的预热开始日期
- **THEN** 系统独立加载该股票的预热期数据
- **THEN** 不同股票的预热期互不影响
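预热开始日期的计算可示意如下(`warmup_start` 为假设命名):

```python
from datetime import date, timedelta


def warmup_start(start_date: str, warmup_days: int = 365) -> str:
    """预热开始日期 = start_date - warmup_days按日历日回推。"""
    return (date.fromisoformat(start_date) - timedelta(days=warmup_days)).isoformat()
```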
#### Scenario: 第一个股票回测失败
- **WHEN** 系统执行第一个股票回测时发生错误(数据库连接失败、策略加载失败等)
- **THEN** 系统捕获异常并输出错误信息
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码(立即失败策略)
#### Scenario: 中间股票回测失败
- **WHEN** 系统执行第 i 个股票回测时发生错误
- **THEN** 系统输出错误信息(包含股票代码)
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 资源管理
- **WHEN** 系统完成第 i 个股票的回测
- **THEN** 系统关闭该股票的数据库连接(`engine.dispose()`
- **THEN** 系统释放该股票的数据内存
- **THEN** 系统开始加载第 i+1 个股票的数据
---
### Requirement: 批量回测进度显示
系统 SHALL 使用 tqdm 显示批量回测的实时进度,提供用户反馈。
#### Scenario: 显示进度条
- **WHEN** 系统开始执行 N 个股票的批量回测
- **THEN** 系统显示进度条格式:`回测进度: 25%|█████▌ | 1/4 [00:30<01:30, 12.5s/it]`
- **THEN** 系统在完成每个股票回测后更新进度条
- **THEN** 进度条显示当前进度i/N、已用时间、预计剩余时间
- **THEN** 进度条在所有股票回测完成后消失
#### Scenario: 单股票回测不显示进度条
- **WHEN** 用户传入单个股票代码
- **THEN** 系统不显示 tqdm 进度条
- **THEN** 系统直接输出回测结果
#### Scenario: 进度条描述文本
- **WHEN** 系统显示批量回测进度
- **THEN** 进度条描述 SHALL 为 "回测进度"(中文)
- **THEN** 进度条显示已完成/总数(如 "1/4", "2/4"
---
### Requirement: 批量回测结果展示
系统 SHALL 使用 tabulate 将多个股票的回测结果格式化为表格,便于横向对比。
#### Scenario: 表格化输出多股票结果
- **WHEN** 用户传入多个股票代码且回测成功
- **THEN** 系统使用 tabulate 生成表格
- **THEN** 表格格式 SHALL 为 grid带边框
- **THEN** 表格列 SHALL 包含:股票代码、收益率%、胜率%、最大回撤%、交易次数、SQN
- **THEN** 系统在表格上方显示表头(中文列名)
- **THEN** 数值保留 2 位小数(交易次数为整数)
#### Scenario: 表格内容填充
- **WHEN** 系统格式化第 i 个股票的结果
- **THEN** 系统从 `BacktestResult` 对象提取字段
- **THEN** "股票代码" 列填充 `result.code`
- **THEN** "收益率%" 列填充 `result.return_pct`
- **THEN** "胜率%" 列填充 `result.win_rate`
- **THEN** "最大回撤%" 列填充 `result.max_drawdown`
- **THEN** "交易次数" 列填充 `result.trades`
- **THEN** "SQN" 列填充 `result.sqn`
#### Scenario: 单股票回测不使用表格
- **WHEN** 用户传入单个股票代码
- **THEN** 系统不使用 tabulate 生成表格
- **THEN** 系统使用详细格式输出(每个指标单独一行)
- **THEN** 系统保持原有 `print_stats()` 的输出格式
#### Scenario: 表格示例输出
- **WHEN** 用户传入 2 个股票代码
- **THEN** 系统输出格式 SHALL 为:
```
+-------------+-----------+--------+------------+----------+-------+
| 股票代码 | 收益率% | 胜率% | 最大回撤% | 交易次数 | SQN |
+-------------+-----------+--------+------------+----------+-------+
| 000001.SZ | 20.35 | 55.00 | -8.50 | 45 | 1.85 |
| 600000.SH | 15.00 | 48.00 | -12.30 | 38 | 1.42 |
+-------------+-----------+--------+------------+----------+-------+
```
---
### Requirement: 多股票图表输出
系统 SHALL 为每个股票生成独立的 HTML 图表文件,文件名格式为 `{code}.html`。
#### Scenario: 指定 --output-dir 参数
- **WHEN** 用户传入 `--output-dir output/`
- **THEN** 系统为每个股票生成 HTML 文件到 `output/{code}.html`
- **THEN** 文件名 SHALL 为股票代码,如 `000001.SZ.html`, `600000.SH.html`
- **THEN** 系统自动创建 `output/` 目录(`exist_ok=True`
- **THEN** 系统在完成后输出提示:"图表已保存到目录: output/",并列出所有生成的文件
#### Scenario: 未指定 --output-dir 参数
- **WHEN** 用户未传入 `--output-dir` 参数
- **THEN** 系统不为任何股票生成图表文件
- **THEN** 系统仅输出控制台统计信息
#### Scenario: 图表文件覆盖
- **WHEN** 系统再次执行相同的批量回测
- **THEN** 系统覆盖已存在的 HTML 文件
- **THEN** 系统不提示文件已存在
---
### Requirement: 结构化回测结果
系统 SHALL 返回标准化的 `BacktestResult` 对象,包含所有关键指标。
#### Scenario: BacktestResult 对象创建
- **WHEN** 系统完成单股票回测
- **THEN** 系统从 `stats` 对象提取指标到 `BacktestResult`
- **THEN** `BacktestResult.code` 设置为股票代码
- **THEN** `BacktestResult.start_date` 设置为回测开始日期
- **THEN** `BacktestResult.end_date` 设置为回测结束日期
- **THEN** `BacktestResult.equity_final` 设置为最终权益
- **THEN** `BacktestResult.equity_peak` 设置为峰值收益
- **THEN** `BacktestResult.return_pct` 设置为总收益率
- **THEN** `BacktestResult.buy_hold_return` 设置为买入持有收益率
- **THEN** `BacktestResult.return_annual` 设置为年化收益率
- **THEN** `BacktestResult.volatility_annual` 设置为年化波动率
- **THEN** `BacktestResult.max_drawdown` 设置为最大回撤
- **THEN** `BacktestResult.avg_drawdown` 设置为平均回撤
- **THEN** `BacktestResult.max_drawdown_duration` 设置为最大回撤持续时长
- **THEN** `BacktestResult.avg_drawdown_duration` 设置为平均回撤持续时长
- **THEN** `BacktestResult.sortino_ratio` 设置为索提诺比率
- **THEN** `BacktestResult.calmar_ratio` 设置为卡尔玛比率
- **THEN** `BacktestResult.trades` 设置为交易次数
- **THEN** `BacktestResult.win_rate` 设置为胜率
- **THEN** `BacktestResult.sqn` 设置为系统质量数
- **THEN** `BacktestResult.cash` 设置为初始资金
- **THEN** `BacktestResult.commission` 设置为手续费率
#### Scenario: BacktestResult 列表返回
- **WHEN** 系统完成批量回测
- **THEN** 系统返回 `List[BacktestResult]`
- **THEN** 列表顺序 SHALL 与输入股票代码顺序一致
- **THEN** 列表长度 SHALL 等于输入股票代码数量(成功时)
#### Scenario: BacktestResult 数据类型
- **WHEN** 系统创建 `BacktestResult` 对象
- **THEN** 数值字段 SHALL 为 float 类型(除 `trades`、`max_drawdown_duration` 为 int
- **THEN** 日期字段 SHALL 为 str 类型YYYY-MM-DD 格式)
- **THEN** 系统支持 `result.to_dict()` 方法(内部以 `dataclasses.asdict()` 实现dataclass 本身不会自动生成该方法)
---
### Requirement: 可复用回测引擎接口
系统 SHALL 提供标准化的函数接口,供其他模块调用回测功能。
#### Scenario: run_backtest 函数调用
- **WHEN** 其他模块调用 `run_backtest(code, start_date, end_date, strategy_file, cash, commission, warmup_days, output_file)`
- **THEN** 函数接收股票代码、日期范围、策略文件、回测参数、输出文件路径
- **THEN** 函数执行完整回测流程(数据加载、策略加载、指标计算、回测执行)
- **THEN** 函数返回 `BacktestResult` 对象
- **THEN** 函数不打印任何输出(纯函数)
#### Scenario: run_batch_backtest 函数调用
- **WHEN** 其他模块调用 `run_batch_backtest(codes, start_date, end_date, strategy_file, cash, commission, warmup_days, output_dir)`
- **THEN** 函数接收股票代码列表、日期范围、策略文件、回测参数、输出目录
- **THEN** 函数串行执行每个股票的回测
- **THEN** 函数返回 `List[BacktestResult]`
- **THEN** 函数显示 tqdm 进度条(批量时)
#### Scenario: 函数参数默认值
- **WHEN** 调用者不指定可选参数
- **THEN** `cash` 默认为 100000
- **THEN** `commission` 默认为 0.002
- **THEN** `warmup_days` 默认为 365
- **THEN** `output_file` 默认为 None不生成图表
- **THEN** `output_dir` 默认为 None不生成图表
#### Scenario: 函数异常抛出
- **WHEN** `run_backtest` 或 `run_batch_backtest` 执行时发生错误
- **THEN** 函数 SHALL 抛出异常(不捕获)
- **THEN** 异常类型 SHALL 为 ValueError、TypeError 或原始异常
- **THEN** 异常信息 SHALL 包含具体错误原因
- **THEN** 调用者负责捕获和处理异常
---
### Requirement: 集中配置管理
系统 SHALL 在 config.py 中集中管理数据库配置、默认回测参数、图表配色。
#### Scenario: 数据库配置访问
- **WHEN** backtest_core.py 需要数据库连接参数
- **THEN** 模块从 config 导入 `DB_HOST`, `DB_PORT`, `DB_NAME`, `DB_USER`, `DB_PASSWORD`
- **THEN** 模块使用这些常量构建连接字符串
- **THEN** 模块不重复定义数据库配置
#### Scenario: 默认参数访问
- **WHEN** backtest_core.py 需要默认回测参数
- **THEN** 模块从 config 导入 `DEFAULT_CASH`, `DEFAULT_COMMISSION`, `DEFAULT_WARMUP_DAYS`
- **THEN** 模块使用这些常量作为函数默认值
- **THEN** 模块不重复定义默认参数
#### Scenario: 图表配色访问
- **WHEN** backtest_core.py 需要设置图表配色
- **THEN** 模块从 config 导入 `BULL_COLOR`, `BEAR_COLOR`
- **THEN** 模块使用这些颜色设置 `plotting.BULL_COLOR` 和 `plotting.BEAR_COLOR`
- **THEN** 模块不重复定义颜色配置
#### Scenario: 配置文件内容
- **WHEN** 查看 config.py 文件
- **THEN** 文件包含数据库配置(DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD)
- **THEN** 文件包含默认回测参数(DEFAULT_CASH, DEFAULT_COMMISSION, DEFAULT_WARMUP_DAYS)
- **THEN** 文件包含图表配色(BULL_COLOR, BEAR_COLOR)
- **THEN** 所有配置使用明文常量(不使用环境变量)
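config.py 的一个最小示意如下。主机、端口、库名取自本仓库 dataSources.xml 中的开发库地址;用户名、密码为假设占位,配色值也是假设(仓库提交记录仅说明"红涨绿跌"):

```python
# config.py(示意;所有配置为明文常量,不读环境变量)
DB_HOST = "81.71.3.24"    # 开发库地址,见 .idea/dataSources.xml
DB_PORT = 6785
DB_NAME = "leopard_dev"
DB_USER = "user"          # 假设占位,以实际部署为准
DB_PASSWORD = "password"  # 假设占位

DEFAULT_CASH = 100000
DEFAULT_COMMISSION = 0.002
DEFAULT_WARMUP_DAYS = 365

BULL_COLOR = "red"        # 红涨(具体色值为假设)
BEAR_COLOR = "green"      # 绿跌

CONN_STR = f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
print(CONN_STR.endswith("/leopard_dev"))  # True
```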
---
### Requirement: 错误处理策略
系统 SHALL 在批量回测失败时立即停止执行,不继续处理后续股票。
#### Scenario: 数据加载失败
- **WHEN** 系统加载第 i 个股票数据时失败(数据库错误、数据不存在)
- **THEN** 系统捕获异常
- **THEN** 系统输出错误信息:"回测失败 [{code}]: {error}"
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 策略加载失败
- **WHEN** 系统加载策略文件时失败(文件不存在、接口不完整)
- **THEN** 系统捕获异常
- **THEN** 系统输出错误信息:"策略加载失败: {error}"
- **THEN** 系统停止执行所有股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 回测执行失败
- **WHEN** 系统执行第 i 个股票回测时失败(策略逻辑错误)
- **THEN** 系统捕获异常
- **THEN** 系统输出错误信息和完整堆栈跟踪
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 图表生成失败
- **WHEN** 系统生成第 i 个股票图表时失败
- **THEN** 系统捕获异常
- **THEN** 系统输出警告:"图表生成失败 [{code}]: {error},但回测已完成"
- **THEN** 系统继续执行后续股票的回测
- **THEN** 系统在返回的 `BacktestResult` 中设置 `error` 字段(如果设计支持)
---
### Requirement: 依赖管理
系统 SHALL 在 pyproject.toml 中添加 tabulate 和 tqdm 依赖。
#### Scenario: 添加 tabulate 依赖
- **WHEN** 查看 pyproject.toml 文件
- **THEN** 文件包含 `tabulate` 依赖
- **THEN** 依赖版本 SHALL 为兼容当前 Python 版本的版本
- **THEN** 系统可以导入 `import tabulate` 无错误
#### Scenario: 添加 tqdm 依赖
- **WHEN** 查看 pyproject.toml 文件
- **THEN** 文件包含 `tqdm` 依赖
- **THEN** 依赖版本 SHALL 为兼容当前 Python 版本的版本
- **THEN** 系统可以导入 `from tqdm import tqdm` 无错误
#### Scenario: 依赖安装
- **WHEN** 用户运行 `uv sync` 或 `pip install -e .`
- **THEN** 系统自动安装 tabulate 和 tqdm
- **THEN** 系统显示依赖安装进度
- **THEN** 系统完成安装后可以正常使用回测工具
#### Scenario: 依赖缺失提示
- **WHEN** 系统导入 tabulate 或 tqdm 时失败
- **THEN** 系统输出友好错误信息:"缺少依赖: {package_name},请运行: uv add {package_name}"
- **THEN** 系统退出并返回非零状态码
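依赖缺失提示可以用一个小的导入辅助函数实现(`require` 为示意用的假设函数名,提示文案与本规格一致):

```python
import importlib
import sys

def require(package_name: str):
    """导入三方依赖;缺失时输出友好提示并以非零状态码退出。"""
    try:
        return importlib.import_module(package_name)
    except ImportError:
        print(f"缺少依赖: {package_name},请运行: uv add {package_name}")
        sys.exit(1)

# 用标准库模块演示成功路径(实际调用处为 tabulate / tqdm)
mod = require("json")
print(mod.__name__)  # json
```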


@@ -0,0 +1,96 @@
## 1. 依赖管理
- [x] 1.1 运行 `uv add tabulate` 添加依赖
- [x] 1.2 运行 `uv add tqdm` 添加依赖
- [x] 1.3 运行 `uv sync` 同步依赖
## 2. 配置管理模块
- [x] 2.1 创建 config.py 文件
- [x] 2.2 在 config.py 中定义数据库配置常量(DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD)
- [x] 2.3 在 config.py 中定义默认回测参数(DEFAULT_CASH, DEFAULT_COMMISSION, DEFAULT_WARMUP_DAYS)
- [x] 2.4 在 config.py 中定义图表配色(BULL_COLOR, BEAR_COLOR)
- [x] 2.5 测试 config.py 导入无错误
## 3. 核心回测引擎
- [x] 3.1 创建 backtest_core.py 文件
- [x] 3.2 在 backtest_core.py 中导入必要模块和 config
- [x] 3.3 定义 BacktestResult dataclass包含所有回测指标字段
- [x] 3.4 迁移 load_data_from_db() 函数(使用 config 数据库配置)
- [x] 3.5 迁移 load_strategy() 函数(保持原有逻辑)
- [x] 3.6 迁移 apply_color_scheme() 函数(使用 config 配色)
- [x] 3.7 实现 run_backtest() 函数(单股票回测)
- [x] 3.7.1 实现预热期日期计算逻辑
- [x] 3.7.2 实现数据加载和策略加载调用
- [x] 3.7.3 实现指标计算和数据截取
- [x] 3.7.4 实现 Backtest 执行
- [x] 3.7.5 实现图表生成(可选)
- [x] 3.7.6 实现 BacktestResult 对象构建和返回
- [x] 3.8 实现 run_batch_backtest() 函数(批量回测,串行)
- [x] 3.8.1 实现循环遍历股票代码
- [x] 3.8.2 实现为每个股票调用 run_backtest()
- [x] 3.8.3 实现为每个股票生成独立 HTML 文件
- [x] 3.8.4 实现结果列表收集和返回
- [x] 3.8.5 实现 tqdm 进度条显示(批量时)
- [x] 3.9 测试 run_backtest() 单股票回测
- [x] 3.10 测试 run_batch_backtest() 多股票回测
## 4. CLI 界面模块
- [x] 4.1 创建 backtest_command.py 文件
- [x] 4.2 在 backtest_command.py 中导入必要模块和 backtest_core
- [x] 4.3 实现 parse_arguments() 函数
- [x] 4.3.1 定义 --codes 多值参数nargs='+'
- [x] 4.3.2 定义 --output-dir 目录参数
- [x] 4.3.3 保持原有参数(--start-date, --end-date, --strategy-file, --cash, --commission, --warmup-days)
- [x] 4.3.4 添加参数帮助文档和示例
- [x] 4.4 实现 format_single_result() 函数(详细格式输出)
- [x] 4.4.1 实现每个指标单独一行的格式化
- [x] 4.4.2 保持原有 print_stats() 的输出格式
- [x] 4.5 实现 format_batch_results() 函数(表格格式输出)
- [x] 4.5.1 实现使用 tabulate 生成表格
- [x] 4.5.2 定义表格列:股票代码、收益率%、胜率%、最大回撤%、交易次数、SQN
- [x] 4.5.3 实现表格数据填充(从 BacktestResult 对象提取)
- [x] 4.5.4 实现表格格式为 grid
- [x] 4.6 实现 main() 函数
- [x] 4.6.1 调用 parse_arguments() 解析参数
- [x] 4.6.2 调用 run_batch_backtest() 执行批量回测
- [x] 4.6.3 根据结果数量调用 format_single_result() 或 format_batch_results()
- [x] 4.6.4 实现图表保存提示(指定 --output-dir 时)
- [x] 4.6.5 实现错误捕获和友好错误信息输出
- [x] 4.6.6 实现退出状态码设置(成功 0,失败非零)
- [x] 4.7 添加 `if __name__ == "__main__": main()` 入口
- [x] 4.8 测试单股票回测命令行调用 (`uv run python backtest_command.py`)
- [x] 4.9 测试多股票回测命令行调用 (`uv run python backtest_command.py`)
- [x] 4.10 测试错误处理(参数缺失、文件不存在等)
## 5. 清理旧代码
- [x] 5.1 确认新功能完整(单股票、多股票、图表输出)
- [x] 5.2 确认错误处理正确(立即失败)
- [x] 5.3 删除 backtest.py 文件
- [x] 5.4 验证 git 状态(仅删除旧文件,无其他修改)
## 6. 文档更新
- [x] 6.1 更新 README.md(如果存在)
- [x] 6.1.1 说明新的命令行使用方式(`uv run python backtest_command.py`)
- [x] 6.1.2 说明参数变化(--code 改为 --codes)
- [x] 6.1.3 提供单股票和多股票示例
- [x] 6.1.4 说明 --output-dir 用法(多股票图表)
- [x] 6.2 创建 note_refactor.md(可选),记录重构说明
- [x] 6.2.1 说明文件结构变化
- [x] 6.2.2 说明接口变化
- [x] 6.2.3 提供迁移指南
## 7. 集成测试
- [x] 7.1 测试单个股票完整流程(000001.SZ)
- [x] 7.2 测试多个股票完整流程(000001.SZ 600000.SH)
- [x] 7.3 测试指定 --output-dir 生成图表
- [x] 7.4 测试不指定 --output-dir(不生成图表)
- [x] 7.5 测试错误情况(无效股票代码、不存在的策略文件等)
- [x] 7.6 验证进度条显示(多股票时)
- [x] 7.7 验证表格格式输出(多股票时)
- [x] 7.8 验证详细格式输出(单股票时)

openspec/config.yaml Normal file

@@ -0,0 +1,7 @@
schema: spec-driven
Example:
context: |
  使用 uv 工具进行 python 环境的管理和三方依赖的管理,运行 python 命令的时候使用 uv run python xxx
  严禁在主机环境直接运行 pip、pip3 安装依赖包,必须使用 uv add xxx 命令安装
  项目面向中文开发者,文档输出、日志输出、agent 交流时都要使用中文


@@ -0,0 +1,195 @@
# Spec: Backtest CLI
## ADDED Requirements
### Requirement: 命令行参数解析
回测脚本 SHALL 通过命令行参数接收用户输入,参数 SHALL 包含股票代码、时间范围、策略文件、回测参数等。
#### Scenario: 基础回测执行
- **WHEN** 用户执行 `python backtest.py --code 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategy.py`
- **THEN** 系统解析所有必需参数,无错误提示
- **THEN** 开始执行回测流程
- **THEN** 回测完成后输出统计信息到控制台
#### Scenario: 可选参数未指定
- **WHEN** 用户未指定 `--cash` 参数
- **THEN** 系统使用默认值 100000 作为初始资金
- **WHEN** 用户未指定 `--commission` 参数
- **THEN** 系统使用默认值 0.002 作为手续费率
- **WHEN** 用户未指定 `--output` 参数
- **THEN** 系统不生成 HTML 图表文件
#### Scenario: 必需参数缺失
- **WHEN** 用户未提供 `--code` 参数
- **THEN** 系统输出错误信息:"错误: 需要以下参数: --code"
- **THEN** 系统退出并返回非零状态码
- **WHEN** 用户未提供 `--start-date``--end-date` 参数
- **THEN** 系统输出对应的错误信息
- **THEN** 系统退出并返回非零状态码
#### Scenario: 自定义参数值
- **WHEN** 用户指定 `--cash 500000 --commission 0.001 --output result.html`
- **THEN** 系统使用指定的 500000 作为初始资金
- **THEN** 系统使用指定的 0.001 作为手续费率
- **THEN** 回测完成后生成 HTML 图表到 result.html
---
### Requirement: 数据库数据加载
回测脚本 SHALL 从 PostgreSQL 数据库加载指定股票的历史价格数据,并自动处理复权。
#### Scenario: 成功加载数据
- **WHEN** 用户指定有效的股票代码和时间范围
- **THEN** 系统连接数据库并执行查询
- **THEN** 返回 DataFrame,包含列 [Open, High, Low, Close, Volume, factor]
- **THEN** DataFrame 的索引为 trade_date (DatetimeIndex)
- **THEN** 数据已应用复权计算(price * factor)
#### Scenario: 数据库连接失败
- **WHEN** 数据库连接失败(凭证错误、网络问题等)
- **THEN** 系统捕获异常并输出错误信息:"数据库连接失败: {error}"
- **THEN** 系统退出并返回非零状态码
#### Scenario: 未找到股票数据
- **WHEN** 指定的股票代码或时间范围内无数据
- **THEN** 系统抛出 ValueError: "未找到股票 {code} 在指定时间范围内的数据"
- **THEN** 主流程捕获异常并输出友好错误信息
- **THEN** 系统退出并返回非零状态码
#### Scenario: 数据验证
- **WHEN** 数据库返回的 DataFrame 为空
- **THEN** 系统提示数据为空并退出
- **WHEN** 数据库返回的 DataFrame 少于 10 条记录
- **THEN** 系统提示数据不足并退出
---
### Requirement: 策略动态加载
回测脚本 SHALL 支持动态加载指定路径的策略文件,并验证策略接口。
#### Scenario: 加载有效策略文件
- **WHEN** 用户指定 `--strategy-file strategy.py`
- **THEN** 系统通过 importlib 加载该模块
- **THEN** 系统获取模块的 `calculate_indicators` 函数
- **THEN** 系统调用模块的 `get_strategy()` 函数获取策略类
- **THEN** 系统返回 (calculate_indicators, strategy_class) 元组
#### Scenario: 策略文件不存在
- **WHEN** 用户指定的策略文件路径不存在
- **THEN** 系统捕获 FileNotFoundError
- **THEN** 输出错误信息:"策略文件 {file} 不存在"
- **THEN** 系统退出并返回非零状态码
#### Scenario: 策略接口不完整
- **WHEN** 策略文件缺少 `calculate_indicators` 函数
- **THEN** 系统捕获 AttributeError
- **THEN** 输出错误信息:"策略文件 {file} 缺少 calculate_indicators 函数"
- **THEN** 系统退出并返回非零状态码
- **WHEN** 策略文件缺少 `get_strategy` 函数
- **THEN** 系统捕获 AttributeError
- **THEN** 输出错误信息:"策略文件 {file} 缺少 get_strategy 函数"
- **THEN** 系统退出并返回非零状态码
#### Scenario: 加载子目录中的策略
- **WHEN** 用户指定 `--strategy-file strategies/macd_strategy.py`
- **THEN** 系统正确加载子目录中的策略模块
- **THEN** 系统成功获取策略类和指标计算函数
---
### Requirement: 指标计算
回测脚本 SHALL 在执行回测前调用策略的指标计算函数,将技术指标添加到数据集中。
#### Scenario: 成功计算指标
- **WHEN** 系统调用 `calculate_indicators(data)`
- **THEN** 函数接收包含 [Open, High, Low, Close, Volume, factor] 的 DataFrame
- **THEN** 函数计算策略所需的指标(如 SMA, MACD, RSI)
- **THEN** 函数返回添加了指标列的 DataFrame
- **THEN** DataFrame 保留原始列,新增指标列
#### Scenario: 指标计算产生 NaN 值
- **WHEN** 滚动窗口计算导致前 N 行的指标值为 NaN
- **THEN** DataFrame 包含 NaN 值(系统不自动删除)
- **THEN** Backtest 框架在回测时会跳过 NaN 值的行
#### Scenario: 指标计算函数抛出异常
- **WHEN** `calculate_indicators(data)` 执行时抛出异常
- **THEN** 主流程捕获异常
- **THEN** 输出错误信息:"指标计算失败: {error}"
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 回测执行
回测脚本 SHALL 使用 backtesting 库执行回测,传入数据、策略和参数。
#### Scenario: 成功执行回测
- **WHEN** 系统调用 `Backtest(data, strategy_class, cash=..., commission=...).run()`
- **THEN** Backtest 初始化时调用策略类的 `init()` 方法
- **THEN** Backtest 逐个时间步调用策略类的 `next()` 方法
- **THEN** 系统返回包含回测统计信息的 stats 对象
#### Scenario: 回测参数传递
- **WHEN** 用户指定 `--cash 500000 --commission 0.001`
- **THEN** Backtest 实例化时使用 cash=500000
- **THEN** Backtest 实例化时使用 commission=0.001
- **THEN** Backtest 实例化时使用 finalize_trades=True
#### Scenario: 回测运行时错误
- **WHEN** 策略的 `next()` 方法执行时抛出异常
- **THEN** backtesting 库捕获异常
- **THEN** 系统输出错误信息和堆栈跟踪
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 结果输出
回测脚本 SHALL 将回测统计信息格式化输出到控制台,并可选生成 HTML 图表文件。
#### Scenario: 控制台输出
- **WHEN** 回测成功完成
- **THEN** 系统调用 `print_stats(stats)` 函数
- **THEN** 系统输出回测统计信息,使用中文标签
- **THEN** 输出内容包括:最终收益、总收益率、年化收益率、最大回撤、胜率等
- **THEN** 数值格式化(保留 2 位小数)
#### Scenario: 生成 HTML 图表
- **WHEN** 用户指定 `--output result.html`
- **THEN** 系统调用 `bt.plot(filename='result.html', show=False)`
- **THEN** 系统生成 HTML 文件到 result.html
- **THEN** 系统输出提示:"图表已保存到: result.html"
- **THEN** 图表包含价格曲线、资金曲线、买卖信号等
#### Scenario: 不生成 HTML 图表
- **WHEN** 用户未指定 `--output` 参数
- **THEN** 系统不调用 bt.plot() 方法
- **THEN** 系统不生成任何图表文件
- **THEN** 系统仅输出控制台统计信息
#### Scenario: 图表生成失败
- **WHEN** bt.plot() 方法执行时抛出异常
- **THEN** 系统捕获异常
- **THEN** 系统输出警告:"图表生成失败,但回测已完成: {error}"
- **THEN** 系统不影响控制台统计信息的输出
- **THEN** 系统正常退出(返回状态码 0)
---
### Requirement: 错误处理
回测脚本 SHALL 对所有可能的错误进行捕获和处理,提供友好的错误提示。
#### Scenario: 数据库错误
- **WHEN** 数据库操作抛出 sqlalchemy.exc.SQLAlchemyError
- **THEN** 系统输出错误信息:"数据库错误: {error}"
- **THEN** 系统退出并返回状态码 2
#### Scenario: 文件操作错误
- **WHEN** 图表文件保存失败(权限、磁盘空间等)
- **THEN** 系统输出错误信息:"文件操作错误: {error}"
- **THEN** 系统退出并返回状态码 3
#### Scenario: 未预期的错误
- **WHEN** 发生其他未捕获的异常
- **THEN** 系统输出错误信息:"未知错误: {error}"
- **THEN** 系统输出完整的堆栈跟踪
- **THEN** 系统退出并返回状态码 1


@@ -0,0 +1,312 @@
# batch-backtest Specification
## Purpose
TBD - created by archiving change refactor-backtest-separate-cli. Update Purpose after archive.
## Requirements
### Requirement: 多股票回测参数
系统 SHALL 支持通过命令行参数传入多个股票代码进行批量回测。
#### Scenario: 传入多个股票代码
- **WHEN** 用户执行 `python backtest_command.py --codes 000001.SZ 600000.SH --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategies/macd_strategy.py`
- **THEN** 系统解析所有股票代码到列表 `['000001.SZ', '600000.SH']`
- **THEN** 系统按顺序依次执行每个股票的回测
- **THEN** 系统为每个股票生成独立的回测结果
#### Scenario: 传入单个股票代码
- **WHEN** 用户执行 `python backtest_command.py --codes 000001.SZ --start-date 2024-01-01 --end-date 2025-12-31 --strategy-file strategies/macd_strategy.py`
- **THEN** 系统解析为单个股票代码列表 `['000001.SZ']`
- **THEN** 系统执行单个股票回测
- **THEN** 系统输出详细格式的回测结果
#### Scenario: 缺少 --codes 参数
- **WHEN** 用户未提供 `--codes` 参数
- **THEN** 系统输出错误信息:"错误: 需要以下参数: --codes"
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 批量回测执行
系统 SHALL 串行执行多个股票的回测,每次加载一个股票的数据并执行回测。
#### Scenario: 成功执行多个股票回测
- **WHEN** 用户传入 N 个股票代码
- **THEN** 系统循环 N 次,每次加载一个股票的数据
- **THEN** 系统每次执行完整的回测流程(数据加载、指标计算、回测执行)
- **THEN** 系统每次执行完成后生成 `BacktestResult` 对象
- **THEN** 系统返回包含 N 个 `BacktestResult` 的列表
#### Scenario: 每个股票独立预热期
- **WHEN** 系统执行第 i 个股票的回测
- **THEN** 系统使用 `start_date - warmup_days` 计算该股票的预热开始日期
- **THEN** 系统独立加载该股票的预热期数据
- **THEN** 不同股票的预热期互不影响
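预热开始日期的计算(`start_date - warmup_days`)可以用标准库 datetime 示意(`warmup_start` 为示意用的假设函数名):

```python
from datetime import date, timedelta

def warmup_start(start_date: str, warmup_days: int = 365) -> str:
    """由回测开始日期倒推预热期开始日期,返回 YYYY-MM-DD 字符串。"""
    d = date.fromisoformat(start_date) - timedelta(days=warmup_days)
    return d.isoformat()

# 2023 年为平年,2024-01-01 倒推 365 天恰好是 2023-01-01
print(warmup_start("2024-01-01"))  # 2023-01-01
```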
#### Scenario: 第一个股票回测失败
- **WHEN** 系统执行第一个股票回测时发生错误(数据库连接失败、策略加载失败等)
- **THEN** 系统捕获异常并输出错误信息
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码(立即失败策略)
#### Scenario: 中间股票回测失败
- **WHEN** 系统执行第 i 个股票回测时发生错误
- **THEN** 系统输出错误信息(包含股票代码)
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 资源管理
- **WHEN** 系统完成第 i 个股票的回测
- **THEN** 系统关闭该股票的数据库连接(`engine.dispose()`)
- **THEN** 系统释放该股票的数据内存
- **THEN** 系统开始加载第 i+1 个股票的数据
---
### Requirement: 批量回测进度显示
系统 SHALL 使用 tqdm 显示批量回测的实时进度,提供用户反馈。
#### Scenario: 显示进度条
- **WHEN** 系统开始执行 N 个股票的批量回测
- **THEN** 系统显示进度条格式:`回测进度: 25%|█████▌ | 1/4 [00:30<01:30, 12.5s/it]`
- **THEN** 系统在完成每个股票回测后更新进度条
- **THEN** 进度条显示当前进度(i/N)、已用时间、预计剩余时间
- **THEN** 进度条在所有股票回测完成后消失
#### Scenario: 单股票回测不显示进度条
- **WHEN** 用户传入单个股票代码
- **THEN** 系统不显示 tqdm 进度条
- **THEN** 系统直接输出回测结果
#### Scenario: 进度条描述文本
- **WHEN** 系统显示批量回测进度
- **THEN** 进度条描述 SHALL 为 "回测进度"(中文)
- **THEN** 进度条显示已完成/总数(如 "1/4", "2/4")
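"多股票显示进度条、单股票不显示"的分支可以这样示意(假定 tqdm 已按本规格安装;循环体为占位,实际应调用 run_backtest):

```python
from tqdm import tqdm

def run_batch(codes):
    results = []
    # 仅在批量(多于一个股票)时包一层 tqdm,desc 为中文"回测进度"
    iterator = tqdm(codes, desc="回测进度") if len(codes) > 1 else codes
    for code in iterator:
        results.append(code.upper())  # 占位:实际应为 run_backtest(code, ...)
    return results

print(run_batch(["000001.sz", "600000.sh"]))
```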
---
### Requirement: 批量回测结果展示
系统 SHALL 使用 tabulate 将多个股票的回测结果格式化为表格,便于横向对比。
#### Scenario: 表格化输出多股票结果
- **WHEN** 用户传入多个股票代码且回测成功
- **THEN** 系统使用 tabulate 生成表格
- **THEN** 表格格式 SHALL 为 grid(带边框)
- **THEN** 表格列 SHALL 包含:股票代码、收益率%、胜率%、最大回撤%、交易次数、SQN
- **THEN** 系统在表格上方显示表头(中文列名)
- **THEN** 数值保留 2 位小数(交易次数为整数)
#### Scenario: 表格内容填充
- **WHEN** 系统格式化第 i 个股票的结果
- **THEN** 系统从 `BacktestResult` 对象提取字段
- **THEN** "股票代码" 列填充 `result.code`
- **THEN** "收益率%" 列填充 `result.return_pct`
- **THEN** "胜率%" 列填充 `result.win_rate`
- **THEN** "最大回撤%" 列填充 `result.max_drawdown`
- **THEN** "交易次数" 列填充 `result.trades`
- **THEN** "SQN" 列填充 `result.sqn`
#### Scenario: 单股票回测不使用表格
- **WHEN** 用户传入单个股票代码
- **THEN** 系统不使用 tabulate 生成表格
- **THEN** 系统使用详细格式输出(每个指标单独一行)
- **THEN** 系统保持原有 `print_stats()` 的输出格式
#### Scenario: 表格示例输出
- **WHEN** 用户传入 2 个股票代码
- **THEN** 系统输出格式 SHALL 为:
```
+-------------+-----------+--------+------------+----------+-------+
| 股票代码 | 收益率% | 胜率% | 最大回撤% | 交易次数 | SQN |
+-------------+-----------+--------+------------+----------+-------+
| 000001.SZ | 20.35 | 55.00 | -8.50 | 45 | 1.85 |
| 600000.SH | 15.00 | 48.00 | -12.30 | 38 | 1.42 |
+-------------+-----------+--------+------------+----------+-------+
```
---
### Requirement: 多股票图表输出
系统 SHALL 为每个股票生成独立的 HTML 图表文件,文件名格式为 `{code}.html`。
#### Scenario: 指定 --output-dir 参数
- **WHEN** 用户传入 `--output-dir output/`
- **THEN** 系统为每个股票生成 HTML 文件到 `output/{code}.html`
- **THEN** 文件名 SHALL 为股票代码,如 `000001.SZ.html`, `600000.SH.html`
- **THEN** 系统自动创建 `output/` 目录(`exist_ok=True`)
- **THEN** 系统在完成后输出提示:"图表已保存到目录: output/",并列出所有文件
#### Scenario: 未指定 --output-dir 参数
- **WHEN** 用户未传入 `--output-dir` 参数
- **THEN** 系统不为任何股票生成图表文件
- **THEN** 系统仅输出控制台统计信息
#### Scenario: 图表文件覆盖
- **WHEN** 系统再次执行相同的批量回测
- **THEN** 系统覆盖已存在的 HTML 文件
- **THEN** 系统不提示文件已存在
---
### Requirement: 结构化回测结果
系统 SHALL 返回标准化的 `BacktestResult` 对象,包含所有关键指标。
#### Scenario: BacktestResult 对象创建
- **WHEN** 系统完成单股票回测
- **THEN** 系统从 `stats` 对象提取指标到 `BacktestResult`
- **THEN** `BacktestResult.code` 设置为股票代码
- **THEN** `BacktestResult.start_date` 设置为回测开始日期
- **THEN** `BacktestResult.end_date` 设置为回测结束日期
- **THEN** `BacktestResult.equity_final` 设置为最终权益
- **THEN** `BacktestResult.equity_peak` 设置为权益峰值
- **THEN** `BacktestResult.return_pct` 设置为总收益率
- **THEN** `BacktestResult.buy_hold_return` 设置为买入持有收益率
- **THEN** `BacktestResult.return_annual` 设置为年化收益率
- **THEN** `BacktestResult.volatility_annual` 设置为年化波动率
- **THEN** `BacktestResult.max_drawdown` 设置为最大回撤
- **THEN** `BacktestResult.avg_drawdown` 设置为平均回撤
- **THEN** `BacktestResult.max_drawdown_duration` 设置为最大回撤持续时长
- **THEN** `BacktestResult.avg_drawdown_duration` 设置为平均回撤持续时长
- **THEN** `BacktestResult.sortino_ratio` 设置为索提诺比率
- **THEN** `BacktestResult.calmar_ratio` 设置为卡尔玛比率
- **THEN** `BacktestResult.trades` 设置为交易次数
- **THEN** `BacktestResult.win_rate` 设置为胜率
- **THEN** `BacktestResult.sqn` 设置为系统质量数
- **THEN** `BacktestResult.cash` 设置为初始资金
- **THEN** `BacktestResult.commission` 设置为手续费率
#### Scenario: BacktestResult 列表返回
- **WHEN** 系统完成批量回测
- **THEN** 系统返回 `List[BacktestResult]`
- **THEN** 列表顺序 SHALL 与输入股票代码顺序一致
- **THEN** 列表长度 SHALL 等于输入股票代码数量(成功时)
#### Scenario: BacktestResult 数据类型
- **WHEN** 系统创建 `BacktestResult` 对象
- **THEN** 数值字段 SHALL 为 float 类型(除 `trades`、`max_drawdown_duration` 为 int)
- **THEN** 日期字段 SHALL 为 str 类型(YYYY-MM-DD 格式)
- **THEN** 系统支持 `result.to_dict()` 方法(dataclass 自动生成)
---
### Requirement: 可复用回测引擎接口
系统 SHALL 提供标准化的函数接口,供其他模块调用回测功能。
#### Scenario: run_backtest 函数调用
- **WHEN** 其他模块调用 `run_backtest(code, start_date, end_date, strategy_file, cash, commission, warmup_days, output_file)`
- **THEN** 函数接收股票代码、日期范围、策略文件、回测参数、输出文件路径
- **THEN** 函数执行完整回测流程(数据加载、策略加载、指标计算、回测执行)
- **THEN** 函数返回 `BacktestResult` 对象
- **THEN** 函数不打印任何输出(纯函数)
#### Scenario: run_batch_backtest 函数调用
- **WHEN** 其他模块调用 `run_batch_backtest(codes, start_date, end_date, strategy_file, cash, commission, warmup_days, output_dir)`
- **THEN** 函数接收股票代码列表、日期范围、策略文件、回测参数、输出目录
- **THEN** 函数串行执行每个股票的回测
- **THEN** 函数返回 `List[BacktestResult]`
- **THEN** 函数显示 tqdm 进度条(批量时)
#### Scenario: 函数参数默认值
- **WHEN** 调用者不指定可选参数
- **THEN** `cash` 默认为 100000
- **THEN** `commission` 默认为 0.002
- **THEN** `warmup_days` 默认为 365
- **THEN** `output_file` 默认为 None(不生成图表)
- **THEN** `output_dir` 默认为 None(不生成图表)
#### Scenario: 函数异常抛出
- **WHEN** `run_backtest` 或 `run_batch_backtest` 执行时发生错误
- **THEN** 函数 SHALL 抛出异常(不捕获)
- **THEN** 异常类型 SHALL 为 ValueError、TypeError 或原始异常
- **THEN** 异常信息 SHALL 包含具体错误原因
- **THEN** 调用者负责捕获和处理异常
---
### Requirement: 集中配置管理
系统 SHALL 在 config.py 中集中管理数据库配置、默认回测参数、图表配色。
#### Scenario: 数据库配置访问
- **WHEN** backtest_core.py 需要数据库连接参数
- **THEN** 模块从 config 导入 `DB_HOST`, `DB_PORT`, `DB_NAME`, `DB_USER`, `DB_PASSWORD`
- **THEN** 模块使用这些常量构建连接字符串
- **THEN** 模块不重复定义数据库配置
#### Scenario: 默认参数访问
- **WHEN** backtest_core.py 需要默认回测参数
- **THEN** 模块从 config 导入 `DEFAULT_CASH`, `DEFAULT_COMMISSION`, `DEFAULT_WARMUP_DAYS`
- **THEN** 模块使用这些常量作为函数默认值
- **THEN** 模块不重复定义默认参数
#### Scenario: 图表配色访问
- **WHEN** backtest_core.py 需要设置图表配色
- **THEN** 模块从 config 导入 `BULL_COLOR`, `BEAR_COLOR`
- **THEN** 模块使用这些颜色设置 `plotting.BULL_COLOR` 和 `plotting.BEAR_COLOR`
- **THEN** 模块不重复定义颜色配置
#### Scenario: 配置文件内容
- **WHEN** 查看 config.py 文件
- **THEN** 文件包含数据库配置(DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD)
- **THEN** 文件包含默认回测参数(DEFAULT_CASH, DEFAULT_COMMISSION, DEFAULT_WARMUP_DAYS)
- **THEN** 文件包含图表配色(BULL_COLOR, BEAR_COLOR)
- **THEN** 所有配置使用明文常量(不使用环境变量)
---
### Requirement: 错误处理策略
系统 SHALL 在批量回测失败时立即停止执行,不继续处理后续股票。
#### Scenario: 数据加载失败
- **WHEN** 系统加载第 i 个股票数据时失败(数据库错误、数据不存在)
- **THEN** 系统捕获异常
- **THEN** 系统输出错误信息:"回测失败 [{code}]: {error}"
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 策略加载失败
- **WHEN** 系统加载策略文件时失败(文件不存在、接口不完整)
- **THEN** 系统捕获异常
- **THEN** 系统输出错误信息:"策略加载失败: {error}"
- **THEN** 系统停止执行所有股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 回测执行失败
- **WHEN** 系统执行第 i 个股票回测时失败(策略逻辑错误)
- **THEN** 系统捕获异常
- **THEN** 系统输出错误信息和完整堆栈跟踪
- **THEN** 系统停止执行后续股票的回测
- **THEN** 系统退出并返回非零状态码
#### Scenario: 图表生成失败
- **WHEN** 系统生成第 i 个股票图表时失败
- **THEN** 系统捕获异常
- **THEN** 系统输出警告:"图表生成失败 [{code}]: {error},但回测已完成"
- **THEN** 系统继续执行后续股票的回测
- **THEN** 系统在返回的 `BacktestResult` 中设置 `error` 字段(如果设计支持)
---
### Requirement: 依赖管理
系统 SHALL 在 pyproject.toml 中添加 tabulate 和 tqdm 依赖。
#### Scenario: 添加 tabulate 依赖
- **WHEN** 查看 pyproject.toml 文件
- **THEN** 文件包含 `tabulate` 依赖
- **THEN** 依赖版本 SHALL 为兼容当前 Python 版本的版本
- **THEN** 系统可以导入 `import tabulate` 无错误
#### Scenario: 添加 tqdm 依赖
- **WHEN** 查看 pyproject.toml 文件
- **THEN** 文件包含 `tqdm` 依赖
- **THEN** 依赖版本 SHALL 为兼容当前 Python 版本的版本
- **THEN** 系统可以导入 `from tqdm import tqdm` 无错误
#### Scenario: 依赖安装
- **WHEN** 用户运行 `uv sync` 或 `pip install -e .`
- **THEN** 系统自动安装 tabulate 和 tqdm
- **THEN** 系统显示依赖安装进度
- **THEN** 系统完成安装后可以正常使用回测工具
#### Scenario: 依赖缺失提示
- **WHEN** 系统导入 tabulate 或 tqdm 时失败
- **THEN** 系统输出友好错误信息:"缺少依赖: {package_name},请运行: uv add {package_name}"
- **THEN** 系统退出并返回非零状态码


@@ -0,0 +1,280 @@
# Spec: Data Fetching
## ADDED Requirements
### Requirement: 数据库连接配置
系统 SHALL 通过硬编码常量管理数据库连接参数(开发环境)。
#### Scenario: 使用硬编码常量
- **WHEN** 系统在 backtest.py 中定义数据库配置
- **THEN** 系统定义 DB_HOST, DB_NAME, DB_USER, DB_PASSWORD 常量
- **THEN** DB_HOST 值 SHALL 为数据库主机地址(如 '81.71.3.24')
- **THEN** DB_NAME 值 SHALL 为数据库名称(如 'leopard_dev')
- **THEN** DB_USER 值 SHALL 为数据库用户名
- **THEN** DB_PASSWORD 值 SHALL 为数据库密码
#### Scenario: 构建连接字符串
- **WHEN** 系统创建 SQLAlchemy 连接
- **THEN** 系统使用硬编码的常量构建连接字符串
- **THEN** 连接字符串格式 SHALL 为 `postgresql://{user}:{password}@{host}/{database}`
- **THEN** 不从环境变量读取任何凭证
#### Scenario: 修改数据库凭证
- **WHEN** 开发人员需要更换数据库或凭证
- **THEN** 开发人员直接修改 backtest.py 中的常量值
- **THEN** 修改后脚本使用新凭证连接数据库
---
### Requirement: 数据库连接建立
系统 SHALL 使用 SQLAlchemy 创建 PostgreSQL 数据库连接。
#### Scenario: 成功建立连接
- **WHEN** 凭证正确且数据库可访问
- **THEN** 系统使用 `sqlalchemy.create_engine(conn_str)` 创建引擎
- **THEN** 连接字符串格式 SHALL 为 `postgresql://{user}:{password}@{host}/{database}`
- **THEN** 系统成功创建引擎对象
- **THEN** 系统可用于执行查询
#### Scenario: 连接字符串构建
- **WHEN** 系统构建 PostgreSQL 连接字符串
- **THEN** 连接字符串 SHALL 正确编码特殊字符(密码中的 @, : 等)
- **THEN** 连接字符串 SHALL 使用标准 URI 格式
- **THEN** 连接字符串 SHALL 不包含额外选项(仅基础连接参数)
#### Scenario: 数据库连接失败
- **WHEN** 凭证错误或数据库不可达
- **THEN** SQLAlchemy 抛出 `sqlalchemy.exc.OperationalError`
- **THEN** 主流程捕获异常
- **THEN** 系统输出错误信息:"数据库连接失败: {error}"
- **THEN** 系统退出并返回状态码 2
#### Scenario: 连接池管理
- **WHEN** 系统创建引擎对象
- **THEN** SQLAlchemy SHALL 自动管理连接池
- **THEN** 查询后连接 SHALL 自动返回池中
- **THEN** 系统 SHALL 在查询完成后调用 `engine.dispose()` 清理
---
### Requirement: SQL 查询构建
系统 SHALL 构建参数化的 SQL 查询以获取股票历史数据。
#### Scenario: 基础查询结构
- **WHEN** 系统构建查询
- **THEN** 查询 SHALL 选择 trade_date, Open, High, Low, Close, Volume, factor
- **THEN** 查询 SHALL 连接 leopard_daily 和 leopard_stock 表
- **THEN** 查询 SHALL 按 stock.code 过滤
- **THEN** 查询 SHALL 按 trade_date 范围过滤
- **THEN** 查询 SHALL 按 trade_date 升序排序
#### Scenario: 复权价格计算
- **WHEN** 系统计算复权价格
- **THEN** Open SHALL 计算为 `open * factor`
- **THEN** Close SHALL 计算为 `close * factor`
- **THEN** High SHALL 计算为 `high * factor`
- **THEN** Low SHALL 计算为 `low * factor`
- **THEN** Volume SHALL 直接使用原始值(不复权)
- **THEN** factor SHALL 使用 `COALESCE(factor, 1.0)` 处理 NULL 值
#### Scenario: 参数化股票代码
- **WHEN** 用户指定股票代码(如 '000001.SZ')
- **THEN** 查询 WHERE 子句 SHALL 使用 `stock.code = '{code}'`
- **THEN** 代码 SHALL 精确匹配(不使用 LIKE)
- **THEN** 查询 SHALL 返回匹配股票的所有日线数据
#### Scenario: 参数化日期范围
- **WHEN** 用户指定开始日期 '2024-01-01' 和结束日期 '2025-12-31'
- **THEN** 查询 WHERE 子句 SHALL 使用 `BETWEEN '{start_date} 00:00:00' AND '{end_date} 23:59:59'`
- **THEN** 00:00:00 和 23:59:59 SHALL 覆盖全天
- **THEN** 日期格式 SHALL 为 YYYY-MM-DD HH:MM:SS
#### Scenario: 完整 SQL 查询
- **WHEN** 系统执行数据加载
- **THEN** 查询 SHALL 为:
```sql
SELECT
trade_date,
open * factor AS Open,
close * factor AS Close,
high * factor AS High,
low * factor AS Low,
volume AS Volume,
COALESCE(factor, 1.0) AS factor
FROM leopard_daily daily
LEFT JOIN leopard_stock stock ON stock.id = daily.stock_id
WHERE stock.code = '{code}'
AND daily.trade_date BETWEEN '{start_date} 00:00:00'
AND '{end_date} 23:59:59'
ORDER BY daily.trade_date
```
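按本规格,查询通过字符串插值构建(非绑定参数)。一个构建函数的示意如下;注意一个细节:PostgreSQL 会把未加引号的别名转为小写,若要让 pandas 直接拿到首字母大写的列名,别名可能需要加双引号(此处按规格原样保留未加引号的写法,`build_query` 为示意用的假设函数名):

```python
QUERY_TMPL = """
SELECT
    trade_date,
    open * factor  AS Open,
    close * factor AS Close,
    high * factor  AS High,
    low * factor   AS Low,
    volume         AS Volume,
    COALESCE(factor, 1.0) AS factor
FROM leopard_daily daily
LEFT JOIN leopard_stock stock ON stock.id = daily.stock_id
WHERE stock.code = '{code}'
  AND daily.trade_date BETWEEN '{start_date} 00:00:00'
                           AND '{end_date} 23:59:59'
ORDER BY daily.trade_date
"""

def build_query(code: str, start_date: str, end_date: str) -> str:
    """按规格用字符串插值填充股票代码与日期范围。"""
    return QUERY_TMPL.format(code=code, start_date=start_date, end_date=end_date)

q = build_query("000001.SZ", "2024-01-01", "2025-12-31")
print("stock.code = '000001.SZ'" in q)  # True
```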
---
### Requirement: 数据查询执行
系统 SHALL 使用 pandas 的 `read_sql` 函数执行 SQL 查询并返回 DataFrame。
#### Scenario: 成功执行查询
- **WHEN** SQL 查询有效且数据存在
- **THEN** 系统调用 `pd.read_sql(query, engine)`
- **THEN** 系统返回 DataFrame 对象
- **THEN** DataFrame SHALL 包含查询结果的所有列
- **THEN** DataFrame 行数 SHALL 匹配数据库返回的记录数
#### Scenario: 数据类型处理
- **WHEN** pandas 读取 SQL 结果
- **THEN** trade_date SHALL 自动转换为 datetime 类型
- **THEN** Open, High, Low, Close, Volume SHALL 为 float 类型
- **THEN** factor SHALL 为 float 类型
- **THEN** 系统不需要手动类型转换(除日期索引设置)
#### Scenario: 查询返回空结果
- **WHEN** 指定股票代码或日期范围无数据
- **THEN** `read_sql` 返回空 DataFrame(0 行)
- **THEN** 系统检查 `len(df) == 0`
- **THEN** 系统抛出 ValueError: "未找到股票 {code} 在指定时间范围内的数据"
#### Scenario: SQL 语法错误
- **WHEN** SQL 查询包含语法错误
- **THEN** SQLAlchemy 抛出 `sqlalchemy.exc.ProgrammingError`
- **THEN** 主流程捕获异常
- **THEN** 系统输出错误信息:"SQL 查询错误: {error}"
- **THEN** 系统退出并返回状态码 2
---
### Requirement: 数据格式转换
系统 SHALL 将查询结果转换为 backtesting 库要求的格式。
#### Scenario: 设置日期索引
- **WHEN** DataFrame 加载完成
- **THEN** 系统调用 `df.set_index('trade_date', inplace=True)`
- **THEN** DataFrame 的索引 SHALL 为 DatetimeIndex
- **THEN** 索引 SHALL 不再是数值索引
- **THEN** backtesting 库 SHALL 能正确处理日期范围
#### Scenario: 列名格式化
- **WHEN** DataFrame 加载完成
- **THEN** 列名 SHALL 为 ['Open', 'High', 'Low', 'Close', 'Volume', 'factor']
- **THEN** 列名 SHALL 遵循 backtesting 库要求(首字母大写)
- **THEN** 列名 SHALL 与 SQL 查询中的别名一致
#### Scenario: 数据验证
- **WHEN** 系统准备返回 DataFrame
- **THEN** 系统验证 DataFrame 包含必需列
- **THEN** 系统验证 'Open', 'High', 'Low', 'Close', 'Volume' 列存在
- **THEN** 系统验证索引为 DatetimeIndex
- **WHEN** 验证失败
- **THEN** 系统抛出 ValueError: "数据格式不符合要求"
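设置日期索引与格式校验两步可以合成一个小函数示意(`to_backtesting_frame` 为示意用的假设函数名,校验失败时抛出规格要求的 ValueError):

```python
import pandas as pd

def to_backtesting_frame(df: pd.DataFrame) -> pd.DataFrame:
    """设置 trade_date 为索引并校验 backtesting 所需列(示意)。"""
    df = df.set_index("trade_date")
    required = ["Open", "High", "Low", "Close", "Volume"]
    missing = [c for c in required if c not in df.columns]
    if missing or not isinstance(df.index, pd.DatetimeIndex):
        raise ValueError("数据格式不符合要求")
    return df

raw = pd.DataFrame({
    "trade_date": pd.to_datetime(["2024-01-02", "2024-01-03"]),
    "Open": [10.0, 10.2], "High": [10.5, 10.6], "Low": [9.8, 10.0],
    "Close": [10.2, 10.4], "Volume": [1e6, 1.2e6], "factor": [1.0, 1.0],
})
df = to_backtesting_frame(raw)
print(isinstance(df.index, pd.DatetimeIndex))  # True
```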
---
### Requirement: 数据清理
系统 SHALL 清理数据以确保回测质量。
#### Scenario: 删除 NULL 值行
- **WHEN** DataFrame 包含 NULL 或 NaN 值
- **THEN** 系统调用 `df.dropna()` 删除
- **THEN** 任何包含 NaN 的行 SHALL 被删除
- **THEN** 返回的 DataFrame SHALL 不包含 NULL 值
#### Scenario: 数据完整性检查
- **WHEN** DataFrame 加载完成
- **THEN** 系统检查 trade_date 连续性
- **THEN** 系统检查无重复日期
- **WHEN** 发现异常
- **THEN** 系统输出警告:"数据存在异常: {detail}"
#### Scenario: 最小数据量验证
- **WHEN** DataFrame 行数少于 10
- **THEN** 系统输出错误:"数据不足,至少需要 10 天数据"
- **THEN** 系统抛出 ValueError
- **THEN** 主流程捕获并退出
---
### Requirement: 资源管理
系统 SHALL 正确管理数据库连接和内存资源。
#### Scenario: 引擎创建和清理
- **WHEN** 系统开始数据加载
- **THEN** 系统创建 SQLAlchemy 引擎对象
- **THEN** 系统使用引擎执行查询
- **WHEN** 查询完成
- **THEN** 系统调用 `engine.dispose()` 关闭连接池
- **THEN** 系统释放所有数据库连接
#### Scenario: 异常情况下的资源清理
- **WHEN** 查询过程中抛出异常
- **THEN** 系统在 finally 块中调用 `engine.dispose()`
- **THEN** 所有连接 SHALL 被正确关闭
- **THEN** 系统不会泄漏数据库连接
---
### Requirement: 错误处理和日志
系统 SHALL 提供清晰的错误信息和调试支持。
#### Scenario: 连接错误信息
- **WHEN** 数据库连接失败
- **THEN** 错误信息 SHALL 包含数据库主机和端口
- **THEN** 错误信息 SHALL 区分网络错误和认证错误
- **THEN** 系统提示用户检查凭证和网络连接
#### Scenario: 查询错误信息
- **WHEN** SQL 查询失败
- **THEN** 错误信息 SHALL 包含失败的 SQL 语句
- **THEN** 错误信息 SHALL 包含数据库返回的错误详情
- **THEN** 系统提示用户检查表结构和数据
#### Scenario: 数据格式错误信息
- **WHEN** 返回的 DataFrame 不符合要求
- **THEN** 错误信息 SHALL 列出缺失的列
- **THEN** 错误信息 SHALL 提示期望的格式
- **THEN** 系统建议用户检查数据库表结构
---
### Requirement: 函数接口
`load_data_from_db` 函数 SHALL 提供清晰的调用接口。
#### Scenario: 函数签名
- **WHEN** 主流程调用 `load_data_from_db(code, start_date, end_date)`
- **THEN** 函数接收三个字符串参数
- **THEN** `code` 为股票代码(如 '000001.SZ')
- **THEN** `start_date` 为开始日期(如 '2024-01-01')
- **THEN** `end_date` 为结束日期(如 '2025-12-31')
#### Scenario: 返回值
- **WHEN** 数据加载成功
- **THEN** 函数返回 pandas.DataFrame
- **THEN** DataFrame 索引为 DatetimeIndex(trade_date)
- **THEN** DataFrame 包含 ['Open', 'High', 'Low', 'Close', 'Volume', 'factor'] 列
#### Scenario: 异常抛出
- **WHEN** 数据加载失败
- **THEN** 函数 SHALL 抛出异常(不捕获)
- **THEN** 异常类型 SHALL 为 ValueError(业务逻辑错误)
- **THEN** 主流程负责捕获和处理异常
---
### Requirement: 性能考虑
系统 SHALL 优化数据加载性能以支持大数据集。
#### Scenario: 使用 pandas 向量化操作
- **WHEN** 执行复权计算
- **THEN** 计算 SHALL 使用 pandas 向量化操作
- **THEN** 不使用循环逐行计算
- **THEN** 10 年数据(约 2500 行) SHALL 在 1 秒内加载
#### Scenario: 索引优化
- **WHEN** 设置 DataFrame 索引
- **THEN** `set_index()` 操作 SHALL 高效(使用底层数组拷贝)
- **THEN** 日期索引 SHALL 支持快速范围查询
#### Scenario: 内存管理
- **WHEN** 加载大数据集
- **THEN** 系统 SHALL 及时调用 `engine.dispose()` 释放连接
- **THEN** DataFrame SHALL 使用 pandas 内部优化存储
- **THEN** 内存占用 SHALL 合理(10 年数据约几 MB)


@@ -0,0 +1,134 @@
## ADDED Requirements
### Requirement: MACD趋势跟踪策略
系统应提供基于MACD指标的趋势跟踪交易策略包括MACD计算、EMA200趋势过滤、以及基于金叉/死叉的交易信号生成。
#### Scenario: 策略文件加载
- **WHEN** 用户在命令行指定`--strategy-file strategies/macd_strategy.py`
- **THEN** backtest.py成功加载策略文件并执行回测
- **AND** 策略类正确注册所有技术指标到backtesting框架
- **AND** 策略逻辑根据MACD金叉/死叉和EMA200位置生成交易信号
#### Scenario: MACD指标计算
- **WHEN** 调用`calculate_indicators(data)`函数,传入包含[Open, High, Low, Close, Volume, factor]的DataFrame
- **THEN** 函数使用ta-lib计算以下指标并添加到DataFrame:
  - MACD线(DIF): 10日EMA - 20日EMA
  - MACD信号线(DEA): MACD线的9日EMA
  - MACD柱状图(Histogram): MACD线 - 信号线
  - EMA200: 200日指数移动平均线
- **AND** 返回包含原始数据和所有新增指标的DataFrame
- **AND** 指标名称使用ta-lib返回的默认列名(macd、macdsignal、macdhist)
#### Scenario: 策略初始化
- **WHEN** backtesting框架初始化MacdTrendStrategy策略类
- **THEN** 调用`init()`方法
- **AND** 在`init()`中通过`self.I()`注册以下指标到backtesting框架:
  - MACD线:`self.data.MACD_10_20_9`
  - MACD信号线:`self.data.MACDs_10_20_9`
  - EMA200:`self.data.EMA_200`
- **AND** 所有参数(fast_period=10、slow_period=20、signal_period=9)在策略类中定义为类变量
- **AND** 注册的指标可直接在`next()`方法中访问
#### Scenario: MACD金叉买入信号
- **WHEN** 策略检测到MACD线上穿信号线金叉
- **AND** 当前价格高于EMA200趋势线确认上升趋势
- **AND** 当前无持仓或持仓方向与买入信号相反
- **THEN** 策略平掉现有仓位(如有)
- **AND** 策略开多仓(`self.buy()`
- **AND** 在趋势市场下捕捉上涨机会
#### Scenario: EMA200跌破卖出信号
- **WHEN** 策略检测到当前价格跌破EMA200趋势线
- **AND** 当前持有多仓
- **THEN** 策略平掉多仓(`self.position.close()`
- **AND** 不开空仓(仅平仓,避免逆势交易)
- **AND** 在趋势转向时及时止损保护利润
#### Scenario: MACD死叉卖出信号
- **WHEN** 策略检测到MACD线下穿信号线死叉
- **AND** 当前持有多仓
- **THEN** 策略平掉多仓(`self.position.close()`
- **AND** 不开空仓
- **AND** 在动量减弱时退出持仓
#### Scenario: EMA200下方不开仓
- **WHEN** 当前价格低于EMA200趋势线
- **AND** 检测到MACD金叉信号
- **THEN** 策略不执行买入操作
- **AND** 避免在下跌趋势中逆势交易
- **AND** 等待价格回到EMA200上方再考虑入场
#### Scenario: 空仓状态处理
- **WHEN** 策略当前无持仓
- **AND** 检测到卖出信号MACD死叉或EMA200跌破
- **THEN** 策略跳过卖出信号
- **AND** 避免重复平仓导致错误
#### Scenario: 震荡市场过滤
- **WHEN** 市场处于震荡状态(价格围绕EMA200波动)
- **AND** MACD产生频繁的假金叉/死叉信号
- **THEN** EMA200趋势过滤减少交易频率
- **AND** 避免在无明确趋势时频繁交易
- **AND** 等待趋势明确后再入场
#### Scenario: 趋势市场顺势交易
- **WHEN** 市场处于明确上升趋势(价格持续在EMA200上方)
- **AND** MACD金叉确认动量增强
- **THEN** 策略及时入场捕捉上涨机会
- **AND** 顺势交易提高胜率
- **AND** EMA200确保不在下跌趋势中买入
#### Scenario: 参数配置
- **WHEN** 用户查看策略代码
- **THEN** 策略参数清晰定义为类变量:
  - `fast_period = 10`(MACD快线周期)
  - `slow_period = 20`(MACD慢线周期)
  - `signal_period = 9`(MACD信号线周期)
- **AND** 参数无需通过命令行传递
- **AND** 参数可直接在代码中修改以适配不同市场环境
#### Scenario: 依赖管理
- **WHEN** 安装项目依赖
- **THEN** ta-lib库已被正确安装(手动安装)
- **AND** `uv run python -c "import talib"`成功执行
- **AND** 策略文件可正常运行
- **AND** 如ta-lib未安装,给出明确错误提示
#### Scenario: 回测兼容性
- **WHEN** 使用现有backtest.py框架
- **THEN** 框架通过`load_strategy()`函数成功加载macd_strategy.py
- **AND** 调用`calculate_indicators()`预处理数据
- **AND** 初始化策略类并执行回测
- **AND** 回测流程与SMA策略完全一致
#### Scenario: 指标数据完整性
- **WHEN** backtesting调用`calculate_indicators(data)`
- **THEN** 返回的DataFrame包含所有必需列:
  - 原始列:[Open, High, Low, Close, Volume, factor]
  - MACD指标列:[MACD_10_20_9, MACDh_10_20_9, MACDs_10_20_9]
  - EMA趋势线列:[EMA_200]
- **AND** 无NaN值(除预热期外)
- **AND** 指标数据可用于策略决策和图表展示
#### Scenario: 预热期处理
- **WHEN** 数据长度不足以计算完整指标(前200天)
- **THEN** 指标值为NaN
- **AND** backtesting框架会自动跳过预热期
- **AND** 策略逻辑在有足够数据后才执行
- **AND** 避免因数据不足导致的错误信号


@@ -0,0 +1,225 @@
# Spec: Strategy Loading
## ADDED Requirements
### Requirement: 策略文件接口
策略文件 SHALL 提供两个必需的接口:指标计算函数和策略类获取函数。
#### Scenario: 标准策略文件结构
- **WHEN** 用户创建策略文件
- **THEN** 文件 SHALL 包含 `calculate_indicators(data)` 函数
- **THEN** 文件 SHALL 包含 `get_strategy()` 函数
- **THEN** 文件 SHALL 包含一个继承 `backtesting.Strategy` 的类
- **THEN** 所有三个组件 SHALL 在同一文件中
#### Scenario: calculate_indicators 函数签名
- **WHEN** 主流程调用 `calculate_indicators(data)`
- **THEN** 函数接收一个参数:data (pandas.DataFrame)
- **THEN** 函数返回一个 pandas.DataFrame
- **THEN** 返回的 DataFrame SHALL 包含原始列和新增的指标列
- **THEN** 函数 SHALL 修改输入的 DataFrame(不创建副本)
#### Scenario: get_strategy 函数签名
- **WHEN** 主流程调用 `get_strategy()`
- **THEN** 函数不接收参数
- **THEN** 函数返回一个类对象
- **THEN** 返回的类 SHALL 继承自 `backtesting.Strategy`
---
### Requirement: 指标计算函数
`calculate_indicators` 函数 SHALL 计算策略所需的技术指标,并将结果添加到 DataFrame 中。
#### Scenario: SMA 指标计算
- **WHEN** 策略需要简单移动平均线指标
- **THEN** 函数使用 `data['Close'].rolling(window=N).mean()` 计算
- **THEN** 函数将结果存储为 `data['smaN']`
- **THEN** N 为具体的周期(如 10, 30, 60, 120)
#### Scenario: MACD 指标计算
- **WHEN** 策略需要 MACD 指标
- **THEN** 函数使用 `data['Close'].ewm(span=12).mean()` 计算 EMA12
- **THEN** 函数使用 `data['Close'].ewm(span=26).mean()` 计算 EMA26
- **THEN** 函数计算 MACD = EMA12 - EMA26
- **THEN** 函数计算 Signal = MACD.ewm(span=9).mean()
- **THEN** 函数将结果存储为 `data['macd']`, `data['macd_signal']`, `data['macd_hist']`
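上述 MACD 计算步骤可以直接用 pandas 的 `ewm` 实现(`adjust=False` 为常见的递推式 EMA 写法,规格未明确指定,属于假设;`add_macd` 为示意用的假设函数名):

```python
import pandas as pd

def add_macd(data: pd.DataFrame, fast=12, slow=26, signal=9) -> pd.DataFrame:
    """按规格步骤计算 MACD、Signal、Histogram 并写回 DataFrame。"""
    ema_fast = data["Close"].ewm(span=fast, adjust=False).mean()   # EMA12
    ema_slow = data["Close"].ewm(span=slow, adjust=False).mean()   # EMA26
    data["macd"] = ema_fast - ema_slow                             # MACD = EMA12 - EMA26
    data["macd_signal"] = data["macd"].ewm(span=signal, adjust=False).mean()
    data["macd_hist"] = data["macd"] - data["macd_signal"]
    return data

data = pd.DataFrame({"Close": [float(i) for i in range(1, 61)]})
add_macd(data)
print({"macd", "macd_signal", "macd_hist"} <= set(data.columns))  # True
```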
#### Scenario: RSI 指标计算
- **WHEN** 策略需要 RSI 指标
- **THEN** 函数计算价格变化 delta = data['Close'].diff()
- **THEN** 函数计算 gain = delta.where(delta > 0, 0)
- **THEN** 函数计算 loss = -delta.where(delta < 0, 0)
- **THEN** 函数计算平均收益和平均损失
- **THEN** 函数计算 RS = average_gain / average_loss
- **THEN** 函数计算 RSI = 100 - (100 / (1 + RS))
- **THEN** 函数将结果存储为 `data['rsi']`
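RSI 的计算步骤同样可以逐行对应到 pandas(平均收益/损失这里用简单滚动均值,规格未指定 Wilder 平滑,属于假设;`add_rsi` 为示意用的假设函数名):

```python
import pandas as pd

def add_rsi(data: pd.DataFrame, period: int = 14) -> pd.DataFrame:
    """按规格步骤计算 RSI 并写回 data['rsi']。"""
    delta = data["Close"].diff()
    gain = delta.where(delta > 0, 0.0)    # 仅保留上涨幅度
    loss = -delta.where(delta < 0, 0.0)   # 仅保留下跌幅度(取正)
    avg_gain = gain.rolling(window=period).mean()
    avg_loss = loss.rolling(window=period).mean()
    rs = avg_gain / avg_loss
    data["rsi"] = 100 - (100 / (1 + rs))
    return data

data = pd.DataFrame({"Close": [10, 11, 12, 11, 12, 13, 12, 13, 14, 13,
                               14, 15, 14, 15, 16, 15]})
add_rsi(data)
print(data["rsi"].dropna().between(0, 100).all())  # True
```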
#### Scenario: 多指标计算
- **WHEN** 策略需要多个技术指标
- **THEN** 函数按顺序计算每个指标
- **THEN** 函数将所有指标列添加到 DataFrame
- **THEN** DataFrame 最终包含原始列 + 所有指标列
- **THEN** 计算顺序 SHALL 遵循指标间的依赖关系(如 MACD 依赖 EMA)
#### Scenario: 指标列命名约定
- **WHEN** 函数添加指标列到 DataFrame
- **THEN** 列名 SHALL 使用小写和下划线(如 `sma10`, `macd_signal`)
- **THEN** 列名 SHALL 与策略类的 `init()` 方法中引用的名称一致
- **THEN** 列名 SHALL 避免与原始列冲突
---
### Requirement: 策略类定义
策略类 SHALL 继承 `backtesting.Strategy`,并实现 `init()``next()` 方法。
#### Scenario: 策略类继承
- **WHEN** 用户定义策略类
- **THEN** 类 SHALL 显式继承 `backtesting.Strategy`
- **THEN** 类 SHALL 定义类属性作为可配置参数
- **THEN** 类名 SHALL 使用大驼峰命名(如 `SmaCross`, `MacdStrategy`
#### Scenario: init 方法实现
- **WHEN** Backtest 框架初始化策略时
- **THEN** 系统调用策略类的 `init()` 方法
- **THEN** `init()` 方法 SHALL 使用 `self.I()` 注册指标
- **THEN** `self.I(lambda x: x, self.data.column_name)` SHALL 引用 DataFrame 中的指标列
- **THEN** `init()` 方法 SHALL 不执行数据计算
#### Scenario: next 方法实现 - 金叉买入
- **WHEN** 短期均线上穿长期均线(金叉)
- **THEN** `next()` 方法 SHALL 调用 `self.position.close()` 平仓
- **THEN** `next()` 方法 SHALL 调用 `self.buy()` 开多仓
- **THEN** `next()` 方法 SHALL 使用 `crossover()` 函数检测交叉
#### Scenario: next 方法实现 - 死叉卖出
- **WHEN** 短期均线下穿长期均线(死叉)
- **THEN** `next()` 方法 SHALL 调用 `self.position.close()` 平仓
- **THEN** `next()` 方法 SHALL 调用 `self.sell()` 开空仓
- **THEN** `next()` 方法 SHALL 使用 `crossover()` 函数检测交叉
#### Scenario: next 方法实现 - 避免重复开仓
- **WHEN** 策略已持有多仓,且买入信号触发
- **THEN** `next()` 方法 SHALL 先调用 `self.position.close()`
- **THEN** `next()` 方法 SHALL 再调用 `self.buy()`
- **THEN** 系统 SHALL 自动处理仓位管理(不重复开仓)
#### Scenario: 可配置策略参数
- **WHEN** 策略类定义类属性
- **THEN** 类属性 SHALL 作为策略参数(如 `short_period = 10`
- **THEN** Backtest 框架 SHALL 自动访问这些属性
- **THEN** 参数 SHALL 可通过 Backtest 构造函数覆盖
---
### Requirement: 策略类指标引用
策略类的 `init()` 方法 SHALL 正确引用 DataFrame 中计算好的指标列。
#### Scenario: 引用 SMA 指标
- **WHEN** DataFrame 包含 `sma10``sma30`
- **THEN** `init()` 方法注册 `self.sma_short = self.I(lambda x: x, self.data.sma10)`
- **THEN** `init()` 方法注册 `self.sma_long = self.I(lambda x: x, self.data.sma30)`
- **THEN** `next()` 方法 SHALL 通过 `self.data.sma10``self.data.sma30` 访问指标
#### Scenario: 引用 MACD 指标
- **WHEN** DataFrame 包含 `macd``macd_signal`
- **THEN** `init()` 方法注册 `self.macd = self.I(lambda x: x, self.data.macd)`
- **THEN** `init()` 方法注册 `self.signal = self.I(lambda x: x, self.data.macd_signal)`
- **THEN** `next()` 方法 SHALL 通过 `self.data.macd``self.data.macd_signal` 访问指标
#### Scenario: 引用 RSI 指标
- **WHEN** DataFrame 包含 `rsi`
- **THEN** `init()` 方法注册 `self.rsi = self.I(lambda x: x, self.data.rsi)`
- **THEN** `next()` 方法 SHALL 通过 `self.data.rsi` 访问指标
- **THEN** 策略逻辑 SHALL 使用 RSI 阈值生成信号(如 RSI > 70 超买)
#### Scenario: 指标列不存在
- **WHEN** 策略类引用的列名不存在于 DataFrame
- **THEN** Backtest 框架抛出 KeyError
- **THEN** 主流程捕获异常并输出错误信息:"指标列 {column} 不存在"
- **THEN** 系统退出并返回非零状态码
---
### Requirement: 动态加载机制
主流程 SHALL 使用 importlib 动态加载策略文件模块。
#### Scenario: 加载顶层策略文件
- **WHEN** 用户指定 `--strategy-file strategy.py`
- **THEN** 系统使用 `spec_from_file_location('strategy', 'strategy.py')` 创建规范
- **THEN** 系统使用 `module_from_spec(spec)` 创建模块对象
- **THEN** 系统使用 `spec.loader.exec_module(module)` 执行模块
- **THEN** 系统成功获取 `module.calculate_indicators``module.get_strategy`
#### Scenario: 加载子目录策略文件
- **WHEN** 用户指定 `--strategy-file strategies/macd_strategy.py`
- **THEN** 系统使用 `spec_from_file_location('strategies.macd_strategy', 'strategies/macd_strategy.py')`
- **THEN** 模块名使用点号分隔(反映目录结构)
- **THEN** 系统成功加载子目录中的策略模块
#### Scenario: 模块命名空间隔离
- **WHEN** 系统动态加载多个策略文件
- **THEN** 每个策略模块 SHALL 有独立的命名空间
- **THEN** 模块间 SHALL 不共享全局变量
- **THEN** 系统通过 `getattr(module, name)` 明确访问函数和类
#### Scenario: 策略文件导入错误
- **WHEN** 策略文件包含语法错误或导入错误
- **THEN** `exec_module()` 抛出 ImportError 或 SyntaxError
- **THEN** 主流程捕获异常
- **THEN** 系统输出错误信息:"策略文件 {file} 加载失败: {error}"
- **THEN** 系统退出并返回非零状态码
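加载流程可以用标准库 importlib 勾勒如下(示意;此处为演示向临时目录写入一个最小策略文件,`load_strategy_module` 为假设的辅助函数名):

```python
import importlib.util
import os
import tempfile
from pathlib import Path

def load_strategy_module(path: str):
    # 模块名此处简单取文件名;子目录文件可按规范使用点号分隔名(如 strategies.macd_strategy)
    spec = importlib.util.spec_from_file_location(Path(path).stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # 语法错误或导入错误会在这一步抛出
    return module

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "demo_strategy.py")
    Path(p).write_text("def calculate_indicators(data):\n    return data\n", encoding="utf-8")
    mod = load_strategy_module(p)

print(callable(mod.calculate_indicators))  # True
```

每次调用都会得到独立的模块对象,因此多个策略文件天然满足命名空间隔离的要求。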
---
### Requirement: 策略接口验证
主流程 SHALL 验证策略文件是否符合接口要求。
#### Scenario: 验证 calculate_indicators 存在
- **WHEN** 系统加载策略模块
- **THEN** 系统使用 `hasattr(module, 'calculate_indicators')` 检查函数
- **WHEN** 函数不存在
- **THEN** 系统抛出 AttributeError
- **THEN** 主流程捕获并输出:"策略文件 {file} 缺少 calculate_indicators 函数"
#### Scenario: 验证 get_strategy 存在
- **WHEN** 系统加载策略模块
- **THEN** 系统使用 `hasattr(module, 'get_strategy')` 检查函数
- **WHEN** 函数不存在
- **THEN** 系统抛出 AttributeError
- **THEN** 主流程捕获并输出:"策略文件 {file} 缺少 get_strategy 函数"
#### Scenario: 验证 get_strategy 返回类
- **WHEN** 系统调用 `get_strategy()`
- **THEN** 系统使用 `isinstance(returned, type)` 检查返回值
- **WHEN** 返回值不是类
- **THEN** 系统抛出 TypeError
- **THEN** 主流程捕获并输出:"get_strategy() 必须返回一个类"
#### Scenario: 验证策略类继承
- **WHEN** 系统获取策略类
- **THEN** 系统使用 `issubclass(strategy_class, backtesting.Strategy)` 检查继承
- **WHEN** 策略类未继承 `backtesting.Strategy`
- **THEN** 系统抛出 TypeError
- **THEN** 主流程捕获并输出:"策略类必须继承 backtesting.Strategy"
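四条验证可以合并为一个检查函数(示意;`FakeStrategy` 为演示用占位基类,实际应传入 `backtesting.Strategy`):

```python
import types

class FakeStrategy:
    """演示用占位基类,实际使用时替换为 backtesting.Strategy"""

def validate_strategy_module(module, base_strategy_cls=FakeStrategy):
    # 依次执行规范要求的四项检查,任一不满足即抛出对应异常
    for fn in ("calculate_indicators", "get_strategy"):
        if not hasattr(module, fn):
            raise AttributeError(f"策略文件缺少 {fn} 函数")
    strategy_class = module.get_strategy()
    if not isinstance(strategy_class, type):
        raise TypeError("get_strategy() 必须返回一个类")
    if not issubclass(strategy_class, base_strategy_cls):
        raise TypeError("策略类必须继承 backtesting.Strategy")
    return strategy_class

class Good(FakeStrategy):
    pass

good_mod = types.SimpleNamespace(calculate_indicators=lambda d: d, get_strategy=lambda: Good)
bad_mod = types.SimpleNamespace(calculate_indicators=lambda d: d)

print(validate_strategy_module(good_mod) is Good)  # True
try:
    validate_strategy_module(bad_mod)
    caught = None
except AttributeError as e:
    caught = str(e)
```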
---
### Requirement: 策略文件示例
系统 SHALL 提供策略模板文件作为开发者参考。
#### Scenario: 提供策略模板
- **WHEN** 用户查看 strategy.py 文件
- **THEN** 文件 SHALL 包含完整的策略示例SMA 双均线交叉)
- **THEN** 文件 SHALL 包含清晰的注释说明每个接口的用途
- **THEN** 文件 SHALL 包含代码示例指标计算函数、get_strategy、策略类
#### Scenario: 策略文件文档
- **WHEN** 策略文件开头有文档字符串
- **THEN** 文档 SHALL 描述策略逻辑
- **THEN** 文档 SHALL 列出需要的指标
- **THEN** 文档 SHALL 说明参数含义(如 `short_period`, `long_period`
#### Scenario: 策略参数说明
- **WHEN** 策略类定义类属性
- **THEN** 每个属性 SHALL 有注释说明(如 `short_period = 10 # 短期均线周期`
- **THEN** 参数 SHALL 使用有意义的名称(不是 param1, param2

pyproject.toml

@@ -4,13 +4,22 @@ version = "0.1.0"
description = "Stock analysis"
requires-python = ">=3.14"
dependencies = [
"adata>=2.9.5",
"akshare>=1.18.20",
"backtesting~=0.6.5",
"duckdb>=1.4.3",
"baostock>=0.8.9",
"duckdb>=1.4.4",
"jupyter~=1.1.1",
"jupyter-bokeh>=4.0.5",
"matplotlib~=3.10.8",
"mplfinance>=0.12.10b0",
"pandas~=2.3.3",
"pandas-stubs~=2.3.3",
"peewee~=3.19.0",
"psycopg2-binary~=2.9.11",
"sqlalchemy>=2.0.46",
"ta-lib>=0.6.8",
"tabulate>=0.9.0",
"tqdm>=4.67.1",
"tushare>=1.4.24",
]

sql/initial.sql Normal file

@@ -0,0 +1,94 @@
CREATE TABLE stock
(
code varchar not null,
name varchar not null,
fullname varchar,
industry varchar,
listed_date date,
market varchar,
exchange varchar,
primary key (code)
);
CREATE TABLE daily
(
code varchar not null,
trade_date date not null,
open double,
close double,
high double,
low double,
previous_close double,
turnover double,
volume integer,
price_change_amount double,
factor double,
primary key (code, trade_date)
);
CREATE TABLE finance_indicator
(
code varchar not null,
year integer not null,
accounts_payable double,
accounts_payable_to_total_assets_ratio double,
accounts_receivable double,
accounts_receivable_to_total_assets_ratio double,
accounts_receivable_turnover double,
capital_surplus double,
cash_and_cash_equivalents double,
cash_and_cash_equivalents_to_total_assets_ratio double,
cash_flow_adequacy_ratio double,
cash_flow_from_financing_activities double,
cash_flow_from_investing_activities double,
cash_flow_from_operating_activities double,
cash_flow_ratio double,
cash_reinvestment_ratio double,
current_assets double,
current_assets_to_total_assets_ratio double,
current_liabilities double,
current_liabilities_to_total_assets_ratio double,
current_liabilities_to_total_liabilities_ratio double,
current_ratio double,
days_accounts_receivable_turnover double,
days_fixed_assets_turnover double,
days_inventory_turnover double,
days_total_assets_turnover double,
earnings_per_share double,
fixed_assets double,
fixed_assets_to_total_assets_ratio double,
fixed_assets_turnover double,
goodwill double,
goodwill_to_total_assets_ratio double,
inventory double,
inventory_to_total_assets_ratio double,
inventory_turnover double,
liabilities_to_total_assets_ratio double,
long_term_funds_to_fixed_assets_ratio double,
long_term_liabilities double,
long_term_liabilities_to_total_assets_ratio double,
long_term_liabilities_to_total_liabilities_ratio double,
net_cash_flow_from_operating_activities double,
net_profit double,
net_profit_margin double,
operating_cost double,
operating_expenses double,
operating_gross_profit_margin double,
operating_profit double,
operating_profit_margin double,
operating_revenue double,
operating_safety_margin_ratio double,
quick_ratio double,
return_on_assets double,
return_on_equity double,
shareholders_equity double,
shareholders_equity_to_total_assets_ratio double,
surplus_reserve double,
total_assets double,
total_assets_turnover double,
total_liabilities double,
total_share_capital double,
undistributed_profit double,
primary key (code, year)
);

strategies/macd_strategy.py Normal file

@@ -0,0 +1,125 @@
"""
MACD 趋势跟踪策略
策略逻辑:
- 当 MACD 线上穿信号线时 (金叉),且价格 > EMA 时,买入
- 当 MACD 线下穿信号线时 (死叉),或价格 < EMA 时,卖出
指标计算:
- MACD(10, 20, 9): 快线 10 日,慢线 20 日,信号线 9 日
- EMA: 120 日指数移动平均线(趋势确认)
参数选择理由:
- 快线 10: 比标准 12 更敏感,适应 A 股较高波动性
- 慢线 20: 比标准 26 更快响应,同时保持趋势跟踪稳定性
- 信号线 9: 保持标准,避免信号过于频繁
- EMA: 被广泛认可为牛熊分界线,避免逆势交易
趋势过滤:
- EMA 上方: 确认为上升趋势,允许开多仓
- EMA 下方: 确认为下降趋势,不开多仓,强制平仓
Author: Sisyphus
Date: 2025-01-27
"""
from backtesting import Strategy
from backtesting.lib import crossover
def calculate_indicators(data):
"""
计算策略所需的技术指标
使用 ta-lib 库计算 MACD 和 EMA 指标
参数:
data: DataFrame, 包含 [Open, High, Low, Close, Volume, factor]
返回:
DataFrame, 添加了指标列:
- macd: MACD 线 (DIF)
- signal: MACD 信号线 (DEA)
- hist: MACD 柱状图 (Histogram)
- ema: 120 日指数移动平均线
"""
data = data.copy()
# 计算 MACD 指标 (10, 20, 9)
# talib.MACD 返回三个值: (macd, macdsignal, macdhist)
macd, macdsignal, macdhist = talib.MACD(data["Close"], fastperiod=10, slowperiod=20, signalperiod=9)
data["macd"] = macd
data["signal"] = macdsignal
data["hist"] = macdhist
# 计算 EMA 趋势线
data["ema"] = talib.EMA(data["Close"], timeperiod=120)
return data
def get_strategy():
"""
返回策略类
返回:
MacdTrendStrategy 类
"""
return MacdTrendStrategy
class MacdTrendStrategy(Strategy):
"""
MACD 趋势跟踪策略
结合 MACD 金叉/死叉信号和 EMA 趋势过滤
参数:
fast_period: MACD 快线周期 (默认: 10)
slow_period: MACD 慢线周期 (默认: 20)
signal_period: MACD 信号线周期 (默认: 9)
"""
# 可配置参数
fast_period = 10
slow_period = 20
signal_period = 9
def init(self):
"""
初始化策略
注册指标到 backtesting 框架
"""
# 注册 MACD 线
self.macd = self.I(lambda x: x, self.data.macd)
# 注册 MACD 信号线
self.signal = self.I(lambda x: x, self.data.signal)
# 注册 EMA 趋势线
self.ema = self.I(lambda x: x, self.data.ema)
def next(self):
"""
每个时间步的决策逻辑
买入条件:
- MACD 金叉 (MACD 线上穿信号线)
- 价格 > EMA (确认上升趋势)
卖出条件:
- MACD 死叉 (MACD 线下穿信号线)
- 或价格 < EMA (趋势转向,强制平仓)
"""
# 买入条件: MACD 金叉 AND 价格 > EMA
if crossover(self.macd, self.signal) and self.data.Close[-1] > self.ema[-1]:
self.buy() # 开多仓
# 卖出条件: MACD 死叉 OR 价格 < EMA
elif self.position.size > 0 and (crossover(self.signal, self.macd) or self.data.Close[-1] < self.ema[-1]):
self.position.close() # 平掉多仓
# 导入 talib(calculate_indicators 在模块加载完成后才会被调用,因此文件末尾导入亦可生效;更惯用的做法是放在文件顶部)
import talib


@@ -0,0 +1,94 @@
"""
SMA 双均线交叉策略
策略逻辑:
- 当短期均线上穿长期均线时 (金叉),买入
- 当短期均线下穿长期均线时 (死叉),卖出
指标计算 (使用 ta-lib):
- SMA10: 10 日简单移动平均线
- SMA30: 30 日简单移动平均线
- SMA60: 60 日简单移动平均线
- SMA120: 120 日简单移动平均线
"""
from backtesting import Strategy
from backtesting.lib import crossover
def calculate_indicators(data):
"""
计算策略所需的技术指标
使用 ta-lib 库计算 SMA 指标
参数:
data: DataFrame, 包含 [Open, High, Low, Close, Volume, factor]
返回:
DataFrame, 添加了指标列:
- sma10: 10 日简单移动平均线
- sma30: 30 日简单移动平均线
- sma60: 60 日简单移动平均线
- sma120: 120 日简单移动平均线
"""
data = data.copy()
# 计算不同周期的移动平均线
data["sma10"] = talib.SMA(data["Close"], timeperiod=10)
data["sma30"] = talib.SMA(data["Close"], timeperiod=30)
data["sma60"] = talib.SMA(data["Close"], timeperiod=60)
data["sma120"] = talib.SMA(data["Close"], timeperiod=120)
return data
def get_strategy():
"""
返回策略类
返回:
SmaCross 类
"""
return SmaCross
class SmaCross(Strategy):
"""
SMA 双均线交叉策略
参数:
short_period: 短期均线周期 (默认: 10)
long_period: 长期均线周期 (默认: 30)
"""
# 可配置参数
short_period = 10
long_period = 30
def init(self):
"""
初始化策略
注册指标到 backtesting 框架
"""
self.sma_short = self.I(lambda x: x, self.data.sma10)
self.sma_long = self.I(lambda x: x, self.data.sma30)
def next(self):
"""
每个时间步的决策逻辑
金叉: 短期均线上穿长期均线 → 买入
死叉: 短期均线下穿长期均线 → 卖出
"""
# 金叉:短期均线上穿长期均线
if crossover(self.data.sma10, self.data.sma30):
self.buy() # 开多仓
# 死叉:短期均线下穿长期均线
elif self.position.size > 0 and crossover(self.data.sma30, self.data.sma10):
self.position.close() # 平掉多仓
# 导入 talib(calculate_indicators 在模块加载完成后才会被调用,因此文件末尾导入亦可生效;更惯用的做法是放在文件顶部)
import talib

uv.lock generated

@@ -2,6 +2,57 @@ version = 1
revision = 3
requires-python = ">=3.14"
[[package]]
name = "adata"
version = "2.9.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "beautifulsoup4" },
{ name = "pandas" },
{ name = "py-mini-racer" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0e/da/1eb2f05b14e4d41edcc017b9d6b428f30712d0d046f1b85cd54201b423a5/adata-2.9.5.tar.gz", hash = "sha256:b398fd885ee31baca41b8a141c586d3430ef0fec633f6088a830429437210cf6", size = 188823, upload-time = "2025-12-26T11:09:29.759Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1d/9f/51a65fb438febc0ab38f493a837c5aeb9135dfee2e2c1224920038bbc686/adata-2.9.5-py3-none-any.whl", hash = "sha256:f9dc5d276f8771cf5a5f11fb81c6d97a00d188e20cfcef67022f210c8b23cbf1", size = 229158, upload-time = "2025-12-26T11:09:23.007Z" },
]
[[package]]
name = "akracer"
version = "0.0.14"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/1e/c6/f38feed5b961d73e1b4cb049fdb45338356e0f5b828b230c00d0e51f3137/akracer-0.0.14.tar.gz", hash = "sha256:e084c14bf6d9a02d5da375e3af1cba3d46f103aa1cf3a2010593b3e95bf1c29a", size = 10047643, upload-time = "2025-09-10T13:47:34.811Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/53/cb/1041355b14cd4b76ac082e8c676858f6eddb78f0ba37c59284adf36e5103/akracer-0.0.14-py3-none-any.whl", hash = "sha256:629eaccd0e1d18366804b797eb2692ed47bed0028f55b5a5af3cc277d521df04", size = 10076442, upload-time = "2025-09-10T13:47:29.061Z" },
]
[[package]]
name = "akshare"
version = "1.18.20"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "akracer", marker = "sys_platform == 'linux'" },
{ name = "beautifulsoup4" },
{ name = "curl-cffi" },
{ name = "decorator" },
{ name = "html5lib" },
{ name = "jsonpath" },
{ name = "lxml" },
{ name = "mini-racer", marker = "sys_platform != 'linux'" },
{ name = "openpyxl" },
{ name = "pandas" },
{ name = "py-mini-racer", marker = "sys_platform == 'linux'" },
{ name = "requests" },
{ name = "tabulate" },
{ name = "tqdm" },
{ name = "urllib3" },
{ name = "xlrd" },
]
sdist = { url = "https://files.pythonhosted.org/packages/da/e0/48c0d7fc2527787b3179960454037dbe5b8d3409aa00eab23748a34317be/akshare-1.18.20.tar.gz", hash = "sha256:f3797d454fd2bc9e75f85e24abdd2af2c29989d4f89379b3385998bbf1464d16", size = 855384, upload-time = "2026-01-27T14:35:25.261Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e6/b4/2743787e5366eb281b966f8c3fcc85d6e3a8456cefbac27d30ca7baafedd/akshare-1.18.20-py3-none-any.whl", hash = "sha256:9ba6cb3a17ee4cf957cf81e01cec59d55962a3fd867ab669d151a213bb5a9fc3", size = 1074968, upload-time = "2026-01-27T14:35:23.937Z" },
]
[[package]]
name = "anyio"
version = "4.12.1"
@@ -129,6 +180,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b3/b6/cf57538b968c5caa60ee626ec8be1c31e420067d2a4cf710d81605356f8c/backtesting-0.6.5-py3-none-any.whl", hash = "sha256:8ac2fa500c8fd83dc783b72957b600653a72687986fe3ca86d6ef6c8b8d74363", size = 192105, upload-time = "2025-07-30T05:57:03.322Z" },
]
[[package]]
name = "baostock"
version = "0.8.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pandas" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ee/d5/0fb2c61f392f89b1655490acb17a02861f1f1c38e973c9fc6aa049e54401/baostock-0.8.9.tar.gz", hash = "sha256:8169cdbed14fa442ace63c59549bef3f92b0c3dd1df9e5d9069f7bd04a76b0da", size = 21876, upload-time = "2024-05-31T02:56:54.546Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b2/37/bbabac2d33723d71bd8dbd5e819d9cbe5dc1e031b7dd12ed7de8fa040816/baostock-0.8.9-py3-none-any.whl", hash = "sha256:7a51fb30cd6b4325f5517198e350dc2fffaaab2923cd132b9f747b8b73ae7303", size = 45923, upload-time = "2024-05-31T02:56:53.161Z" },
]
[[package]]
name = "beautifulsoup4"
version = "4.14.3"
@@ -180,6 +243,32 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f6/a8/877f306720bc114c612579c5af36bcb359026b83d051226945499b306b1a/bokeh-3.8.2-py3-none-any.whl", hash = "sha256:5e2c0d84f75acb25d60efb9e4d2f434a791c4639b47d685534194c4e07bd0111", size = 7207131, upload-time = "2026-01-06T00:20:04.917Z" },
]
[[package]]
name = "bs4"
version = "0.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "beautifulsoup4" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c9/aa/4acaf814ff901145da37332e05bb510452ebed97bc9602695059dd46ef39/bs4-0.0.2.tar.gz", hash = "sha256:a48685c58f50fe127722417bae83fe6badf500d54b55f7e39ffe43b798653925", size = 698, upload-time = "2024-01-17T18:15:47.371Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/51/bb/bf7aab772a159614954d84aa832c129624ba6c32faa559dfb200a534e50b/bs4-0.0.2-py2.py3-none-any.whl", hash = "sha256:abf8742c0805ef7f662dce4b51cca104cffe52b835238afc169142ab9b3fbccc", size = 1189, upload-time = "2024-01-17T18:15:48.613Z" },
]
[[package]]
name = "build"
version = "1.4.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "os_name == 'nt'" },
{ name = "packaging" },
{ name = "pyproject-hooks" },
]
sdist = { url = "https://files.pythonhosted.org/packages/42/18/94eaffda7b329535d91f00fe605ab1f1e5cd68b2074d03f255c7d250687d/build-1.4.0.tar.gz", hash = "sha256:f1b91b925aa322be454f8330c6fb48b465da993d1e7e7e6fa35027ec49f3c936", size = 50054, upload-time = "2026-01-08T16:41:47.696Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c5/0d/84a4380f930db0010168e0aa7b7a8fed9ba1835a8fbb1472bc6d0201d529/build-1.4.0-py3-none-any.whl", hash = "sha256:6a07c1b8eb6f2b311b96fcbdbce5dab5fe637ffda0fd83c9cac622e927501596", size = 24141, upload-time = "2026-01-08T16:41:46.453Z" },
]
[[package]]
name = "certifi"
version = "2026.1.4"
@@ -298,6 +387,29 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ae/8c/469afb6465b853afff216f9528ffda78a915ff880ed58813ba4faf4ba0b6/contourpy-1.3.3-cp314-cp314t-win_arm64.whl", hash = "sha256:b7448cb5a725bb1e35ce88771b86fba35ef418952474492cf7c764059933ff8b", size = 203831, upload-time = "2025-07-26T12:02:51.449Z" },
]
[[package]]
name = "curl-cffi"
version = "0.14.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "cffi" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9b/c9/0067d9a25ed4592b022d4558157fcdb6e123516083700786d38091688767/curl_cffi-0.14.0.tar.gz", hash = "sha256:5ffbc82e59f05008ec08ea432f0e535418823cda44178ee518906a54f27a5f0f", size = 162633, upload-time = "2025-12-16T03:25:07.931Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/aa/f0/0f21e9688eaac85e705537b3a87a5588d0cefb2f09d83e83e0e8be93aa99/curl_cffi-0.14.0-cp39-abi3-macosx_14_0_arm64.whl", hash = "sha256:e35e89c6a69872f9749d6d5fda642ed4fc159619329e99d577d0104c9aad5893", size = 3087277, upload-time = "2025-12-16T03:24:49.607Z" },
{ url = "https://files.pythonhosted.org/packages/ba/a3/0419bd48fce5b145cb6a2344c6ac17efa588f5b0061f212c88e0723da026/curl_cffi-0.14.0-cp39-abi3-macosx_15_0_x86_64.whl", hash = "sha256:5945478cd28ad7dfb5c54473bcfb6743ee1d66554d57951fdf8fc0e7d8cf4e45", size = 5804650, upload-time = "2025-12-16T03:24:51.518Z" },
{ url = "https://files.pythonhosted.org/packages/e2/07/a238dd062b7841b8caa2fa8a359eb997147ff3161288f0dd46654d898b4d/curl_cffi-0.14.0-cp39-abi3-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c42e8fa3c667db9ccd2e696ee47adcd3cd5b0838d7282f3fc45f6c0ef3cfdfa7", size = 8231918, upload-time = "2025-12-16T03:24:52.862Z" },
{ url = "https://files.pythonhosted.org/packages/7c/d2/ce907c9b37b5caf76ac08db40cc4ce3d9f94c5500db68a195af3513eacbc/curl_cffi-0.14.0-cp39-abi3-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:060fe2c99c41d3cb7f894de318ddf4b0301b08dca70453d769bd4e74b36b8483", size = 8654624, upload-time = "2025-12-16T03:24:54.579Z" },
{ url = "https://files.pythonhosted.org/packages/f2/ae/6256995b18c75e6ef76b30753a5109e786813aa79088b27c8eabb1ef85c9/curl_cffi-0.14.0-cp39-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:b158c41a25388690dd0d40b5bc38d1e0f512135f17fdb8029868cbc1993d2e5b", size = 8010654, upload-time = "2025-12-16T03:24:56.507Z" },
{ url = "https://files.pythonhosted.org/packages/fb/10/ff64249e516b103cb762e0a9dca3ee0f04cf25e2a1d5d9838e0f1273d071/curl_cffi-0.14.0-cp39-abi3-manylinux_2_28_i686.whl", hash = "sha256:1439fbef3500fb723333c826adf0efb0e2e5065a703fb5eccce637a2250db34a", size = 7781969, upload-time = "2025-12-16T03:24:57.885Z" },
{ url = "https://files.pythonhosted.org/packages/51/76/d6f7bb76c2d12811aa7ff16f5e17b678abdd1b357b9a8ac56310ceccabd5/curl_cffi-0.14.0-cp39-abi3-manylinux_2_34_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e7176f2c2d22b542e3cf261072a81deb018cfa7688930f95dddef215caddb469", size = 7969133, upload-time = "2025-12-16T03:24:59.261Z" },
{ url = "https://files.pythonhosted.org/packages/23/7c/cca39c0ed4e1772613d3cba13091c0e9d3b89365e84b9bf9838259a3cd8f/curl_cffi-0.14.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:03f21ade2d72978c2bb8670e9b6de5260e2755092b02d94b70b906813662998d", size = 9080167, upload-time = "2025-12-16T03:25:00.946Z" },
{ url = "https://files.pythonhosted.org/packages/75/03/a942d7119d3e8911094d157598ae0169b1c6ca1bd3f27d7991b279bcc45b/curl_cffi-0.14.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:58ebf02de64ee5c95613209ddacb014c2d2f86298d7080c0a1c12ed876ee0690", size = 9520464, upload-time = "2025-12-16T03:25:02.922Z" },
{ url = "https://files.pythonhosted.org/packages/a2/77/78900e9b0833066d2274bda75cba426fdb4cef7fbf6a4f6a6ca447607bec/curl_cffi-0.14.0-cp39-abi3-win_amd64.whl", hash = "sha256:6e503f9a103f6ae7acfb3890c843b53ec030785a22ae7682a22cc43afb94123e", size = 1677416, upload-time = "2025-12-16T03:25:04.902Z" },
{ url = "https://files.pythonhosted.org/packages/5c/7c/d2ba86b0b3e1e2830bd94163d047de122c69a8df03c5c7c36326c456ad82/curl_cffi-0.14.0-cp39-abi3-win_arm64.whl", hash = "sha256:2eed50a969201605c863c4c31269dfc3e0da52916086ac54553cfa353022425c", size = 1425067, upload-time = "2025-12-16T03:25:06.454Z" },
]
[[package]]
name = "cycler"
version = "0.12.1"
@@ -340,17 +452,26 @@ wheels = [
[[package]]
name = "duckdb"
version = "1.4.3"
version = "1.4.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/7f/da/17c3eb5458af69d54dedc8d18e4a32ceaa8ce4d4c699d45d6d8287e790c3/duckdb-1.4.3.tar.gz", hash = "sha256:fea43e03604c713e25a25211ada87d30cd2a044d8f27afab5deba26ac49e5268", size = 18478418, upload-time = "2025-12-09T10:59:22.945Z" }
sdist = { url = "https://files.pythonhosted.org/packages/36/9d/ab66a06e416d71b7bdcb9904cdf8d4db3379ef632bb8e9495646702d9718/duckdb-1.4.4.tar.gz", hash = "sha256:8bba52fd2acb67668a4615ee17ee51814124223de836d9e2fdcbc4c9021b3d3c", size = 18419763, upload-time = "2026-01-26T11:50:37.68Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b6/f4/a38651e478fa41eeb8e43a0a9c0d4cd8633adea856e3ac5ac95124b0fdbf/duckdb-1.4.3-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:316711a9e852bcfe1ed6241a5f654983f67e909e290495f3562cccdf43be8180", size = 29042272, upload-time = "2025-12-09T10:58:51.826Z" },
{ url = "https://files.pythonhosted.org/packages/16/de/2cf171a66098ce5aeeb7371511bd2b3d7b73a2090603b0b9df39f8aaf814/duckdb-1.4.3-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:9e625b2b4d52bafa1fd0ebdb0990c3961dac8bb00e30d327185de95b68202131", size = 15419343, upload-time = "2025-12-09T10:58:54.439Z" },
{ url = "https://files.pythonhosted.org/packages/35/28/6b0a7830828d4e9a37420d87e80fe6171d2869a9d3d960bf5d7c3b8c7ee4/duckdb-1.4.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:130c6760f6c573f9c9fe9aba56adba0fab48811a4871b7b8fd667318b4a3e8da", size = 13748905, upload-time = "2025-12-09T10:58:56.656Z" },
{ url = "https://files.pythonhosted.org/packages/15/4d/778628e194d63967870873b9581c8a6b4626974aa4fbe09f32708a2d3d3a/duckdb-1.4.3-cp314-cp314-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:20c88effaa557a11267706b01419c542fe42f893dee66e5a6daa5974ea2d4a46", size = 18487261, upload-time = "2025-12-09T10:58:58.866Z" },
{ url = "https://files.pythonhosted.org/packages/c6/5f/87e43af2e4a0135f9675449563e7c2f9b6f1fe6a2d1691c96b091f3904dd/duckdb-1.4.3-cp314-cp314-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1b35491db98ccd11d151165497c084a9d29d3dc42fc80abea2715a6c861ca43d", size = 20497138, upload-time = "2025-12-09T10:59:01.241Z" },
{ url = "https://files.pythonhosted.org/packages/94/41/abec537cc7c519121a2a83b9a6f180af8915fabb433777dc147744513e74/duckdb-1.4.3-cp314-cp314-win_amd64.whl", hash = "sha256:23b12854032c1a58d0452e2b212afa908d4ce64171862f3792ba9a596ba7c765", size = 12836056, upload-time = "2025-12-09T10:59:03.388Z" },
{ url = "https://files.pythonhosted.org/packages/b1/5a/8af5b96ce5622b6168854f479ce846cf7fb589813dcc7d8724233c37ded3/duckdb-1.4.3-cp314-cp314-win_arm64.whl", hash = "sha256:90f241f25cffe7241bf9f376754a5845c74775e00e1c5731119dc88cd71e0cb2", size = 13527759, upload-time = "2025-12-09T10:59:05.496Z" },
{ url = "https://files.pythonhosted.org/packages/97/a6/f19e2864e651b0bd8e4db2b0c455e7e0d71e0d4cd2cd9cc052f518e43eb3/duckdb-1.4.4-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:25874f8b1355e96178079e37312c3ba6d61a2354f51319dae860cf21335c3a20", size = 28909554, upload-time = "2026-01-26T11:50:00.107Z" },
{ url = "https://files.pythonhosted.org/packages/0e/93/8a24e932c67414fd2c45bed83218e62b73348996bf859eda020c224774b2/duckdb-1.4.4-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:452c5b5d6c349dc5d1154eb2062ee547296fcbd0c20e9df1ed00b5e1809089da", size = 15353804, upload-time = "2026-01-26T11:50:03.382Z" },
{ url = "https://files.pythonhosted.org/packages/62/13/e5378ff5bb1d4397655d840b34b642b1b23cdd82ae19599e62dc4b9461c9/duckdb-1.4.4-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:8e5c2d8a0452df55e092959c0bfc8ab8897ac3ea0f754cb3b0ab3e165cd79aff", size = 13676157, upload-time = "2026-01-26T11:50:06.232Z" },
{ url = "https://files.pythonhosted.org/packages/2d/94/24364da564b27aeebe44481f15bd0197a0b535ec93f188a6b1b98c22f082/duckdb-1.4.4-cp314-cp314-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1af6e76fe8bd24875dc56dd8e38300d64dc708cd2e772f67b9fbc635cc3066a3", size = 18426882, upload-time = "2026-01-26T11:50:08.97Z" },
{ url = "https://files.pythonhosted.org/packages/26/0a/6ae31b2914b4dc34243279b2301554bcbc5f1a09ccc82600486c49ab71d1/duckdb-1.4.4-cp314-cp314-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d0440f59e0cd9936a9ebfcf7a13312eda480c79214ffed3878d75947fc3b7d6d", size = 20435641, upload-time = "2026-01-26T11:50:12.188Z" },
{ url = "https://files.pythonhosted.org/packages/d2/b1/fd5c37c53d45efe979f67e9bd49aaceef640147bb18f0699a19edd1874d6/duckdb-1.4.4-cp314-cp314-win_amd64.whl", hash = "sha256:59c8d76016dde854beab844935b1ec31de358d4053e792988108e995b18c08e7", size = 12762360, upload-time = "2026-01-26T11:50:14.76Z" },
{ url = "https://files.pythonhosted.org/packages/dd/2d/13e6024e613679d8a489dd922f199ef4b1d08a456a58eadd96dc2f05171f/duckdb-1.4.4-cp314-cp314-win_arm64.whl", hash = "sha256:53cd6423136ab44383ec9955aefe7599b3fb3dd1fe006161e6396d8167e0e0d4", size = 13458633, upload-time = "2026-01-26T11:50:17.657Z" },
]
[[package]]
name = "et-xmlfile"
version = "2.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d3/38/af70d7ab1ae9d4da450eeec1fa3918940a5fafb9055e934af8d6eb0c2313/et_xmlfile-2.0.0.tar.gz", hash = "sha256:dab3f4764309081ce75662649be815c4c9081e88f0837825f90fd28317d4da54", size = 17234, upload-time = "2024-10-25T17:25:40.039Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c1/8b/5fe2cc11fee489817272089c4203e679c63b570a5aaeb18d852ae3cbba6a/et_xmlfile-2.0.0-py3-none-any.whl", hash = "sha256:7a91720bc756843502c3b7504c77b8fe44217c85c537d85037f0f536151b2caa", size = 18059, upload-time = "2024-10-25T17:25:39.051Z" },
]
[[package]]
@@ -405,6 +526,29 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/cf/58/8acf1b3e91c58313ce5cb67df61001fc9dcd21be4fadb76c1a2d540e09ed/fqdn-1.5.1-py3-none-any.whl", hash = "sha256:3a179af3761e4df6eb2e026ff9e1a3033d3587bf980a0b1b2e1e5d08d7358014", size = 9121, upload-time = "2021-03-11T07:16:28.351Z" },
]
[[package]]
name = "greenlet"
version = "3.3.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/8a/99/1cd3411c56a410994669062bd73dd58270c00cc074cac15f385a1fd91f8a/greenlet-3.3.1.tar.gz", hash = "sha256:41848f3230b58c08bb43dee542e74a2a2e34d3c59dc3076cec9151aeeedcae98", size = 184690, upload-time = "2026-01-23T15:31:02.076Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ae/fb/011c7c717213182caf78084a9bea51c8590b0afda98001f69d9f853a495b/greenlet-3.3.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:bd59acd8529b372775cd0fcbc5f420ae20681c5b045ce25bd453ed8455ab99b5", size = 275737, upload-time = "2026-01-23T15:32:16.889Z" },
{ url = "https://files.pythonhosted.org/packages/41/2e/a3a417d620363fdbb08a48b1dd582956a46a61bf8fd27ee8164f9dfe87c2/greenlet-3.3.1-cp314-cp314-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b31c05dd84ef6871dd47120386aed35323c944d86c3d91a17c4b8d23df62f15b", size = 646422, upload-time = "2026-01-23T16:01:00.354Z" },
{ url = "https://files.pythonhosted.org/packages/b4/09/c6c4a0db47defafd2d6bab8ddfe47ad19963b4e30f5bed84d75328059f8c/greenlet-3.3.1-cp314-cp314-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:02925a0bfffc41e542c70aa14c7eda3593e4d7e274bfcccca1827e6c0875902e", size = 658219, upload-time = "2026-01-23T16:05:30.956Z" },
{ url = "https://files.pythonhosted.org/packages/80/38/9d42d60dffb04b45f03dbab9430898352dba277758640751dc5cc316c521/greenlet-3.3.1-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:34a729e2e4e4ffe9ae2408d5ecaf12f944853f40ad724929b7585bca808a9d6f", size = 660237, upload-time = "2026-01-23T15:32:53.967Z" },
{ url = "https://files.pythonhosted.org/packages/96/61/373c30b7197f9e756e4c81ae90a8d55dc3598c17673f91f4d31c3c689c3f/greenlet-3.3.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:aec9ab04e82918e623415947921dea15851b152b822661cce3f8e4393c3df683", size = 1615261, upload-time = "2026-01-23T16:04:25.066Z" },
{ url = "https://files.pythonhosted.org/packages/fd/d3/ca534310343f5945316f9451e953dcd89b36fe7a19de652a1dc5a0eeef3f/greenlet-3.3.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:71c767cf281a80d02b6c1bdc41c9468e1f5a494fb11bc8688c360524e273d7b1", size = 1683719, upload-time = "2026-01-23T15:33:50.61Z" },
{ url = "https://files.pythonhosted.org/packages/52/cb/c21a3fd5d2c9c8b622e7bede6d6d00e00551a5ee474ea6d831b5f567a8b4/greenlet-3.3.1-cp314-cp314-win_amd64.whl", hash = "sha256:96aff77af063b607f2489473484e39a0bbae730f2ea90c9e5606c9b73c44174a", size = 228125, upload-time = "2026-01-23T15:32:45.265Z" },
{ url = "https://files.pythonhosted.org/packages/6a/8e/8a2db6d11491837af1de64b8aff23707c6e85241be13c60ed399a72e2ef8/greenlet-3.3.1-cp314-cp314-win_arm64.whl", hash = "sha256:b066e8b50e28b503f604fa538adc764a638b38cf8e81e025011d26e8a627fa79", size = 227519, upload-time = "2026-01-23T15:31:47.284Z" },
{ url = "https://files.pythonhosted.org/packages/28/24/cbbec49bacdcc9ec652a81d3efef7b59f326697e7edf6ed775a5e08e54c2/greenlet-3.3.1-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:3e63252943c921b90abb035ebe9de832c436401d9c45f262d80e2d06cc659242", size = 282706, upload-time = "2026-01-23T15:33:05.525Z" },
{ url = "https://files.pythonhosted.org/packages/86/2e/4f2b9323c144c4fe8842a4e0d92121465485c3c2c5b9e9b30a52e80f523f/greenlet-3.3.1-cp314-cp314t-manylinux_2_24_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:76e39058e68eb125de10c92524573924e827927df5d3891fbc97bd55764a8774", size = 651209, upload-time = "2026-01-23T16:01:01.517Z" },
{ url = "https://files.pythonhosted.org/packages/d9/87/50ca60e515f5bb55a2fbc5f0c9b5b156de7d2fc51a0a69abc9d23914a237/greenlet-3.3.1-cp314-cp314t-manylinux_2_24_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c9f9d5e7a9310b7a2f416dd13d2e3fd8b42d803968ea580b7c0f322ccb389b97", size = 654300, upload-time = "2026-01-23T16:05:32.199Z" },
{ url = "https://files.pythonhosted.org/packages/1d/94/74310866dfa2b73dd08659a3d18762f83985ad3281901ba0ee9a815194fb/greenlet-3.3.1-cp314-cp314t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:92497c78adf3ac703b57f1e3813c2d874f27f71a178f9ea5887855da413cd6d2", size = 653842, upload-time = "2026-01-23T15:32:55.671Z" },
{ url = "https://files.pythonhosted.org/packages/97/43/8bf0ffa3d498eeee4c58c212a3905dd6146c01c8dc0b0a046481ca29b18c/greenlet-3.3.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:ed6b402bc74d6557a705e197d47f9063733091ed6357b3de33619d8a8d93ac53", size = 1614917, upload-time = "2026-01-23T16:04:26.276Z" },
{ url = "https://files.pythonhosted.org/packages/89/90/a3be7a5f378fc6e84abe4dcfb2ba32b07786861172e502388b4c90000d1b/greenlet-3.3.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:59913f1e5ada20fde795ba906916aea25d442abcc0593fba7e26c92b7ad76249", size = 1676092, upload-time = "2026-01-23T15:33:52.176Z" },
{ url = "https://files.pythonhosted.org/packages/e1/2b/98c7f93e6db9977aaee07eb1e51ca63bd5f779b900d362791d3252e60558/greenlet-3.3.1-cp314-cp314t-win_amd64.whl", hash = "sha256:301860987846c24cb8964bdec0e31a96ad4a2a801b41b4ef40963c1b44f33451", size = 233181, upload-time = "2026-01-23T15:33:00.29Z" },
]
[[package]]
name = "h11"
version = "0.16.0"
@@ -414,6 +558,19 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" },
]
[[package]]
name = "html5lib"
version = "1.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
{ name = "webencodings" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ac/b6/b55c3f49042f1df3dcd422b7f224f939892ee94f22abcf503a9b7339eaf2/html5lib-1.1.tar.gz", hash = "sha256:b2e5b40261e20f354d198eae92afc10d750afb487ed5e50f9c4eaf07c184146f", size = 272215, upload-time = "2020-06-22T23:32:38.834Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6c/dd/a834df6482147d48e225a49515aabc28974ad5a4ca3215c18a882565b028/html5lib-1.1-py2.py3-none-any.whl", hash = "sha256:0d78f8fde1c230e99fe37986a60526d7049ed4bf8a9fadbad5f00e22e58e041d", size = 112173, upload-time = "2020-06-22T23:32:36.781Z" },
]
[[package]]
name = "httpcore"
version = "1.0.9"
@@ -569,6 +726,12 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d7/9e/038522f50ceb7e74f1f991bf1b699f24b0c2bbe7c390dd36ad69f4582258/json5-0.13.0-py3-none-any.whl", hash = "sha256:9a08e1dd65f6a4d4c6fa82d216cf2477349ec2346a38fd70cc11d2557499fbcc", size = 36163, upload-time = "2026-01-01T19:42:13.962Z" },
]
[[package]]
name = "jsonpath"
version = "0.82.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/cf/a1/693351acd0a9edca4de9153372a65e75398898ea7f8a5c722ab00f464929/jsonpath-0.82.2.tar.gz", hash = "sha256:d87ef2bcbcded68ee96bc34c1809b69457ecec9b0c4dd471658a12bd391002d1", size = 10353, upload-time = "2023-08-24T18:57:55.459Z" }
[[package]]
name = "jsonpointer"
version = "3.0.0"
@@ -635,6 +798,19 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/38/64/285f20a31679bf547b75602702f7800e74dbabae36ef324f716c02804753/jupyter-1.1.1-py2.py3-none-any.whl", hash = "sha256:7a59533c22af65439b24bbe60373a4e95af8f16ac65a6c00820ad378e3f7cc83", size = 2657, upload-time = "2024-08-30T07:15:47.045Z" },
]
[[package]]
name = "jupyter-bokeh"
version = "4.0.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "bokeh" },
{ name = "ipywidgets" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b4/fd/8f0213c704bf36b5f523ae5bf7dc367f3687e75dcc2354084b75c05d2b53/jupyter_bokeh-4.0.5.tar.gz", hash = "sha256:a33d6ab85588f13640b30765fa15d1111b055cbe44f67a65ca57d3593af8245d", size = 149140, upload-time = "2024-06-03T06:33:33.488Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/47/78/33b2294aad62e5f95b89a89379c5995c2bd978018387ef8bec79f6dc272c/jupyter_bokeh-4.0.5-py3-none-any.whl", hash = "sha256:1110076c14c779071cf492646a1a871aefa8a477261e4721327a666e65df1a2c", size = 148593, upload-time = "2024-06-03T06:33:35.82Z" },
]
[[package]]
name = "jupyter-client"
version = "8.8.0"
@@ -864,28 +1040,90 @@ name = "leopard-analysis"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
{ name = "adata" },
{ name = "akshare" },
{ name = "backtesting" },
{ name = "baostock" },
{ name = "duckdb" },
{ name = "jupyter" },
{ name = "jupyter-bokeh" },
{ name = "matplotlib" },
{ name = "mplfinance" },
{ name = "pandas" },
{ name = "pandas-stubs" },
{ name = "peewee" },
{ name = "psycopg2-binary" },
{ name = "sqlalchemy" },
{ name = "ta-lib" },
{ name = "tabulate" },
{ name = "tqdm" },
{ name = "tushare" },
]
[package.metadata]
requires-dist = [
{ name = "adata", specifier = ">=2.9.5" },
{ name = "akshare", specifier = ">=1.18.20" },
{ name = "backtesting", specifier = "~=0.6.5" },
{ name = "duckdb", specifier = ">=1.4.3" },
{ name = "baostock", specifier = ">=0.8.9" },
{ name = "duckdb", specifier = ">=1.4.4" },
{ name = "jupyter", specifier = "~=1.1.1" },
{ name = "jupyter-bokeh", specifier = ">=4.0.5" },
{ name = "matplotlib", specifier = "~=3.10.8" },
{ name = "mplfinance", specifier = ">=0.12.10b0" },
{ name = "pandas", specifier = "~=2.3.3" },
{ name = "pandas-stubs", specifier = "~=2.3.3" },
{ name = "peewee", specifier = "~=3.19.0" },
{ name = "psycopg2-binary", specifier = "~=2.9.11" },
{ name = "sqlalchemy", specifier = ">=2.0.46" },
{ name = "ta-lib", specifier = ">=0.6.8" },
{ name = "tabulate", specifier = ">=0.9.0" },
{ name = "tqdm", specifier = ">=4.67.1" },
{ name = "tushare", specifier = ">=1.4.24" },
]
[[package]]
name = "lxml"
version = "6.0.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/aa/88/262177de60548e5a2bfc46ad28232c9e9cbde697bd94132aeb80364675cb/lxml-6.0.2.tar.gz", hash = "sha256:cd79f3367bd74b317dda655dc8fcfa304d9eb6e4fb06b7168c5cf27f96e0cd62", size = 4073426, upload-time = "2025-09-22T04:04:59.287Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/03/15/d4a377b385ab693ce97b472fe0c77c2b16ec79590e688b3ccc71fba19884/lxml-6.0.2-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:b0c732aa23de8f8aec23f4b580d1e52905ef468afb4abeafd3fec77042abb6fe", size = 8659801, upload-time = "2025-09-22T04:02:30.113Z" },
{ url = "https://files.pythonhosted.org/packages/c8/e8/c128e37589463668794d503afaeb003987373c5f94d667124ffd8078bbd9/lxml-6.0.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:4468e3b83e10e0317a89a33d28f7aeba1caa4d1a6fd457d115dd4ffe90c5931d", size = 4659403, upload-time = "2025-09-22T04:02:32.119Z" },
{ url = "https://files.pythonhosted.org/packages/00/ce/74903904339decdf7da7847bb5741fc98a5451b42fc419a86c0c13d26fe2/lxml-6.0.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:abd44571493973bad4598a3be7e1d807ed45aa2adaf7ab92ab7c62609569b17d", size = 4966974, upload-time = "2025-09-22T04:02:34.155Z" },
{ url = "https://files.pythonhosted.org/packages/1f/d3/131dec79ce61c5567fecf82515bd9bc36395df42501b50f7f7f3bd065df0/lxml-6.0.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:370cd78d5855cfbffd57c422851f7d3864e6ae72d0da615fca4dad8c45d375a5", size = 5102953, upload-time = "2025-09-22T04:02:36.054Z" },
{ url = "https://files.pythonhosted.org/packages/3a/ea/a43ba9bb750d4ffdd885f2cd333572f5bb900cd2408b67fdda07e85978a0/lxml-6.0.2-cp314-cp314-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:901e3b4219fa04ef766885fb40fa516a71662a4c61b80c94d25336b4934b71c0", size = 5055054, upload-time = "2025-09-22T04:02:38.154Z" },
{ url = "https://files.pythonhosted.org/packages/60/23/6885b451636ae286c34628f70a7ed1fcc759f8d9ad382d132e1c8d3d9bfd/lxml-6.0.2-cp314-cp314-manylinux_2_26_i686.manylinux_2_28_i686.whl", hash = "sha256:a4bf42d2e4cf52c28cc1812d62426b9503cdb0c87a6de81442626aa7d69707ba", size = 5352421, upload-time = "2025-09-22T04:02:40.413Z" },
{ url = "https://files.pythonhosted.org/packages/48/5b/fc2ddfc94ddbe3eebb8e9af6e3fd65e2feba4967f6a4e9683875c394c2d8/lxml-6.0.2-cp314-cp314-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b2c7fdaa4d7c3d886a42534adec7cfac73860b89b4e5298752f60aa5984641a0", size = 5673684, upload-time = "2025-09-22T04:02:42.288Z" },
{ url = "https://files.pythonhosted.org/packages/29/9c/47293c58cc91769130fbf85531280e8cc7868f7fbb6d92f4670071b9cb3e/lxml-6.0.2-cp314-cp314-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:98a5e1660dc7de2200b00d53fa00bcd3c35a3608c305d45a7bbcaf29fa16e83d", size = 5252463, upload-time = "2025-09-22T04:02:44.165Z" },
{ url = "https://files.pythonhosted.org/packages/9b/da/ba6eceb830c762b48e711ded880d7e3e89fc6c7323e587c36540b6b23c6b/lxml-6.0.2-cp314-cp314-manylinux_2_31_armv7l.whl", hash = "sha256:dc051506c30b609238d79eda75ee9cab3e520570ec8219844a72a46020901e37", size = 4698437, upload-time = "2025-09-22T04:02:46.524Z" },
{ url = "https://files.pythonhosted.org/packages/a5/24/7be3f82cb7990b89118d944b619e53c656c97dc89c28cfb143fdb7cd6f4d/lxml-6.0.2-cp314-cp314-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:8799481bbdd212470d17513a54d568f44416db01250f49449647b5ab5b5dccb9", size = 5269890, upload-time = "2025-09-22T04:02:48.812Z" },
{ url = "https://files.pythonhosted.org/packages/1b/bd/dcfb9ea1e16c665efd7538fc5d5c34071276ce9220e234217682e7d2c4a5/lxml-6.0.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:9261bb77c2dab42f3ecd9103951aeca2c40277701eb7e912c545c1b16e0e4917", size = 5097185, upload-time = "2025-09-22T04:02:50.746Z" },
{ url = "https://files.pythonhosted.org/packages/21/04/a60b0ff9314736316f28316b694bccbbabe100f8483ad83852d77fc7468e/lxml-6.0.2-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:65ac4a01aba353cfa6d5725b95d7aed6356ddc0a3cd734de00124d285b04b64f", size = 4745895, upload-time = "2025-09-22T04:02:52.968Z" },
{ url = "https://files.pythonhosted.org/packages/d6/bd/7d54bd1846e5a310d9c715921c5faa71cf5c0853372adf78aee70c8d7aa2/lxml-6.0.2-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:b22a07cbb82fea98f8a2fd814f3d1811ff9ed76d0fc6abc84eb21527596e7cc8", size = 5695246, upload-time = "2025-09-22T04:02:54.798Z" },
{ url = "https://files.pythonhosted.org/packages/fd/32/5643d6ab947bc371da21323acb2a6e603cedbe71cb4c99c8254289ab6f4e/lxml-6.0.2-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:d759cdd7f3e055d6bc8d9bec3ad905227b2e4c785dc16c372eb5b5e83123f48a", size = 5260797, upload-time = "2025-09-22T04:02:57.058Z" },
{ url = "https://files.pythonhosted.org/packages/33/da/34c1ec4cff1eea7d0b4cd44af8411806ed943141804ac9c5d565302afb78/lxml-6.0.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:945da35a48d193d27c188037a05fec5492937f66fb1958c24fc761fb9d40d43c", size = 5277404, upload-time = "2025-09-22T04:02:58.966Z" },
{ url = "https://files.pythonhosted.org/packages/82/57/4eca3e31e54dc89e2c3507e1cd411074a17565fa5ffc437c4ae0a00d439e/lxml-6.0.2-cp314-cp314-win32.whl", hash = "sha256:be3aaa60da67e6153eb15715cc2e19091af5dc75faef8b8a585aea372507384b", size = 3670072, upload-time = "2025-09-22T04:03:38.05Z" },
{ url = "https://files.pythonhosted.org/packages/e3/e0/c96cf13eccd20c9421ba910304dae0f619724dcf1702864fd59dd386404d/lxml-6.0.2-cp314-cp314-win_amd64.whl", hash = "sha256:fa25afbadead523f7001caf0c2382afd272c315a033a7b06336da2637d92d6ed", size = 4080617, upload-time = "2025-09-22T04:03:39.835Z" },
{ url = "https://files.pythonhosted.org/packages/d5/5d/b3f03e22b3d38d6f188ef044900a9b29b2fe0aebb94625ce9fe244011d34/lxml-6.0.2-cp314-cp314-win_arm64.whl", hash = "sha256:063eccf89df5b24e361b123e257e437f9e9878f425ee9aae3144c77faf6da6d8", size = 3754930, upload-time = "2025-09-22T04:03:41.565Z" },
{ url = "https://files.pythonhosted.org/packages/5e/5c/42c2c4c03554580708fc738d13414801f340c04c3eff90d8d2d227145275/lxml-6.0.2-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:6162a86d86893d63084faaf4ff937b3daea233e3682fb4474db07395794fa80d", size = 8910380, upload-time = "2025-09-22T04:03:01.645Z" },
{ url = "https://files.pythonhosted.org/packages/bf/4f/12df843e3e10d18d468a7557058f8d3733e8b6e12401f30b1ef29360740f/lxml-6.0.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:414aaa94e974e23a3e92e7ca5b97d10c0cf37b6481f50911032c69eeb3991bba", size = 4775632, upload-time = "2025-09-22T04:03:03.814Z" },
{ url = "https://files.pythonhosted.org/packages/e4/0c/9dc31e6c2d0d418483cbcb469d1f5a582a1cd00a1f4081953d44051f3c50/lxml-6.0.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:48461bd21625458dd01e14e2c38dd0aea69addc3c4f960c30d9f59d7f93be601", size = 4975171, upload-time = "2025-09-22T04:03:05.651Z" },
{ url = "https://files.pythonhosted.org/packages/e7/2b/9b870c6ca24c841bdd887504808f0417aa9d8d564114689266f19ddf29c8/lxml-6.0.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:25fcc59afc57d527cfc78a58f40ab4c9b8fd096a9a3f964d2781ffb6eb33f4ed", size = 5110109, upload-time = "2025-09-22T04:03:07.452Z" },
{ url = "https://files.pythonhosted.org/packages/bf/0c/4f5f2a4dd319a178912751564471355d9019e220c20d7db3fb8307ed8582/lxml-6.0.2-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5179c60288204e6ddde3f774a93350177e08876eaf3ab78aa3a3649d43eb7d37", size = 5041061, upload-time = "2025-09-22T04:03:09.297Z" },
{ url = "https://files.pythonhosted.org/packages/12/64/554eed290365267671fe001a20d72d14f468ae4e6acef1e179b039436967/lxml-6.0.2-cp314-cp314t-manylinux_2_26_i686.manylinux_2_28_i686.whl", hash = "sha256:967aab75434de148ec80597b75062d8123cadf2943fb4281f385141e18b21338", size = 5306233, upload-time = "2025-09-22T04:03:11.651Z" },
{ url = "https://files.pythonhosted.org/packages/7a/31/1d748aa275e71802ad9722df32a7a35034246b42c0ecdd8235412c3396ef/lxml-6.0.2-cp314-cp314t-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:d100fcc8930d697c6561156c6810ab4a508fb264c8b6779e6e61e2ed5e7558f9", size = 5604739, upload-time = "2025-09-22T04:03:13.592Z" },
{ url = "https://files.pythonhosted.org/packages/8f/41/2c11916bcac09ed561adccacceaedd2bf0e0b25b297ea92aab99fd03d0fa/lxml-6.0.2-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2ca59e7e13e5981175b8b3e4ab84d7da57993eeff53c07764dcebda0d0e64ecd", size = 5225119, upload-time = "2025-09-22T04:03:15.408Z" },
{ url = "https://files.pythonhosted.org/packages/99/05/4e5c2873d8f17aa018e6afde417c80cc5d0c33be4854cce3ef5670c49367/lxml-6.0.2-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:957448ac63a42e2e49531b9d6c0fa449a1970dbc32467aaad46f11545be9af1d", size = 4633665, upload-time = "2025-09-22T04:03:17.262Z" },
{ url = "https://files.pythonhosted.org/packages/0f/c9/dcc2da1bebd6275cdc723b515f93edf548b82f36a5458cca3578bc899332/lxml-6.0.2-cp314-cp314t-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:b7fc49c37f1786284b12af63152fe1d0990722497e2d5817acfe7a877522f9a9", size = 5234997, upload-time = "2025-09-22T04:03:19.14Z" },
{ url = "https://files.pythonhosted.org/packages/9c/e2/5172e4e7468afca64a37b81dba152fc5d90e30f9c83c7c3213d6a02a5ce4/lxml-6.0.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e19e0643cc936a22e837f79d01a550678da8377d7d801a14487c10c34ee49c7e", size = 5090957, upload-time = "2025-09-22T04:03:21.436Z" },
{ url = "https://files.pythonhosted.org/packages/a5/b3/15461fd3e5cd4ddcb7938b87fc20b14ab113b92312fc97afe65cd7c85de1/lxml-6.0.2-cp314-cp314t-musllinux_1_2_armv7l.whl", hash = "sha256:1db01e5cf14345628e0cbe71067204db658e2fb8e51e7f33631f5f4735fefd8d", size = 4764372, upload-time = "2025-09-22T04:03:23.27Z" },
{ url = "https://files.pythonhosted.org/packages/05/33/f310b987c8bf9e61c4dd8e8035c416bd3230098f5e3cfa69fc4232de7059/lxml-6.0.2-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:875c6b5ab39ad5291588aed6925fac99d0097af0dd62f33c7b43736043d4a2ec", size = 5634653, upload-time = "2025-09-22T04:03:25.767Z" },
{ url = "https://files.pythonhosted.org/packages/70/ff/51c80e75e0bc9382158133bdcf4e339b5886c6ee2418b5199b3f1a61ed6d/lxml-6.0.2-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:cdcbed9ad19da81c480dfd6dd161886db6096083c9938ead313d94b30aadf272", size = 5233795, upload-time = "2025-09-22T04:03:27.62Z" },
{ url = "https://files.pythonhosted.org/packages/56/4d/4856e897df0d588789dd844dbed9d91782c4ef0b327f96ce53c807e13128/lxml-6.0.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:80dadc234ebc532e09be1975ff538d154a7fa61ea5031c03d25178855544728f", size = 5257023, upload-time = "2025-09-22T04:03:30.056Z" },
{ url = "https://files.pythonhosted.org/packages/0f/85/86766dfebfa87bea0ab78e9ff7a4b4b45225df4b4d3b8cc3c03c5cd68464/lxml-6.0.2-cp314-cp314t-win32.whl", hash = "sha256:da08e7bb297b04e893d91087df19638dc7a6bb858a954b0cc2b9f5053c922312", size = 3911420, upload-time = "2025-09-22T04:03:32.198Z" },
{ url = "https://files.pythonhosted.org/packages/fe/1a/b248b355834c8e32614650b8008c69ffeb0ceb149c793961dd8c0b991bb3/lxml-6.0.2-cp314-cp314t-win_amd64.whl", hash = "sha256:252a22982dca42f6155125ac76d3432e548a7625d56f5a273ee78a5057216eca", size = 4406837, upload-time = "2025-09-22T04:03:34.027Z" },
{ url = "https://files.pythonhosted.org/packages/92/aa/df863bcc39c5e0946263454aba394de8a9084dbaff8ad143846b0d844739/lxml-6.0.2-cp314-cp314t-win_arm64.whl", hash = "sha256:bb4c1847b303835d89d785a18801a883436cdfd5dc3d62947f9c49e24f0f5a2c", size = 3822205, upload-time = "2025-09-22T04:03:36.249Z" },
]
[[package]]
@@ -963,6 +1201,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/af/33/ee4519fa02ed11a94aef9559552f3b17bb863f2ecfe1a35dc7f548cde231/matplotlib_inline-0.2.1-py3-none-any.whl", hash = "sha256:d56ce5156ba6085e00a9d54fead6ed29a9c47e215cd1bba2e976ef39f5710a76", size = 9516, upload-time = "2025-10-23T09:00:20.675Z" },
]
[[package]]
name = "mini-racer"
version = "0.14.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/55/7b/2f417069fb8fcb85c1458e51ea83c12d37f892a41544ef28479e37a315a3/mini_racer-0.14.0.tar.gz", hash = "sha256:7f812d6f21a8828e99e986bf4bb184c04bd906c845061aa43d7dd3edc8b8e6f5", size = 41238, upload-time = "2026-01-05T07:28:50.336Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4f/b5/d184a34787edae8301ec5bd1a454c9bfdce2c58fb3c887f8d12416589057/mini_racer-0.14.0-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:b02b3e15c548958a75afec12b9c21afa01c4a3aacbea66f5856036ff9b6c1a36", size = 19847149, upload-time = "2026-01-05T07:28:24.682Z" },
{ url = "https://files.pythonhosted.org/packages/d4/09/f7afb45b4e54ccacc88fb543d7d87040904c7bbcbeed3f944959189f93c1/mini_racer-0.14.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:049a239a1174d40e2a38da71b55aa0ad73a1a7be90956d4ab9ddf9a1dcfa8178", size = 18396834, upload-time = "2026-01-05T07:28:27.653Z" },
{ url = "https://files.pythonhosted.org/packages/ba/c5/305d16ea858e9be168e00b2cd5d4e7b74524d9c4b1349b1267386c25964e/mini_racer-0.14.0-py3-none-win_amd64.whl", hash = "sha256:7e4cd3fef3df603c0d1feea6e258cf02c6c09e8619d43d4ff0f0a8595cf96715", size = 15474619, upload-time = "2026-01-05T07:28:45.059Z" },
{ url = "https://files.pythonhosted.org/packages/bd/27/e313b5ff8f6583253e5f9fee64ab88476a570c7307554acb0e2899668a97/mini_racer-0.14.0-py3-none-win_arm64.whl", hash = "sha256:2cb21a959c7045c46d727db015e614903217f3648d24fcdbde6de3b4bd17a498", size = 14795219, upload-time = "2026-01-05T07:28:48.25Z" },
]
[[package]]
name = "mistune"
version = "3.2.0"
@@ -1115,6 +1365,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ad/0d/eca3d962f9eef265f01a8e0d20085c6dd1f443cbffc11b6dede81fd82356/numpy-2.4.1-cp314-cp314t-win_arm64.whl", hash = "sha256:6436cffb4f2bf26c974344439439c95e152c9a527013f26b3577be6c2ca64295", size = 10667121, upload-time = "2026-01-10T06:44:41.644Z" },
]
[[package]]
name = "openpyxl"
version = "3.1.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "et-xmlfile" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3d/f9/88d94a75de065ea32619465d2f77b29a0469500e99012523b91cc4141cd1/openpyxl-3.1.5.tar.gz", hash = "sha256:cf0e3cf56142039133628b5acffe8ef0c12bc902d2aadd3e0fe5878dc08d1050", size = 186464, upload-time = "2024-06-28T14:03:44.161Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c0/da/977ded879c29cbd04de313843e76868e6e13408a94ed6b987245dc7c8506/openpyxl-3.1.5-py2.py3-none-any.whl", hash = "sha256:5282c12b107bffeef825f4617dc029afaf41d0ea60823bbb665ef3079dc79de2", size = 250910, upload-time = "2024-06-28T14:03:41.161Z" },
]
[[package]]
name = "packaging"
version = "25.0"
@@ -1325,6 +1587,17 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/8e/37/efad0257dc6e593a18957422533ff0f87ede7c9c6ea010a2177d738fb82f/pure_eval-0.2.3-py3-none-any.whl", hash = "sha256:1db8e35b67b3d218d818ae653e27f06c3aa420901fa7b081ca98cbedc874e0d0", size = 11842, upload-time = "2024-07-21T12:58:20.04Z" },
]
[[package]]
name = "py-mini-racer"
version = "0.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/50/97/a578b918b2e5923dd754cb60bb8b8aeffc85255ffb92566e3c65b148ff72/py_mini_racer-0.6.0.tar.gz", hash = "sha256:f71e36b643d947ba698c57cd9bd2232c83ca997b0802fc2f7f79582377040c11", size = 5994836, upload-time = "2021-04-22T07:58:35.993Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/13/13/058240c7fd1fbf29a24bda048d93346c2a56275736b76b56afe64050a161/py_mini_racer-0.6.0-py2.py3-none-macosx_10_10_x86_64.whl", hash = "sha256:346e73bb89a2024888244d487834be24a121089ceb0641dd0200cb96c4e24b57", size = 5280865, upload-time = "2021-04-22T07:58:29.118Z" },
{ url = "https://files.pythonhosted.org/packages/29/a9/8ce0ca222ef04d602924a1e099be93f5435ca6f3294182a30574d4159ca2/py_mini_racer-0.6.0-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:42896c24968481dd953eeeb11de331f6870917811961c9b26ba09071e07180e2", size = 5416149, upload-time = "2021-04-22T07:58:25.615Z" },
{ url = "https://files.pythonhosted.org/packages/5d/71/76ac5d593e14b148a4847b608c5ad9a2c7c4827c796c33b396d0437fa113/py_mini_racer-0.6.0-py2.py3-none-win_amd64.whl", hash = "sha256:97cab31bbf63ce462ba4cd6e978c572c916d8b15586156c7c5e0b2e42c10baab", size = 4797809, upload-time = "2021-04-22T07:58:32.286Z" },
]
[[package]]
name = "pycparser"
version = "2.23"
@@ -1352,6 +1625,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/8b/40/2614036cdd416452f5bf98ec037f38a1afb17f327cb8e6b652d4729e0af8/pyparsing-3.3.1-py3-none-any.whl", hash = "sha256:023b5e7e5520ad96642e2c6db4cb683d3970bd640cdf7115049a6e9c3682df82", size = 121793, upload-time = "2025-12-23T03:14:02.103Z" },
]
[[package]]
name = "pyproject-hooks"
version = "1.2.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e7/82/28175b2414effca1cdac8dc99f76d660e7a4fb0ceefa4b4ab8f5f6742925/pyproject_hooks-1.2.0.tar.gz", hash = "sha256:1e859bd5c40fae9448642dd871adf459e5e2084186e8d2c2a79a824c970da1f8", size = 19228, upload-time = "2024-09-29T09:24:13.293Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bd/24/12818598c362d7f300f18e74db45963dbcb85150324092410c8b49405e42/pyproject_hooks-1.2.0-py3-none-any.whl", hash = "sha256:9e5c6bfa8dcc30091c74b0cf803c81fdd29d94f01992a7707bc97babb1141913", size = 10216, upload-time = "2024-09-29T09:24:11.978Z" },
]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
@@ -1565,6 +1847,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl", hash = "sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922", size = 1201486, upload-time = "2025-05-27T00:56:49.664Z" },
]
[[package]]
name = "simplejson"
version = "3.20.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/41/f4/a1ac5ed32f7ed9a088d62a59d410d4c204b3b3815722e2ccfb491fa8251b/simplejson-3.20.2.tar.gz", hash = "sha256:5fe7a6ce14d1c300d80d08695b7f7e633de6cd72c80644021874d985b3393649", size = 85784, upload-time = "2025-09-26T16:29:36.64Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/05/5b/83e1ff87eb60ca706972f7e02e15c0b33396e7bdbd080069a5d1b53cf0d8/simplejson-3.20.2-py3-none-any.whl", hash = "sha256:3b6bb7fb96efd673eac2e4235200bfffdc2353ad12c54117e1e4e2fc485ac017", size = 57309, upload-time = "2025-09-26T16:29:35.312Z" },
]
[[package]]
name = "six"
version = "1.17.0"
@@ -1583,6 +1874,30 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a6/9a/b4450ccce353e2430621b3bb571899ffe1033d5cd72c9e065110f95b1a63/soupsieve-2.8.2-py3-none-any.whl", hash = "sha256:0f4c2f6b5a5fb97a641cf69c0bd163670a0e45e6d6c01a2107f93a6a6f93c51a", size = 37016, upload-time = "2026-01-18T16:21:29.7Z" },
]
[[package]]
name = "sqlalchemy"
version = "2.0.46"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "greenlet", marker = "platform_machine == 'AMD64' or platform_machine == 'WIN32' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'ppc64le' or platform_machine == 'win32' or platform_machine == 'x86_64'" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/06/aa/9ce0f3e7a9829ead5c8ce549392f33a12c4555a6c0609bb27d882e9c7ddf/sqlalchemy-2.0.46.tar.gz", hash = "sha256:cf36851ee7219c170bb0793dbc3da3e80c582e04a5437bc601bfe8c85c9216d7", size = 9865393, upload-time = "2026-01-21T18:03:45.119Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e9/f8/5ecdfc73383ec496de038ed1614de9e740a82db9ad67e6e4514ebc0708a3/sqlalchemy-2.0.46-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:56bdd261bfd0895452006d5316cbf35739c53b9bb71a170a331fa0ea560b2ada", size = 2152079, upload-time = "2026-01-21T19:05:58.477Z" },
{ url = "https://files.pythonhosted.org/packages/e5/bf/eba3036be7663ce4d9c050bc3d63794dc29fbe01691f2bf5ccb64e048d20/sqlalchemy-2.0.46-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:33e462154edb9493f6c3ad2125931e273bbd0be8ae53f3ecd1c161ea9a1dd366", size = 3272216, upload-time = "2026-01-21T18:46:52.634Z" },
{ url = "https://files.pythonhosted.org/packages/05/45/1256fb597bb83b58a01ddb600c59fe6fdf0e5afe333f0456ed75c0f8d7bd/sqlalchemy-2.0.46-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9bcdce05f056622a632f1d44bb47dbdb677f58cad393612280406ce37530eb6d", size = 3277208, upload-time = "2026-01-21T18:40:16.38Z" },
{ url = "https://files.pythonhosted.org/packages/d9/a0/2053b39e4e63b5d7ceb3372cface0859a067c1ddbd575ea7e9985716f771/sqlalchemy-2.0.46-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:8e84b09a9b0f19accedcbeff5c2caf36e0dd537341a33aad8d680336152dc34e", size = 3221994, upload-time = "2026-01-21T18:46:54.622Z" },
{ url = "https://files.pythonhosted.org/packages/1e/87/97713497d9502553c68f105a1cb62786ba1ee91dea3852ae4067ed956a50/sqlalchemy-2.0.46-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:4f52f7291a92381e9b4de9050b0a65ce5d6a763333406861e33906b8aa4906bf", size = 3243990, upload-time = "2026-01-21T18:40:18.253Z" },
{ url = "https://files.pythonhosted.org/packages/a8/87/5d1b23548f420ff823c236f8bea36b1a997250fd2f892e44a3838ca424f4/sqlalchemy-2.0.46-cp314-cp314-win32.whl", hash = "sha256:70ed2830b169a9960193f4d4322d22be5c0925357d82cbf485b3369893350908", size = 2114215, upload-time = "2026-01-21T18:42:55.232Z" },
{ url = "https://files.pythonhosted.org/packages/3a/20/555f39cbcf0c10cf452988b6a93c2a12495035f68b3dbd1a408531049d31/sqlalchemy-2.0.46-cp314-cp314-win_amd64.whl", hash = "sha256:3c32e993bc57be6d177f7d5d31edb93f30726d798ad86ff9066d75d9bf2e0b6b", size = 2139867, upload-time = "2026-01-21T18:42:56.474Z" },
{ url = "https://files.pythonhosted.org/packages/3e/f0/f96c8057c982d9d8a7a68f45d69c674bc6f78cad401099692fe16521640a/sqlalchemy-2.0.46-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4dafb537740eef640c4d6a7c254611dca2df87eaf6d14d6a5fca9d1f4c3fc0fa", size = 3561202, upload-time = "2026-01-21T18:33:10.337Z" },
{ url = "https://files.pythonhosted.org/packages/d7/53/3b37dda0a5b137f21ef608d8dfc77b08477bab0fe2ac9d3e0a66eaeab6fc/sqlalchemy-2.0.46-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:42a1643dc5427b69aca967dae540a90b0fbf57eaf248f13a90ea5930e0966863", size = 3526296, upload-time = "2026-01-21T18:45:12.657Z" },
{ url = "https://files.pythonhosted.org/packages/33/75/f28622ba6dde79cd545055ea7bd4062dc934e0621f7b3be2891f8563f8de/sqlalchemy-2.0.46-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:ff33c6e6ad006bbc0f34f5faf941cfc62c45841c64c0a058ac38c799f15b5ede", size = 3470008, upload-time = "2026-01-21T18:33:11.725Z" },
{ url = "https://files.pythonhosted.org/packages/a9/42/4afecbbc38d5e99b18acef446453c76eec6fbd03db0a457a12a056836e22/sqlalchemy-2.0.46-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:82ec52100ec1e6ec671563bbd02d7c7c8d0b9e71a0723c72f22ecf52d1755330", size = 3476137, upload-time = "2026-01-21T18:45:15.001Z" },
{ url = "https://files.pythonhosted.org/packages/fc/a1/9c4efa03300926601c19c18582531b45aededfb961ab3c3585f1e24f120b/sqlalchemy-2.0.46-py3-none-any.whl", hash = "sha256:f9c11766e7e7c0a2767dda5acb006a118640c9fc0a4104214b96269bfb78399e", size = 1937882, upload-time = "2026-01-21T18:22:10.456Z" },
]
[[package]]
name = "stack-data"
version = "0.6.3"
@@ -1597,6 +1912,36 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f1/7b/ce1eafaf1a76852e2ec9b22edecf1daa58175c090266e9f6c64afcd81d91/stack_data-0.6.3-py3-none-any.whl", hash = "sha256:d5558e0c25a4cb0853cddad3d77da9891a08cb85dd9f9f91b9f8cd66e511e695", size = 24521, upload-time = "2023-09-30T13:58:03.53Z" },
]
[[package]]
name = "ta-lib"
version = "0.6.8"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "build" },
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ba/ec/27114f6255e6723783d4c4366810620a4347375ebf66f8aea86d9dd58ffd/ta_lib-0.6.8.tar.gz", hash = "sha256:3a9195299df9d7d2a6e9d16bebd6b706b0ea99e4b871864c4b034c2577e21a77", size = 380772, upload-time = "2025-10-20T20:49:56.544Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/db/61/c47098dfb28c468d29fccfbb2ba35a10001d37dd51c4200a4e50c788ede6/ta_lib-0.6.8-cp314-cp314-macosx_13_0_x86_64.whl", hash = "sha256:36b2a516fce57309840f5ef3fa2fd0c4449293fc72536a0400d2e1e26b414da8", size = 1075848, upload-time = "2025-10-20T20:49:29.517Z" },
{ url = "https://files.pythonhosted.org/packages/6d/e9/a30e770902c1df915a94a43e652f432e7647b710c0e1120751c05805d4bc/ta_lib-0.6.8-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:7993164e8e9f78ec31d38c47850ca6ba5451788b5b49a8a2dbb3322b36b5693b", size = 986649, upload-time = "2025-10-20T20:49:30.702Z" },
{ url = "https://files.pythonhosted.org/packages/9b/2f/8961a9e7434a2d10b8f625bb4d5c049484a898e76e9c5e40398da410aec0/ta_lib-0.6.8-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:613cf06313331f49dd7b85a5a24fbddb1156c9723b6921a231906241726e5aee", size = 3971825, upload-time = "2025-10-20T20:49:32.185Z" },
{ url = "https://files.pythonhosted.org/packages/75/c1/352bc32394549ac9886829a24070a507a30abf45265135b60ee77354f7da/ta_lib-0.6.8-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ce2bc1ea01200b6d8130ab917296d05d77a1a571ec6c1ee25cfca6d55cd5db4a", size = 3991433, upload-time = "2025-10-20T20:49:34.182Z" },
{ url = "https://files.pythonhosted.org/packages/e4/b3/7bde1867df3bf015f48d510d2ba7491359ce13c79ecf5127acae3d308272/ta_lib-0.6.8-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:a63a52221f8c73f82f4e00493351d987f594931198589287aee96f8da673cfd5", size = 3585925, upload-time = "2025-10-20T20:49:35.765Z" },
{ url = "https://files.pythonhosted.org/packages/82/13/8d389f60bb085b6991764d7535f066dd6009fc4f5a45dbd26dc9eaaa3c0a/ta_lib-0.6.8-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:559326d8f3d904cd4aa61f6a392d5626f35eec6a9f6cc83bcddb0abf88c40516", size = 3629696, upload-time = "2025-10-20T20:49:37.299Z" },
{ url = "https://files.pythonhosted.org/packages/82/bc/d2e4c2b752baaee592095feb69514764b004fe53af7cc893ba9c3854cc30/ta_lib-0.6.8-cp314-cp314-win32.whl", hash = "sha256:f5b6174bf4bf9152e368561dff410203c6921e4dd2afbcda3283a95957158112", size = 766352, upload-time = "2025-10-20T20:49:41.088Z" },
{ url = "https://files.pythonhosted.org/packages/40/98/0f2755b5bde81d7b1eaf96b4204f18fabea38b0efc869cb0ea05d57e0afc/ta_lib-0.6.8-cp314-cp314-win_amd64.whl", hash = "sha256:1fb4028437201e19014e4e374272b739867c8a3eb655da46675ef4c2ff14b616", size = 886955, upload-time = "2025-10-20T20:49:38.513Z" },
{ url = "https://files.pythonhosted.org/packages/0b/4c/d341020377f8b183405bdf3c5717fc2ca04a8d33b5c59b2348377ee459d9/ta_lib-0.6.8-cp314-cp314-win_arm64.whl", hash = "sha256:bfad1202fb1f9140e3810cc607058395f59032d9128cc0d716900c78bea5f337", size = 755896, upload-time = "2025-10-20T20:49:39.9Z" },
]

[[package]]
name = "tabulate"
version = "0.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ec/fe/802052aecb21e3797b8f7902564ab6ea0d60ff8ca23952079064155d1ae1/tabulate-0.9.0.tar.gz", hash = "sha256:0095b12bf5966de529c0feb1fa08671671b3368eec77d7ef7ab114be2c068b3c", size = 81090, upload-time = "2022-10-06T17:21:48.54Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl", hash = "sha256:024ca478df22e9340661486f85298cff5f6dcdba14f3813e8830015b9ed1948f", size = 35252, upload-time = "2022-10-06T17:21:44.262Z" },
]

[[package]]
name = "terminado"
version = "0.18.1"
@@ -1642,6 +1987,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/50/49/8dc3fd90902f70084bd2cd059d576ddb4f8bb44c2c7c0e33a11422acb17e/tornado-6.5.4-cp39-abi3-win_arm64.whl", hash = "sha256:053e6e16701eb6cbe641f308f4c1a9541f91b6261991160391bfc342e8a551a1", size = 445910, upload-time = "2025-12-15T19:21:02.571Z" },
]

[[package]]
name = "tqdm"
version = "4.67.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a8/4b/29b4ef32e036bb34e4ab51796dd745cdba7ed47ad142a9f4a1eb8e0c744d/tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2", size = 169737, upload-time = "2024-11-24T20:12:22.481Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2", size = 78540, upload-time = "2024-11-24T20:12:19.698Z" },
]

[[package]]
name = "traitlets"
version = "5.14.3"
@@ -1651,6 +2008,24 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/00/c0/8f5d070730d7836adc9c9b6408dec68c6ced86b304a9b26a14df072a6e8c/traitlets-5.14.3-py3-none-any.whl", hash = "sha256:b74e89e397b1ed28cc831db7aea759ba6640cb3de13090ca145426688ff1ac4f", size = 85359, upload-time = "2024-04-19T11:11:46.763Z" },
]

[[package]]
name = "tushare"
version = "1.4.24"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "bs4" },
{ name = "lxml" },
{ name = "pandas" },
{ name = "requests" },
{ name = "simplejson" },
{ name = "tqdm" },
{ name = "websocket-client" },
]
sdist = { url = "https://files.pythonhosted.org/packages/89/09/2141aaccb90a8249edb42d6b31330606d8cf9345237773775a3aa4c71986/tushare-1.4.24.tar.gz", hash = "sha256:786acbf6ee7dfb0b152bdd570b673f74e58b86a0d9908a221c6bdc4254a4e0ea", size = 128539, upload-time = "2025-08-25T02:02:05.451Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/80/75/63810958023595b460f2a5ef6baf5a60ffd8166e5fc06a3c2f22e9ca7b34/tushare-1.4.24-py3-none-any.whl", hash = "sha256:778e3128262747cb0cdadac2e5a5e6cd1a520c239b4ffbde2776652424451b08", size = 143587, upload-time = "2025-08-25T02:02:03.554Z" },
]

[[package]]
name = "types-pytz"
version = "2025.2.0.20251108"
@@ -1741,6 +2116,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/3f/0e/fa3b193432cfc60c93b42f3be03365f5f909d2b3ea410295cf36df739e31/widgetsnbextension-4.0.15-py3-none-any.whl", hash = "sha256:8156704e4346a571d9ce73b84bee86a29906c9abfd7223b7228a28899ccf3366", size = 2196503, upload-time = "2025-11-01T21:15:53.565Z" },
]

[[package]]
name = "xlrd"
version = "2.0.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/07/5a/377161c2d3538d1990d7af382c79f3b2372e880b65de21b01b1a2b78691e/xlrd-2.0.2.tar.gz", hash = "sha256:08b5e25de58f21ce71dc7db3b3b8106c1fa776f3024c54e45b45b374e89234c9", size = 100167, upload-time = "2025-06-14T08:46:39.039Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1a/62/c8d562e7766786ba6587d09c5a8ba9f718ed3fa8af7f4553e8f91c36f302/xlrd-2.0.2-py2.py3-none-any.whl", hash = "sha256:ea762c3d29f4cca48d82df517b6d89fbce4db3107f9d78713e48cd321d5c9aa9", size = 96555, upload-time = "2025-06-14T08:46:37.766Z" },
]

[[package]]
name = "xyzservices"
version = "2025.11.0"