THE ORCHESTRATOR (#600)

* feat(background-agent): add ConcurrencyManager for model-based limits

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* fix(background-agent): set default concurrency to 5

🤖 Generated with [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(background-agent): support 0 as unlimited concurrency

Setting concurrency to 0 means unlimited (Infinity).
Works for defaultConcurrency, providerConcurrency, and modelConcurrency.

🤖 Generated with [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
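
A minimal sketch of how a limit could be resolved under this scheme (field names follow the commit text; the real ConcurrencyManager internals may differ):

```typescript
// Hypothetical sketch of limit resolution; field names follow the commit,
// the real ConcurrencyManager internals may differ.
interface ConcurrencyConfig {
  defaultConcurrency?: number;                  // default: 5
  providerConcurrency?: Record<string, number>;
  modelConcurrency?: Record<string, number>;
}

function resolveLimit(config: ConcurrencyConfig, provider: string, model: string): number {
  const raw =
    config.modelConcurrency?.[model] ??
    config.providerConcurrency?.[provider] ??
    config.defaultConcurrency ??
    5;
  return raw === 0 ? Infinity : raw; // 0 means "no limit"
}
```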

* feat(hooks): use auto flag for session resumption after compaction

- executor.ts: Added `auto: true` to summarize body, removed subsequent prompt_async call
- preemptive-compaction/index.ts: Added `auto: true` to summarize body, removed subsequent promptAsync call
- executor.test.ts: Updated test expectation to include `auto: true`

Instead of sending a 'Continue' prompt after compaction, use SessionCompaction's `auto: true` feature, which auto-resumes the session.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(agents): update sisyphus orchestrator

Update Sisyphus agent orchestrator with latest changes.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(features): update background agent manager

Update background agent manager with latest configuration changes.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(features): update init-deep template

Update initialization template with latest configuration.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(hooks): update hook constants and configuration

Update hook constants and configuration across agent-usage-reminder, keyword-detector, and claude-code-hooks.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(tools): remove background-task tool

Remove background-task tool module completely:
- src/tools/background-task/constants.ts
- src/tools/background-task/index.ts
- src/tools/background-task/tools.ts
- src/tools/background-task/types.ts

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(tools): update tool exports and main plugin entry

Update tool index exports and main plugin entry point after background-task tool removal.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(auth): update constants to match CLIProxyAPI (50min buffer, 2 endpoints)

- Changed ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS from 60,000ms (1min) to 3,000,000ms (50min)
- Removed autopush endpoint from ANTIGRAVITY_ENDPOINT_FALLBACKS (now 2 endpoints: daily → prod)
- Added comprehensive test suite with 6 tests covering all updated constants
- Updated comments to reflect CLIProxyAPI parity

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(auth): remove PKCE to match CLIProxyAPI

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* feat(auth): implement port 51121 with OS fallback

Add port fallback logic to OAuth callback server:
- Try port 51121 (ANTIGRAVITY_CALLBACK_PORT) first
- Fallback to OS-assigned port on EADDRINUSE
- Add redirectUri property to CallbackServerHandle
- Return actual bound port in handle.port

Add comprehensive port handling tests (5 new tests):
- Should prefer port 51121
- Should return actual bound port
- Should fallback when port occupied
- Should cleanup and release port on close
- Should provide redirect URI with actual port

All 16 tests passing (11 existing + 5 new).

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
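
For illustration, the fallback described above could look roughly like this with node:http; ANTIGRAVITY_CALLBACK_PORT comes from the commit, while the callback path is an assumption:

```typescript
import { createServer, type Server } from "node:http";
import type { AddressInfo } from "node:net";

const ANTIGRAVITY_CALLBACK_PORT = 51121; // preferred callback port from the commit

// Try the preferred port first; on EADDRINUSE retry with port 0 (OS-assigned).
function listenWithFallback(server: Server, preferredPort: number): Promise<number> {
  return new Promise((resolve, reject) => {
    server.once("error", (err: NodeJS.ErrnoException) => {
      if (err.code !== "EADDRINUSE") return reject(err);
      server.once("error", reject); // give up if the fallback also fails
      server.listen(0, () => resolve((server.address() as AddressInfo).port));
    });
    server.listen(preferredPort, () => resolve((server.address() as AddressInfo).port));
  });
}

// Usage: the handle can expose the actually bound port and a matching redirect URI.
const server = createServer();
const port = await listenWithFallback(server, ANTIGRAVITY_CALLBACK_PORT);
const redirectUri = `http://localhost:${port}/oauth/callback`; // path is an assumption
```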

* test(auth): add token expiry tests for 50-min buffer

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* feat(agents): add Prometheus system prompt and planner methodology

Add prometheus-prompt.ts with comprehensive planner agent system prompt.
Update plan-prompt.ts with streamlined Prometheus workflow including:
- Context gathering via explore/librarian agents
- Metis integration for AI slop guardrails
- Structured plan output format

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(agents): add Metis plan consultant agent

Add Metis agent for pre-planning analysis that identifies:
- Hidden requirements and implicit constraints
- AI failure points and common mistakes
- Clarifying questions before planning begins

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(agents): add Momus plan reviewer agent

Add Momus agent for rigorous plan review against:
- Clarity and verifiability standards
- Completeness checks
- AI slop detection

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(agents): add Sisyphus-Junior focused executor agent

Add Sisyphus-Junior agent for focused task execution:
- Same discipline as Sisyphus, no delegation capability
- Used for category-based task spawning via sisyphus_task tool

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(agents): add orchestrator-sisyphus agent

Add orchestrator-sisyphus agent for complex workflow orchestration:
- Manages multi-agent workflows
- Coordinates between specialized agents
- Handles start-work command execution

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(skill-loader): add skill-content resolver for agent skills

Add resolveMultipleSkills() for resolving skill content to prepend to agent prompts.
Includes test coverage for resolution logic.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(agents): add category and skills support to buildAgent

Extend buildAgent() to support:
- category: inherit model/temperature from DEFAULT_CATEGORIES
- skills: prepend resolved skill content to agent prompt

Includes comprehensive test coverage for new functionality.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
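
A rough sketch of how category and skill resolution could plug into buildAgent; the AgentSpec shape and signature here are assumptions, while DEFAULT_CATEGORIES and resolveMultipleSkills are named in the commits above:

```typescript
// Assumed shapes, based on the commit descriptions; the real types live in src/agents/.
interface AgentSpec {
  name: string;
  prompt: string;
  model?: string;
  temperature?: number;
  category?: string;  // inherit model/temperature from DEFAULT_CATEGORIES
  skills?: string[];  // resolved skill content is prepended to the prompt
}

declare const DEFAULT_CATEGORIES: Record<string, { model: string; temperature: number }>;
declare function resolveMultipleSkills(skills: string[]): Promise<string>;

async function buildAgentSketch(spec: AgentSpec) {
  const category = spec.category ? DEFAULT_CATEGORIES[spec.category] : undefined;
  const skillContent = spec.skills?.length ? await resolveMultipleSkills(spec.skills) : "";
  return {
    ...spec,
    model: spec.model ?? category?.model,
    temperature: spec.temperature ?? category?.temperature,
    prompt: skillContent ? `${skillContent}\n\n${spec.prompt}` : spec.prompt,
  };
}
```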

* feat(agents): register new agents in index and types

- Export Metis, Momus, orchestrator-sisyphus in builtinAgents
- Add new agent names to BuiltinAgentName type
- Update AGENTS.md documentation with new agents

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(features): add boulder-state persistence

Add boulder-state feature for persisting workflow state:
- storage.ts: File I/O operations for state persistence
- types.ts: State interfaces
- Includes test coverage

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(skills): add frontend-ui-ux builtin skill

Add frontend-ui-ux skill for designer-turned-developer UI work:
- SKILL.md with comprehensive design principles
- skills.ts updated with skill template

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(tools): add sisyphus_task tool for category-based delegation

Add sisyphus_task tool supporting:
- Category-based task delegation (visual, business-logic, etc.)
- Direct agent targeting
- Background execution with resume capability
- DEFAULT_CATEGORIES configuration

Includes test coverage.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(background-agent): add resume capability and model field

- Add resume() method for continuing existing agent sessions
- Add model field to BackgroundTask and LaunchInput types
- Update launch() to pass model to session.prompt()
- Comprehensive test coverage for resume functionality

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
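
For reference, the shapes implied by this commit look roughly like the following; the exact interfaces live in src/features/background-agent and may differ:

```typescript
// Assumed shapes based on the commit description; actual types may differ.
interface LaunchInput {
  prompt: string;
  model?: string; // forwarded to session.prompt()
}

interface BackgroundTask {
  sessionID: string;
  model?: string;
  status: "running" | "done" | "error"; // status values are an assumption
}

interface BackgroundManagerLike {
  launch(input: LaunchInput): Promise<BackgroundTask>;
  // Continue an existing agent session instead of starting a new one.
  resume(sessionID: string, prompt: string): Promise<BackgroundTask>;
}
```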

* feat(hooks): add task-resume-info hook

Add hook for injecting task resume information into tool outputs.
Enables seamless continuation of background agent sessions.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(hooks): add prometheus-md-only write restriction hook

Add hook that restricts Prometheus planner to writing only .md files
in the .sisyphus/ directory. Prevents the planner from implementing changes directly.
Includes test coverage.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(hooks): add start-work hook for Sisyphus workflow

Add hook for detecting /start-work command and triggering
orchestrator-sisyphus agent for plan execution.
Includes test coverage.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(hooks): add sisyphus-orchestrator hook

Add hook for orchestrating Sisyphus agent workflows:
- Coordinates task execution between agents
- Manages workflow state persistence
- Handles agent handoffs

Includes comprehensive test coverage.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(hooks): export new hooks in index

Export new hooks:
- createPrometheusMdOnlyHook
- createTaskResumeInfoHook
- createStartWorkHook
- createSisyphusOrchestratorHook

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(todo-enforcer): add skipAgents option and improve permission check

- Add skipAgents option to skip continuation for specified agents
- Default skip: Prometheus (Planner)
- Improve tool permission check to handle 'allow'/'deny' string values
- Add agent name detection from session messages

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(config): add categories, new agents and hooks to schema

Update Zod schema with:
- CategoryConfigSchema for task delegation categories
- CategoriesConfigSchema for user category overrides
- New agents: Metis (Plan Consultant)
- New hooks: prometheus-md-only, start-work, sisyphus-orchestrator
- New commands: start-work
- Agent category and skills fields

Includes schema test coverage.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(commands): add start-work command

Add /start-work command for executing Prometheus plans:
- start-work.ts: Command template for orchestrator-sisyphus
- commands.ts: Register command with agent binding
- types.ts: Add command name to type union

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(migration): add backup creation and category migration

- Create timestamped backup before migration writes
- Add migrateAgentConfigToCategory() for model→category migration
- Add shouldDeleteAgentConfig() for cleanup when matching defaults
- Add Prometheus and Metis to agent name map
- Comprehensive test coverage for new functionality

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(config-handler): add Sisyphus-Junior and orchestrator support

- Add Sisyphus-Junior agent creation
- Add orchestrator-sisyphus tool restrictions
- Rename Planner-Sisyphus to Prometheus (Planner)
- Use PROMETHEUS_SYSTEM_PROMPT and PROMETHEUS_PERMISSION

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(cli): add categories config for Antigravity auth

Add category model overrides for Gemini Antigravity authentication:
- visual: gemini-3-pro-high
- artistry: gemini-3-pro-high
- writing: gemini-3-pro-high

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(sisyphus): update to use sisyphus_task and add resume docs

- Update example code from background_task to sisyphus_task
- Add 'Resume Previous Agent' documentation section
- Remove model name from Oracle section heading
- Disable call_omo_agent tool for Sisyphus

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor: update tool references from background_task to sisyphus_task

Update all references across:
- agent-usage-reminder: Update AGENT_TOOLS and REMINDER_MESSAGE
- claude-code-hooks: Update comment
- call-omo-agent: Update constants and tool restrictions
- init-deep template: Update example code
- tools/index.ts: Export sisyphus_task, remove background_task

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(hook-message-injector): add ToolPermission type support

Add ToolPermission type union: boolean | 'allow' | 'deny' | 'ask'
Update StoredMessage and related interfaces for new permission format.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
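
The union from this commit, plus a small illustrative helper (the helper itself is not part of the codebase):

```typescript
type ToolPermission = boolean | "allow" | "deny" | "ask";

// Illustrative helper (not part of the codebase): booleans and "allow" grant access.
const isToolAllowed = (permission: ToolPermission | undefined): boolean =>
  permission === true || permission === "allow";
```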

* feat(main): wire up new tools, hooks and agents

Wire up in main plugin entry:
- Import and create sisyphus_task tool
- Import and wire taskResumeInfo, startWork, sisyphusOrchestrator hooks
- Update tool restrictions from background_task to sisyphus_task
- Pass userCategories to createSisyphusTask

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* docs: update documentation for Prometheus and new features

Update documentation across all language versions:
- Rename Planner-Sisyphus to Prometheus (Planner)
- Add Metis (Plan Consultant) agent documentation
- Add Categories section with usage examples
- Add sisyphus_task tool documentation
- Update AGENTS.md with new structure and complexity hotspots
- Update src/tools/AGENTS.md with sisyphus_task category

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* build: regenerate schema.json with new types

Update JSON schema with:
- New agents: Prometheus (Planner), Metis (Plan Consultant)
- New hooks: prometheus-md-only, start-work, sisyphus-orchestrator
- New commands: start-work
- New skills: frontend-ui-ux
- CategoryConfigSchema for task delegation
- Agent category and skills fields

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* skill

* feat: add toast notifications for task execution

- Display toast when background task starts in BackgroundManager
- Display toast when sisyphus_task sync task starts
- Wire up prometheus-md-only hook initialization in main plugin

This provides user feedback in the OpenCode TUI, where the task TUI is not visible.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(hooks): add read-only warning injection for Prometheus task delegation

When Prometheus (Planner) spawns subagents via task tools (sisyphus_task, task, call_omo_agent), a system directive is injected into the prompt to ensure subagents understand they are in a planning consultation context and must NOT modify files.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(hooks): add mandatory hands-on verification enforcement for orchestrated tasks

- sisyphus-orchestrator: Add verification reminder with tool matrix (playwright/interactive_bash/curl)

- start-work: Inject detailed verification workflow with deliverable-specific guidance

🤖 Generated with [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode) assistance

* docs(agents): clarify oracle and metis agent descriptions emphasizing read-only consultation roles

- Oracle: high-IQ reasoning specialist for debugging and architecture (read-only)
- Metis: updated description to align with oracle's consultation-only model
- Updated AGENTS.md with clarified agent responsibilities

* docs(orchestrator): emphasize oracle as read-only consultation agent

- Updated orchestrator-sisyphus agent descriptions
- Updated sisyphus-prompt-builder to highlight oracle's read-only consultation role
- Clarified that oracle provides high-IQ reasoning without write operations

* docs(refactor,root): update oracle consultation model in feature templates and root docs

- Updated refactor command template to emphasize oracle's read-only role
- Updated root AGENTS.md with oracle agent description emphasizing high-IQ debugging and architecture consultation
- Clarified oracle as non-write agent for design and debugging support

* feat(features): add TaskToastManager for consolidated task notifications

- Create task-toast-manager feature with singleton pattern

- Show running task list (newest first) when new task starts

- Track queued tasks status from ConcurrencyManager

- Integrate with BackgroundManager and sisyphus-task tool

🤖 Generated with [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode) assistance

* feat(hooks): add resume session_id to verification reminders for orchestrator subagent work

When subagent work fails verification, show the exact sisyphus_task(resume="...")
command with the session_id for immediate retry. Consolidates the verification workflow
across boulder and standalone modes.

* refactor(hooks): remove duplicate verification enforcement from start-work hook

Verification reminders are now centralized in sisyphus-orchestrator hook,
eliminating redundant code in start-work. The orchestrator hook handles all
verification messaging across both boulder and standalone modes.

* test(hooks): update prometheus-md-only test assertions and formatting

Updated test structure and assertions to match current output format.
Improved test clarity while maintaining complete coverage of markdown
validation and write restriction behavior.

* orchestrator

* feat(skills): add git-master skill for atomic commits and history management

- Add comprehensive git-master skill for commit, rebase, and history operations
- Implements atomic commit strategy with multi-file splitting rules
- Includes style detection, branch analysis, and history search capabilities
- Provides three modes: COMMIT, REBASE, HISTORY_SEARCH

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* docs(agents): add pre-delegation planning section to Sisyphus prompt

- Add SISYPHUS_PRE_DELEGATION_PLANNING section with mandatory declaration rules
- Implements 3-step decision tree: Identify → Select → Declare
- Forces explicit category/agent/skill declaration before every sisyphus_task call
- Includes mandatory 4-part format: Category/Agent, Reason, Skills, Expected Outcome
- Provides examples (CORRECT vs WRONG) and enforcement rules
- Follows prompt engineering best practices: Clear, CoT, Structured, Examples

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(tools): rename agent parameter to subagent_type in sisyphus_task

- Update parameter name from 'agent' to 'subagent_type' for consistency with call_omo_agent
- Update all references and error messages
- Remove deprecated 'agent' field from SisyphusTaskArgs interface
- Update git-master skill documentation to reflect parameter name change

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(agents): change orchestrator-sisyphus default model to claude-sonnet-4-5

- Update orchestrator-sisyphus model from opus-4-5 to sonnet-4-5 for better cost efficiency
- Keep Prometheus using opus-4-5 for planning tasks

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* refactor(config): make Prometheus model independent from plan agent config

- Prometheus no longer inherits model from plan agent configuration
- Fallback chain: session default model -> claude-opus-4-5
- Removes coupling between Prometheus and legacy plan agent settings

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* fix(momus): allow system directives in input validation

System directives (XML tags like <system-reminder>) are automatically
injected and should be ignored during input validation. Only reject
when there's actual user text besides the file path.

🤖 Generated with assistance of [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(prometheus): enhance high accuracy mode with mandatory Momus loop

When user requests high accuracy:
- Momus review loop is now mandatory until 'OKAY'
- No excuses allowed - must fix ALL issues
- No maximum retry limit - keep looping until approved
- Added clear explanation of what 'OKAY' means

🤖 Generated with assistance of [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(prometheus): enhance reference section with detailed guidance

References now include:
- Pattern references (existing code to follow)
- API/Type references (contracts to implement)
- Test references (testing patterns)
- Documentation references (specs and requirements)
- External references (libraries and frameworks)
- Explanation of WHY each reference matters

The executor has no interview context; references are its only guide.

🤖 Generated with assistance of [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)

* feat(git-master): add configurable commit footer and co-author options

Add git_master config with commit_footer and include_co_authored_by flags.
Users can disable Sisyphus attribution in commits via oh-my-opencode.json.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* feat(hooks): add single-task directive and system-reminder tags to orchestrator

Inject SINGLE_TASK_DIRECTIVE when orchestrator calls sisyphus_task to enforce
atomic task delegation. Wrap verification reminders in <system-reminder> tags
for better LLM attention.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* refactor: use ContextCollector for hook injection and remove unused background tools

Split changes:
- Replace injectHookMessage with ContextCollector.register() pattern for improved hook content injection
- Remove unused background task tools infrastructure (createBackgroundOutput, createBackgroundCancel)

🤖 Generated with assistance of OhMyOpenCode (https://github.com/code-yeongyu/oh-my-opencode)

* chore(context-injector): add debug logging for context injection tracing

Add DEBUG log statements to trace context injection flow:
- Log message transform hook invocations
- Log sessionID extraction from message info
- Log hasPending checks for context collector
- Log hook content registration to contextCollector

🤖 Generated with [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode) assistance

* fix(context-injector): prepend to user message instead of separate synthetic message

- Change from creating separate synthetic user message to prepending context
  directly to last user message's text part
- Separate synthetic messages were ignored by model (treated as previous turn)
- Prepending to clone ensures: UI shows original, model receives prepended content
- Update tests to reflect new behavior
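
A minimal sketch of the prepend-to-clone behaviour, assuming a simple text-part shape (the real message types come from the OpenCode SDK):

```typescript
// Assumed text-part shape; the real message types come from the OpenCode SDK.
interface TextPart { type: "text"; text: string }

// Prepend pending context to a clone of the last user message's parts, so the UI keeps
// showing the original text while the model receives the injected context.
function prependContext(parts: TextPart[], context: string): TextPart[] {
  if (!context) return parts;
  const clone = parts.map((part) => ({ ...part }));
  const firstText = clone.find((part) => part.type === "text");
  if (firstText) firstText.text = `${context}\n\n${firstText.text}`;
  return clone;
}
```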

* feat(prometheus): enforce mandatory todo registration on plan generation trigger

* fix(sisyphus-task): add proper error handling for sync mode and implement BackgroundManager.resume()

- Add try-catch for session.prompt() in sync mode with detailed error messages
- Sort assistant messages by time to get the most recent response
- Add 'No assistant response found' error handling
- Implement BackgroundManager.resume() method for task resumption
- Fix ConcurrencyManager type mismatch (model → concurrencyKey)

* docs(sisyphus-task): clarify resume usage with session_id and add when-to-use guidance

- Fix terminology: 'Task ID' → 'Session ID' in resume parameter docs
- Add clear 'WHEN TO USE resume' section with concrete scenarios
- Add example usage pattern in Sisyphus agent prompt
- Emphasize token savings and context preservation benefits
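
An illustrative resume call, following the usage style shown in the README (the placeholder session id is not a real value):

```
// Resume a previous background session instead of re-explaining the task from scratch
sisyphus_task(resume="<session_id>", prompt="Address the verification feedback from the last run")
```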

* fix(agents): block task/sisyphus_task/call_omo_agent from explore and librarian

Exploration agents should not spawn other agents - they are leaf nodes
in the agent hierarchy for codebase search only.

* refactor(oracle): change default model from GPT-5.2 to Claude Opus 4.5

* feat(oracle): change default model to claude-opus-4-5

* fix(sisyphus-orchestrator): check boulder session_ids before filtering sessions

Bug: continuation was not triggered even when boulder.json existed with
session_ids because the session filter ran BEFORE reading boulder state.

Fix: Read boulder state first, then include boulder sessions in the
allowed sessions for continuation.

* feat(task-toast): display skills and concurrency info in toast

- Add skills field to TrackedTask and LaunchInput types
- Show skills in task list message as [skill1, skill2]
- Add concurrency slot info [running/limit] in Running header
- Pass skills from sisyphus_task to toast manager (sync & background)
- Add unit tests for new toast features

* refactor(categories): rename high-iq to ultrabrain

* feat(sisyphus-task): add skillContent support to background agent launching

- Add optional skillContent field to LaunchInput type
- Implement buildSystemContent utility to combine skill and category prompts
- Update BackgroundManager to pass skillContent as system parameter
- Add comprehensive tests for skillContent optionality and buildSystemContent logic

🤖 Generated with assistance of oh-my-opencode
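
A sketch of what buildSystemContent might do based on this description; the actual implementation may differ:

```typescript
// Sketch: combine optional skill content and a category prompt into one system string.
// Returns undefined when there is nothing to inject, so callers can omit the parameter.
function buildSystemContent(skillContent?: string, categoryPrompt?: string): string | undefined {
  const parts = [skillContent, categoryPrompt].filter(
    (part): part is string => Boolean(part && part.trim()),
  );
  return parts.length > 0 ? parts.join("\n\n") : undefined;
}
```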

* Revert "refactor(tools): remove background-task tool"

This reverts commit 6dbc4c095badd400e024510554a42a0dc018ae42.

* refactor(sisyphus-task): rename background to run_in_background

* fix(oracle): use gpt-5.2 as default model

* test(sisyphus-task): add resume with background parameter tests

* feat(start-work): auto-select single incomplete plan and use system-reminder format

- Auto-select when only one incomplete plan exists among multiple
- Wrap multiple plans message in <system-reminder> tag
- Change prompt to 'ask user' style for agent guidance
- Add 'All Plans Complete' state handling

* feat(sisyphus-task): make skills parameter required

- Add validation for skills parameter (must be provided, use [] if empty)
- Update schema to remove .optional()
- Update type definition to make skills non-optional
- Fix existing tests to include skills parameter
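
Illustrative calls after this change, where skills must always be supplied (empty list when none apply):

```
sisyphus_task(category="quick", skills=[], prompt="Run the test suite and report any failures")
sisyphus_task(category="visual", skills=["frontend-ui-ux"], prompt="Polish the settings page layout")
```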

* fix: prevent session model change when sending notifications

- background-agent: use only parentModel, remove prevMessage fallback
- todo-continuation: don't pass model to preserve session's lastModel
- Remove unused imports (findNearestMessageWithFields, fs, path)

Root cause: session.prompt with model param changes session's lastModel

* fix(sisyphus-orchestrator): register handler in event loop for boulder continuation

* fix(sisyphus_task): use promptAsync for sync mode to preserve main session

- session.prompt() changes the active session, causing UI model switch
- Switch to promptAsync + polling to avoid main session state change
- Matches background-agent pattern for consistency

* fix(sisyphus-orchestrator): only trigger boulder continuation for orchestrator-sisyphus agent

* feat(background-agent): add parentAgent tracking to preserve agent context in background tasks

- Add parentAgent field to BackgroundTask, LaunchInput, and ResumeInput interfaces
- Pass parentAgent through background task manager to preserve agent identity
- Update sisyphus-orchestrator to set orchestrator-sisyphus agent context
- Add session tracking for background agents to prevent context loss
- Propagate agent context in background-task and sisyphus-task tools

This ensures background/subagent spawned tasks maintain proper agent context for notifications and continuity.

🤖 Generated with assistance of oh-my-opencode

* fix(antigravity): sync plugin.ts with PKCE-removed oauth.ts API

Remove decodeState import and update OAuth flow to use simple state
string comparison for CSRF protection instead of PKCE verifier.
Update exchangeCode calls to match new signature (code, redirectUri,
clientId, clientSecret).

* fix(hook-message-injector): preserve agent info with two-pass message lookup

findNearestMessageWithFields now has a fallback pass that returns
messages with ANY useful field (agent OR model) instead of requiring
ALL fields. This prevents parentAgent from being lost when stored
messages don't have complete model info.

* fix(sisyphus-task): use SDK session.messages API for parent agent lookup

Background task notifications were showing 'build' agent instead of the
actual parent agent (e.g., 'Sisyphus'). The hook-injected message storage
only contains limited info; the actual agent name is in the SDK session.

Changes:
- Add getParentAgentFromSdk() to query SDK messages API
- Look up agent from SDK first, fallback to hook-injected messages
- Ensures background tasks correctly preserve parent agent context

* fix(sisyphus-task): use ctx.agent directly for parentAgent

The tool context already provides the agent name via ctx.agent.
The previous SDK session.messages lookup was completely wrong -
SDK messages don't store agent info per message.

Removes useless getParentAgentFromSdk function.

* feat(prometheus-md-only): allow .md files anywhere, only block code files

Prometheus (Planner) can now write .md files anywhere, not just .sisyphus/.
Still blocks non-.md files (code) to enforce read-only planning for code.

This allows planners to write commentary and analysis in markdown format.

* Revert "feat(prometheus-md-only): allow .md files anywhere, only block code files"

This reverts commit c600111597591e1862696ee0b92051e587aa1a6b.

* fix(momus): accept bracket-style system directives in input validation

Momus was rejecting inputs with bracket-style directives like [analyze-mode]
and [SYSTEM DIRECTIVE...] because it only recognized XML-style tags.

Now accepts:
- XML tags: <system-reminder>, <context>, etc.
- Bracket blocks: [analyze-mode], [SYSTEM DIRECTIVE...], [SYSTEM REMINDER...], etc.

* fix(sisyphus-orchestrator): inject delegation warning before Write/Edit outside .sisyphus

- Add ORCHESTRATOR_DELEGATION_REQUIRED strong warning in tool.execute.before
- Fix tool.execute.after filePath detection using pendingFilePaths Map
- The `before` handler stores filePath by callID; the `after` handler retrieves and deletes it
- Fixes bug where output.metadata.filePath was undefined

* docs: add orchestration, category-skill, and CLI guides

* fix(cli): correct category names in Antigravity migration (visual → visual-engineering)

* fix(sisyphus-task): prevent infinite polling when session removed from status

* fix(tests): update outdated test expectations

- constants.test.ts: Update endpoint count (2→3) and token buffer (50min→60sec)
- token.test.ts: Update expiry tests to use 60-second buffer
- sisyphus-orchestrator: Add fallback to output.metadata.filePath when callID missing

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Author: YeonGyu-Kim
Date: 2026-01-09 02:24:43 +09:00 (committed by GitHub)
Parent: 8394926fe1
Commit: 768ecd928b
92 changed files with 13771 additions and 672 deletions


@@ -6,16 +6,16 @@
## OVERVIEW
OpenCode plugin implementing Claude Code/AmpCode features. Multi-model agent orchestration (GPT-5.2, Claude, Gemini, Grok), LSP tools (11), AST-Grep search, MCP integrations (context7, websearch_exa, grep_app). "oh-my-zsh" for OpenCode.
## STRUCTURE
```
oh-my-opencode/
├── src/
│   ├── agents/      # AI agents (7): Sisyphus, oracle, librarian, explore, frontend, document-writer, multimodal-looker
│   ├── hooks/       # 22 lifecycle hooks - see src/hooks/AGENTS.md
│   ├── tools/       # LSP, AST-Grep, Grep, Glob, session mgmt - see src/tools/AGENTS.md
│   ├── features/    # Claude Code compat layer - see src/features/AGENTS.md
│   ├── auth/        # Google Antigravity OAuth - see src/auth/AGENTS.md
│   ├── shared/      # Cross-cutting utilities - see src/shared/AGENTS.md
@@ -24,6 +24,7 @@ oh-my-opencode/
│   ├── config/      # Zod schema, TypeScript types
│   └── index.ts     # Main plugin entry (464 lines)
├── script/          # build-schema.ts, publish.ts, generate-changelog.ts
├── assets/          # JSON schema
└── dist/            # Build output (ESM + .d.ts)
```
@@ -31,13 +32,24 @@ oh-my-opencode/
| Task | Location | Notes |
|------|----------|-------|
| Add agent | `src/agents/` | Create .ts, add to builtinAgents in index.ts, update types.ts |
| Add hook | `src/hooks/` | Create dir with createXXXHook(), export from index.ts |
| Add tool | `src/tools/` | Dir with index/types/constants/tools.ts, add to builtinTools |
| Add MCP | `src/mcp/` | Create config, add to index.ts and types.ts |
| Add skill | `src/features/builtin-skills/` | Create skill dir with SKILL.md |
| LSP behavior | `src/tools/lsp/` | client.ts (connection), tools.ts (handlers) |
| AST-Grep | `src/tools/ast-grep/` | napi.ts for @ast-grep/napi binding |
| Google OAuth | `src/auth/antigravity/` | OAuth plugin for Google/Gemini models |
| Config schema | `src/config/schema.ts` | Zod schema, run `bun run build:schema` after changes |
| Claude Code compat | `src/features/claude-code-*-loader/` | Command, skill, agent, mcp loaders |
| Background agents | `src/features/background-agent/` | manager.ts for task management |
| Skill MCP | `src/features/skill-mcp-manager/` | MCP servers embedded in skills |
| Interactive terminal | `src/tools/interactive-bash/` | tmux session management |
| CLI installer | `src/cli/install.ts` | Interactive TUI installation |
| Doctor checks | `src/cli/doctor/checks/` | Health checks for environment |
| Shared utilities | `src/shared/` | Cross-cutting utilities |
| Slash commands | `src/hooks/auto-slash-command/` | Auto-detect and execute `/command` patterns |
| Ralph Loop | `src/hooks/ralph-loop/` | Self-referential dev loop until completion |
## TDD (Test-Driven Development)
@@ -64,7 +76,7 @@ oh-my-opencode/
## CONVENTIONS
- **Package manager**: Bun only (`bun run`, `bun build`, `bunx`)
- **Types**: bun-types (not @types/node)
- **Build**: `bun build` (ESM) + `tsc --emitDeclarationOnly`
- **Exports**: Barrel pattern in index.ts; explicit named exports for tools/hooks
@@ -72,26 +84,41 @@ oh-my-opencode/
- **Testing**: BDD comments `#given`, `#when`, `#then` (same as AAA); TDD workflow (RED-GREEN-REFACTOR)
- **Temperature**: 0.1 for code agents, max 0.3
## ANTI-PATTERNS (THIS PROJECT)
- **npm/yarn**: Use bun exclusively
- **@types/node**: Use bun-types
- **Bash file ops**: Never mkdir/touch/rm/cp/mv for file creation in code
- **Direct bun publish**: GitHub Actions workflow_dispatch only (OIDC provenance)
- **Local version bump**: Version managed by CI workflow
- **Year 2024**: NEVER use 2024 in code/prompts (use current year)
- **Rush completion**: Never mark tasks complete without verification
- **Over-exploration**: Stop searching when sufficient context found
- **High temperature**: Don't use >0.3 for code-related agents
- **Broad tool access**: Prefer explicit `include` over unrestricted access
- **Sequential agent calls**: Use `sisyphus_task` for parallel execution
- **Heavy PreToolUse logic**: Slows every tool call
- **Self-planning for complex tasks**: Spawn planning agent (Prometheus) instead
## UNIQUE STYLES
- **Platform**: Union type `"darwin" | "linux" | "win32" | "unsupported"`
- **Optional props**: Extensive `?` for optional interface properties
- **Flexible objects**: `Record<string, unknown>` for dynamic configs
- **Error handling**: Consistent try/catch with async/await
- **Agent tools**: `tools: { include: [...] }` or `tools: { exclude: [...] }`
- **Temperature**: Most agents use `0.1` for consistency
- **Hook naming**: `createXXXHook` function convention
- **Factory pattern**: Components created via `createXXX()` functions
## AGENT MODELS
| Agent | Default Model | Purpose |
|-------|-------|---------|
| Sisyphus | anthropic/claude-opus-4-5 | Primary orchestrator |
| oracle | openai/gpt-5.2 | Read-only consultation. High-IQ debugging, architecture |
| librarian | anthropic/claude-sonnet-4-5 | Multi-repo analysis, docs |
| explore | opencode/grok-code | Fast codebase exploration |
| frontend-ui-ux-engineer | google/gemini-3-pro-preview | UI generation |
| document-writer | google/gemini-3-pro-preview | Technical docs |
| multimodal-looker | google/gemini-3-flash | PDF/image analysis |
@@ -102,7 +129,8 @@ oh-my-opencode/
bun run typecheck     # Type check
bun run build         # ESM + declarations + schema
bun run rebuild       # Clean + Build
bun run build:schema  # Schema only
bun test              # Run tests
```
## DEPLOYMENT
@@ -110,10 +138,16 @@ bun test # Run tests (380+)
**GitHub Actions workflow_dispatch only**
1. Never modify package.json version locally
2. Commit & push changes
3. Trigger `publish` workflow: `gh workflow run publish -f bump=patch`
**Critical**: Never `bun publish` directly. Never bump version locally.
## CI PIPELINE
- **ci.yml**: Parallel test/typecheck, build verification, auto-commit schema on master, rolling `next` draft release
- **publish.yml**: Manual workflow_dispatch, version bump, changelog, OIDC npm publish
- **sisyphus-agent.yml**: Agent-in-CI for automated issue handling via `@sisyphus-dev-ai` mentions
## COMPLEXITY HOTSPOTS
@@ -123,13 +157,18 @@ CI auto-commits schema changes on master, maintains rolling `next` draft release
| `src/cli/config-manager.ts` | 669 | JSONC parsing, env detection |
| `src/auth/antigravity/fetch.ts` | 621 | Token refresh, URL rewriting |
| `src/tools/lsp/client.ts` | 611 | LSP protocol, JSON-RPC |
| `src/auth/antigravity/response.ts` | 598 | Response transformation, streaming |
| `src/auth/antigravity/thinking.ts` | 571 | Thinking block extraction/transformation |
| `src/hooks/anthropic-context-window-limit-recovery/executor.ts` | 564 | Multi-stage recovery |
| `src/agents/sisyphus.ts` | 504 | Orchestrator prompt |
## NOTES
- **Testing**: Bun native test (`bun test`), BDD-style `#given/#when/#then`, 360+ tests
- **OpenCode**: Requires >= 1.0.150
- **Multi-lang docs**: README.md (EN), README.ko.md (KO), README.ja.md (JA), README.zh-cn.md (ZH-CN)
- **Config**: `~/.config/opencode/oh-my-opencode.json` (user) or `.opencode/oh-my-opencode.json` (project)
- **Trusted deps**: @ast-grep/cli, @ast-grep/napi, @code-yeongyu/comment-checker
- **JSONC support**: Config files support comments (`// comment`, `/* block */`) and trailing commas
- **Claude Code Compat**: Full compatibility layer for settings.json hooks, commands, skills, agents, MCPs
- **Skill MCP**: Skills can embed MCP server configs in YAML frontmatter


@@ -862,7 +862,8 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
- **Sisyphus**: プライマリオーケストレーターエージェント (Claude Opus 4.5)
- **OpenCode-Builder**: OpenCode のデフォルトビルドエージェント(SDK 制限により名前変更、デフォルトで無効)
- **Prometheus (Planner)**: OpenCode のデフォルトプランエージェント + work-planner 方法論(デフォルトで有効)
- **Metis (Plan Consultant)**: 隠された要件と AI 失敗ポイントを特定する事前計画分析エージェント
**設定オプション:**
@@ -911,8 +912,11 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
"OpenCode-Builder": {
  "model": "anthropic/claude-opus-4"
},
"Prometheus (Planner)": {
  "model": "openai/gpt-5.2"
},
"Metis (Plan Consultant)": {
  "model": "anthropic/claude-sonnet-4-5"
}
}
}
@@ -922,8 +926,8 @@ Oh My OpenCode は以下の場所からフックを読み込んで実行しま
| --------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `disabled` | `false` | `true` の場合、すべての Sisyphus オーケストレーションを無効化し、元の build/plan をプライマリとして復元します。 |
| `default_builder_enabled` | `false` | `true` の場合、OpenCode-Builder エージェントを有効化します(OpenCode build と同じ、SDK 制限により名前変更)。デフォルトでは無効です。 |
| `planner_enabled` | `true` | `true` の場合、Prometheus (Planner) エージェントを有効化します(work-planner 方法論を含む)。デフォルトで有効です。 |
| `replace_plan` | `true` | `true` の場合、デフォルトのプランエージェントをサブエージェントモードに降格させます。`false` に設定すると、Prometheus (Planner) とデフォルトのプランの両方を利用できます。 |
### Background Tasks


@@ -127,6 +127,7 @@ No stupid token consumption massive subagents here. No bloat tools here.
- [Built-in Skills](#built-in-skills)
- [Sisyphus Agent](#sisyphus-agent)
- [Background Tasks](#background-tasks)
- [Categories](#categories)
- [Hooks](#hooks)
- [MCPs](#mcps)
- [LSP](#lsp)
@@ -553,6 +554,7 @@ Hand your best tools to your best colleagues. Now they can properly refactor, na
- **ast_grep_search**: AST-aware code pattern search (25 languages)
- **ast_grep_replace**: AST-aware code replacement
- **call_omo_agent**: Spawn specialized explore/librarian agents. Supports `run_in_background` parameter for async execution.
- **sisyphus_task**: Category-based task delegation with specialized agents. Supports pre-configured categories (visual, business-logic) or direct agent targeting. Use `background_output` to retrieve results and `background_cancel` to cancel tasks. See [Categories](#categories).
#### Session Management
@@ -915,6 +917,7 @@ Available agents: `oracle`, `librarian`, `explore`, `frontend-ui-ux-engineer`, `
Oh My OpenCode includes built-in skills that provide additional capabilities:
- **playwright**: Browser automation with Playwright MCP. Use for web scraping, testing, screenshots, and browser interactions.
- **git-master**: Git expert for atomic commits, rebase/squash, and history search (blame, bisect, log -S). STRONGLY RECOMMENDED: Use with `sisyphus_task(category='quick', skills=['git-master'], ...)` to save context.
Disable built-in skills via `disabled_skills` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:
@@ -924,7 +927,25 @@ Disable built-in skills via `disabled_skills` in `~/.config/opencode/oh-my-openc
}
```
Available built-in skills: `playwright`, `git-master`
### Git Master
Configure git-master skill behavior:
```json
{
"git_master": {
"commit_footer": true,
"include_co_authored_by": true
}
}
```
| Option | Default | Description |
| ------ | ------- | ----------- |
| `commit_footer` | `true` | Adds "Ultraworked with Sisyphus" footer to commit messages. |
| `include_co_authored_by` | `true` | Adds `Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>` trailer to commits. |
### Sisyphus Agent
@@ -932,7 +953,8 @@ When enabled (default), Sisyphus provides a powerful orchestrator with optional
- **Sisyphus**: Primary orchestrator agent (Claude Opus 4.5)
- **OpenCode-Builder**: OpenCode's default build agent, renamed due to SDK limitations (disabled by default)
- **Prometheus (Planner)**: OpenCode's default plan agent with work-planner methodology (enabled by default)
- **Metis (Plan Consultant)**: Pre-planning analysis agent that identifies hidden requirements and AI failure points
**Configuration Options:**
@@ -981,19 +1003,22 @@ You can also customize Sisyphus agents like other agents:
"OpenCode-Builder": {
  "model": "anthropic/claude-opus-4"
},
"Prometheus (Planner)": {
  "model": "openai/gpt-5.2"
},
"Metis (Plan Consultant)": {
  "model": "anthropic/claude-sonnet-4-5"
}
}
}
```
| Option | Default | Description |
| --------------------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `disabled` | `false` | When `true`, disables all Sisyphus orchestration and restores original build/plan as primary. |
| `default_builder_enabled` | `false` | When `true`, enables OpenCode-Builder agent (same as OpenCode build, renamed due to SDK limitations). Disabled by default. |
| `planner_enabled` | `true` | When `true`, enables Prometheus (Planner) agent with work-planner methodology. Enabled by default. |
| `replace_plan` | `true` | When `true`, demotes default plan agent to subagent mode. Set to `false` to keep both Prometheus (Planner) and default plan available. |
### Background Tasks
@@ -1029,6 +1054,50 @@ Configure concurrency limits for background agent tasks. This controls how many
- Allow more concurrent tasks for fast/cheap models (e.g., Gemini Flash)
- Respect provider rate limits by setting provider-level caps
### Categories
Categories enable domain-specific task delegation via the `sisyphus_task` tool. Each category pre-configures a specialized `Sisyphus-Junior-{category}` agent with optimized model settings and prompts.
**Default Categories:**
| Category | Model | Description |
|----------|-------|-------------|
| `visual` | `google/gemini-3-pro-preview` | Frontend, UI/UX, design-focused tasks. High creativity (temp 0.7). |
| `business-logic` | `openai/gpt-5.2` | Backend logic, architecture, strategic reasoning. Low creativity (temp 0.1). |
**Usage:**
```
// Via sisyphus_task tool
sisyphus_task(category="visual", prompt="Create a responsive dashboard component")
sisyphus_task(category="business-logic", prompt="Design the payment processing flow")
// Or target a specific agent directly
sisyphus_task(agent="oracle", prompt="Review this architecture")
```
**Custom Categories:**
Add custom categories in `oh-my-opencode.json`:
```json
{
"categories": {
"data-science": {
"model": "anthropic/claude-sonnet-4-5",
"temperature": 0.2,
"prompt_append": "Focus on data analysis, ML pipelines, and statistical methods."
},
"visual": {
"model": "google/gemini-3-pro-high",
"prompt_append": "Use shadcn/ui components and Tailwind CSS."
}
}
}
```
Each category supports: `model`, `temperature`, `top_p`, `maxTokens`, `thinking`, `reasoningEffort`, `textVerbosity`, `tools`, `prompt_append`.
### Hooks
Disable specific built-in hooks via `disabled_hooks` in `~/.config/opencode/oh-my-opencode.json` or `.opencode/oh-my-opencode.json`:


@@ -863,7 +863,8 @@ Agent 爽了,你自然也爽。但我还想直接让你爽。
- **Sisyphus**:主编排 Agent(Claude Opus 4.5)
- **OpenCode-Builder**:OpenCode 默认构建 Agent(因 SDK 限制仅改名,默认禁用)
- **Prometheus (Planner)**:OpenCode 默认计划 Agent + work-planner 方法论(默认启用)
- **Metis (Plan Consultant)**:识别隐藏需求和 AI 失败点的预规划分析 Agent
**配置选项:**
@@ -912,8 +913,11 @@ Sisyphus Agent 也能自定义:
"OpenCode-Builder": {
  "model": "anthropic/claude-opus-4"
},
"Prometheus (Planner)": {
  "model": "openai/gpt-5.2"
},
"Metis (Plan Consultant)": {
  "model": "anthropic/claude-sonnet-4-5"
}
}
}
@@ -923,8 +927,8 @@ Sisyphus Agent 也能自定义:
| --------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| `disabled` | `false` | 设为 `true` 就禁用所有 Sisyphus 编排,恢复原来的 build/plan。 |
| `default_builder_enabled` | `false` | 设为 `true` 就启用 OpenCode-Builder Agent(与 OpenCode build 相同,因 SDK 限制仅改名)。默认禁用。 |
| `planner_enabled` | `true` | 设为 `true` 就启用 Prometheus (Planner) Agent(含 work-planner 方法论)。默认启用。 |
| `replace_plan` | `true` | 设为 `true` 就把默认计划 Agent 降级为子 Agent 模式。设为 `false` 可以同时保留 Prometheus (Planner) 和默认计划。 |
### Background Tasks(后台任务)


@@ -26,7 +26,8 @@
"explore", "explore",
"frontend-ui-ux-engineer", "frontend-ui-ux-engineer",
"document-writer", "document-writer",
"multimodal-looker" "multimodal-looker",
"Metis (Plan Consultant)"
] ]
} }
}, },
@@ -35,7 +36,9 @@
"items": { "items": {
"type": "string", "type": "string",
"enum": [ "enum": [
"playwright" "playwright",
"frontend-ui-ux",
"git-master"
] ]
} }
}, },
@@ -71,7 +74,10 @@
"compaction-context-injector", "compaction-context-injector",
"claude-code-hooks", "claude-code-hooks",
"auto-slash-command", "auto-slash-command",
"edit-error-recovery" "edit-error-recovery",
"prometheus-md-only",
"start-work",
"sisyphus-orchestrator"
] ]
} }
}, },
@@ -80,7 +86,8 @@
"items": { "items": {
"type": "string", "type": "string",
"enum": [ "enum": [
"init-deep" "init-deep",
"start-work"
] ]
} }
}, },
@@ -93,6 +100,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -207,6 +223,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -321,6 +346,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -435,6 +469,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -543,12 +586,144 @@
} }
} }
}, },
"Planner-Sisyphus": { "Prometheus (Planner)": {
"type": "object", "type": "object",
"properties": { "properties": {
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": {
"type": "number",
"minimum": 0,
"maximum": 2
},
"top_p": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"prompt": {
"type": "string"
},
"prompt_append": {
"type": "string"
},
"tools": {
"type": "object",
"propertyNames": {
"type": "string"
},
"additionalProperties": {
"type": "boolean"
}
},
"disable": {
"type": "boolean"
},
"description": {
"type": "string"
},
"mode": {
"type": "string",
"enum": [
"subagent",
"primary",
"all"
]
},
"color": {
"type": "string",
"pattern": "^#[0-9A-Fa-f]{6}$"
},
"permission": {
"type": "object",
"properties": {
"edit": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
"bash": {
"anyOf": [
{
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
{
"type": "object",
"propertyNames": {
"type": "string"
},
"additionalProperties": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
}
}
]
},
"webfetch": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
"doom_loop": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
},
"external_directory": {
"type": "string",
"enum": [
"ask",
"allow",
"deny"
]
}
}
}
}
},
"Metis (Plan Consultant)": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -663,6 +838,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -777,6 +961,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -891,6 +1084,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -1005,6 +1207,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -1119,6 +1330,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -1233,6 +1453,15 @@
"model": { "model": {
"type": "string" "type": "string"
}, },
"category": {
"type": "string"
},
"skills": {
"type": "array",
"items": {
"type": "string"
}
},
"temperature": { "temperature": {
"type": "number", "type": "number",
"minimum": 0, "minimum": 0,
@@ -1343,6 +1572,82 @@
} }
} }
}, },
"categories": {
"type": "object",
"propertyNames": {
"type": "string"
},
"additionalProperties": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"temperature": {
"type": "number",
"minimum": 0,
"maximum": 2
},
"top_p": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"maxTokens": {
"type": "number"
},
"thinking": {
"type": "object",
"properties": {
"type": {
"type": "string",
"enum": [
"enabled",
"disabled"
]
},
"budgetTokens": {
"type": "number"
}
},
"required": [
"type"
]
},
"reasoningEffort": {
"type": "string",
"enum": [
"low",
"medium",
"high"
]
},
"textVerbosity": {
"type": "string",
"enum": [
"low",
"medium",
"high"
]
},
"tools": {
"type": "object",
"propertyNames": {
"type": "string"
},
"additionalProperties": {
"type": "boolean"
}
},
"prompt_append": {
"type": "string"
}
},
"required": [
"model"
]
}
},
"claude_code": { "claude_code": {
"type": "object", "type": "object",
"properties": { "properties": {
@@ -1695,6 +2000,19 @@
"type": "boolean" "type": "boolean"
} }
} }
},
"git_master": {
"type": "object",
"properties": {
"commit_footer": {
"default": true,
"type": "boolean"
},
"include_co_authored_by": {
"default": true,
"type": "boolean"
}
}
} }
} }
} }

View File

@@ -1,152 +0,0 @@
# Windows Crash Investigation Timeline
## Executive Summary
**Initial Hypothesis**: Bun.spawn/ShellInterpreter GC bug causing crashes on Windows
**Actual Root Cause**: Conflict between oh-my-opencode's session-notification and external notification plugins (specifically `@mohak34/opencode-notifier`)
**Evidence**: User removed `@mohak34/opencode-notifier` plugin → crashes stopped immediately. The release version of oh-my-opencode (with original Bun.spawn code) works fine when used alone.
---
## Timeline
### Phase 1: Initial Crash Reports (Early January 2026)
**Symptoms:**
- Windows users experiencing crashes after extended oh-my-opencode usage
- Stack traces pointed to Bun's ShellInterpreter finalizer:
```
Segmentation fault at address 0x337081E00E0
- interpreter.zig:1239: deinitFromFinalizer
- ZigGeneratedClasses.zig:19925: ShellInterpreterClass__finalize
```
**Initial Analysis:**
- Similar to known Bun issues: oven-sh/bun#23177, oven-sh/bun#24368
- Focus on `ctx.$` (Bun shell template literals) in session-notification.ts
### Phase 2: PR #543 - Wrong Fix Merged (January 6, 2026)
**PR**: [#543 - fix(session-notification): avoid Bun shell GC crash on Windows](https://github.com/code-yeongyu/oh-my-opencode/pull/543)
**Changes Made:**
- Replaced `ctx.$` with `node:child_process.spawn` in `session-notification.ts`
- Updated tests to mock spawn instead of ctx.$
**Assumption**: The ShellInterpreter GC bug was causing crashes when notification commands were executed.
**Status**: ❌ MERGED (reverted in this PR)
### Phase 3: Continued Investigation - Debug Tracing (January 6-7, 2026)
Crashes continued after PR #543. Added debug tracing system (PR #571) to capture what happens before crashes.
**PR #571**: [feat(debug): add comprehensive crash tracing system](https://github.com/code-yeongyu/oh-my-opencode/pull/571)
Tracing revealed LSP ENOENT errors, leading to:
**PR #572**: [fix(lsp): add resilient handling for missing LSP server binaries](https://github.com/code-yeongyu/oh-my-opencode/pull/572)
### Phase 4: More Bun.spawn Changes (January 7, 2026) - WRONG PATH
Based on the assumption that Bun.spawn was the issue, additional files were modified locally:
- `src/hooks/session-notification-utils.ts`
- `src/hooks/comment-checker/cli.ts`
- `src/hooks/comment-checker/downloader.ts`
- `src/hooks/interactive-bash-session/index.ts`
**Status**: ❌ REVERTED (never committed)
### Phase 5: Root Cause Discovery (January 7, 2026)
**Critical Observation by User:**
> "I removed `@mohak34/opencode-notifier` and crashes stopped. The release version with Bun.spawn works perfectly fine."
**Key Evidence:**
1. Removing ONLY the notifier plugin fixed crashes
2. Release version (before PR #543) works fine for user and most others
3. No widespread complaints from other users about crashes
4. PR #543 was based on superficial pattern matching with Bun issues
---
## The Real Root Cause: Notification Plugin Conflict
### Two Plugins, Same Event
Both plugins listen to `session.idle` and send notifications:
| Aspect | oh-my-opencode | opencode-notifier |
|--------|---------------|-------------------|
| **Event** | `session.idle` | `session.idle` |
| **Delay** | 1.5s confirmation delay | Immediate |
| **Windows Notification** | PowerShell + Windows.UI.Notifications API | `node-notifier` → WindowsToaster → SnoreToast.exe |
| **Sound** | PowerShell Media.SoundPlayer | PowerShell Media.SoundPlayer |
| **Process spawning** | `ctx.$` (Bun shell) | `node:child_process` |
### Conflict Points
1. **Different notification systems fighting**:
- oh-my-opencode: Direct PowerShell → Windows.UI.Notifications
- opencode-notifier: SnoreToast.exe binary via node-notifier
2. **Same app identity**: Both register with "OpenCode" as the toast notifier app
3. **Concurrent execution**: Both trigger within milliseconds of each other on `session.idle`
4. **Resource contention**: Windows Toast API may not handle concurrent registrations gracefully
### Why It Wasn't Bun.spawn
- Both plugins use different spawning methods - this didn't matter
- Release version works fine when used alone
- Most users don't have this issue (most don't use both plugins)
- The stack trace pointed to ShellInterpreter, but correlation ≠ causation
---
## The Fix
### What This PR Does
1. **Reverts PR #543**: Restores original `ctx.$` usage (it was never the problem)
2. **Adds conflict detection**:
- Scans `opencode.json` for known notification plugins
- Known plugins: `opencode-notifier`, `@mohak34/opencode-notifier`
3. **Auto-disables on conflict**:
- When external notifier detected, skips creating session-notification hook
- Logs clear warning explaining why
4. **Config override**:
```json
{
"notification": {
"force_enable": true
}
}
```
Users can force-enable oh-my-opencode's notification if they want.
---
## Lessons Learned
1. **Correlation ≠ Causation**: Stack traces can be misleading; investigate root cause thoroughly
2. **Test with user's exact environment**: The crash only happened with specific plugin combination
3. **Challenge assumptions**: "Bun.spawn is buggy" was accepted too quickly without verifying
4. **Evidence-based debugging**: User's discovery (removing notifier = no crash) was the key evidence
---
## Related Links
- PR #543 (merged, reverted in this PR): https://github.com/code-yeongyu/oh-my-opencode/pull/543
- PR #571 (open): https://github.com/code-yeongyu/oh-my-opencode/pull/571
- PR #572 (open): https://github.com/code-yeongyu/oh-my-opencode/pull/572
- opencode-notifier: https://github.com/mohak34/opencode-notifier
- Bun issues referenced (not actually the cause):
- https://github.com/oven-sh/bun/issues/23177
- https://github.com/oven-sh/bun/issues/24368

View File

@@ -0,0 +1,200 @@
# Category & Skill System Guide
This document provides a comprehensive guide to the **Category** and **Skill** systems, which form the extensibility core of Oh-My-OpenCode.
## 1. Overview
Instead of delegating everything to a single AI agent, it's far more efficient to invoke **specialists** tailored to the nature of the task.
- **Category**: "What kind of work is this?" (determines model, temperature, prompt mindset)
- **Skill**: "What tools and knowledge are needed?" (injects specialized knowledge, MCP tools, workflows)
By combining these two concepts, you can generate optimal agents through `sisyphus_task`.
---
## 2. Category System
A Category is an agent configuration preset optimized for specific domains.
### Available Built-in Categories
| Category | Optimal Model | Characteristics | Use Cases |
|----------|---------------|-----------------|-----------|
| `visual-engineering` | `gemini-3-pro` | High creativity (Temp 0.7) | Frontend, UI/UX, animations, styling |
| `ultrabrain` | `gpt-5.2` | Maximum logical reasoning (Temp 0.1) | Architecture design, complex business logic, debugging |
| `artistry` | `gemini-3-pro` | Artistic (Temp 0.9) | Creative ideation, design concepts, storytelling |
| `quick` | `claude-haiku` | Fast (Temp 0.3) | Simple tasks, refactoring, script writing |
| `writing` | `gemini-3-flash` | Natural flow (Temp 0.5) | Documentation, technical blogs, README writing |
| `most-capable` | `claude-opus` | High performance (Temp 0.1) | Extremely difficult complex tasks |
### Usage
Specify the `category` parameter when invoking the `sisyphus_task` tool.
```typescript
sisyphus_task(
category="visual-engineering",
prompt="Add a responsive chart component to the dashboard page"
)
```
### Sisyphus-Junior (Delegated Executor)
When you use a Category, a special agent called **Sisyphus-Junior** performs the work.
- **Characteristic**: Cannot **re-delegate** tasks to other agents.
- **Purpose**: Prevents infinite delegation loops and ensures focus on the assigned task.
---
## 3. Skill System
A Skill is a mechanism that injects **specialized knowledge (Context)** and **tools (MCP)** for specific domains into agents.
### Built-in Skills
1. **`git-master`**
- **Capabilities**: Git expert. Detects commit styles, splits atomic commits, formulates rebase strategies.
- **MCP**: None (uses Git commands)
- **Usage**: Essential for commits, history searches, branch management.
2. **`playwright`**
- **Capabilities**: Browser automation. Web page testing, screenshots, scraping.
- **MCP**: `@playwright/mcp` (auto-executed)
- **Usage**: For post-implementation UI verification, E2E test writing.
3. **`frontend-ui-ux`**
- **Capabilities**: Injects designer mindset. Color, typography, motion guidelines.
- **Usage**: For aesthetic UI work beyond simple implementation.
### Usage
Add desired skill names to the `skills` array.
```typescript
sisyphus_task(
category="quick",
skills=["git-master"],
prompt="Commit current changes. Follow commit message style."
)
```
### Skill Customization (SKILL.md)
You can add custom skills directly to `.opencode/skills/` in your project root or `~/.claude/skills/` in your home directory.
**Example: `.opencode/skills/my-skill/SKILL.md`**
```markdown
---
name: my-skill
description: My special custom skill
mcp:
my-mcp:
command: npx
args: ["-y", "my-mcp-server"]
---
# My Skill Prompt
This content will be injected into the agent's system prompt.
...
```
---
## 4. Combination Strategies (Combos)
You can create powerful specialized agents by combining Categories and Skills.
### 🎨 The Designer (UI Implementation)
- **Category**: `visual-engineering`
- **Skills**: `["frontend-ui-ux", "playwright"]`
- **Effect**: Implements aesthetic UI and verifies rendering results directly in browser.
### 🏗️ The Architect (Design Review)
- **Category**: `ultrabrain`
- **Skills**: `[]` (pure reasoning)
- **Effect**: Leverages GPT-5.2's logical reasoning for in-depth system architecture analysis.
### ⚡ The Maintainer (Quick Fixes)
- **Category**: `quick`
- **Skills**: `["git-master"]`
- **Effect**: Uses cost-effective models to quickly fix code and generate clean commits.
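As a concrete sketch, the Designer combo above can be invoked in the same style as the earlier examples (the prompt text here is illustrative):

```typescript
sisyphus_task(
  category="visual-engineering",
  skills=["frontend-ui-ux", "playwright"],
  prompt="Polish the dashboard header styling, then verify the rendered result in the browser"
)
```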
---
## 5. sisyphus_task Prompt Guide
When delegating, **clear and specific** prompts are essential. Include these 7 elements:
1. **TASK**: What needs to be done? (single objective)
2. **EXPECTED OUTCOME**: What is the deliverable?
3. **REQUIRED SKILLS**: Which skills should be used?
4. **REQUIRED TOOLS**: Which tools must be used? (whitelist)
5. **MUST DO**: What must be done (constraints)
6. **MUST NOT DO**: What must never be done
7. **CONTEXT**: File paths, existing patterns, reference materials
**Bad Example**:
> "Fix this"
**Good Example**:
> **TASK**: Fix mobile layout breaking issue in `LoginButton.tsx`
> **CONTEXT**: `src/components/LoginButton.tsx`, using Tailwind CSS
> **MUST DO**: Change flex-direction at `md:` breakpoint
> **MUST NOT DO**: Modify existing desktop layout
> **EXPECTED**: Buttons align vertically on mobile
---
## 6. Configuration Guide (oh-my-opencode.json)
You can fine-tune categories in `oh-my-opencode.json`.
### Category Configuration Schema (CategoryConfig)
| Field | Type | Description |
|-------|------|-------------|
| `model` | string | AI model ID to use (e.g., `anthropic/claude-opus-4-5`) |
| `temperature` | number | Creativity level (0.0 ~ 2.0). Lower is more deterministic. |
| `prompt_append` | string | Content to append to system prompt when this category is selected |
| `thinking` | object | Thinking model configuration (`{ type: "enabled", budgetTokens: 16000 }`) |
| `tools` | object | Tool usage control (disable with `{ "tool_name": false }`) |
| `maxTokens` | number | Maximum response token count |
### Example Configuration
```jsonc
{
"categories": {
// 1. Define new custom category
"korean-writer": {
"model": "google/gemini-3-flash-preview",
"temperature": 0.5,
"prompt_append": "You are a Korean technical writer. Maintain a friendly and clear tone."
},
// 2. Override existing category (change model)
"visual-engineering": {
"model": "openai/gpt-5.2", // Can change model
"temperature": 0.8
},
// 3. Configure thinking model and restrict tools
"deep-reasoning": {
"model": "anthropic/claude-opus-4-5",
"thinking": {
"type": "enabled",
"budgetTokens": 32000
},
"tools": {
"websearch_web_search_exa": false // Disable web search
}
}
},
// Disable skills
"disabled_skills": ["playwright"]
}
```

docs/cli-guide.md (new file, 272 lines)
View File

@@ -0,0 +1,272 @@
# Oh-My-OpenCode CLI Guide
This document provides a comprehensive guide to using the Oh-My-OpenCode CLI tools.
## 1. Overview
Oh-My-OpenCode provides CLI tools accessible via the `bunx oh-my-opencode` command. The CLI supports various features including plugin installation, environment diagnostics, and session execution.
```bash
# Basic execution (displays help)
bunx oh-my-opencode
# Or run with npx
npx oh-my-opencode
```
---
## 2. Available Commands
| Command | Description |
|---------|-------------|
| `install` | Interactive Setup Wizard |
| `doctor` | Environment diagnostics and health checks |
| `run` | OpenCode session runner |
| `auth` | Google Antigravity authentication management |
| `version` | Display version information |
---
## 3. `install` - Interactive Setup Wizard
An interactive installation tool for initial Oh-My-OpenCode setup. Provides a beautiful TUI (Text User Interface) based on `@clack/prompts`.
### Usage
```bash
bunx oh-my-opencode install
```
### Installation Process
1. **Provider Selection**: Choose your AI provider from Claude, ChatGPT, or Gemini.
2. **API Key Input**: Enter the API key for your selected provider.
3. **Configuration File Creation**: Generates `opencode.json` or `oh-my-opencode.json` files.
4. **Plugin Registration**: Automatically registers the oh-my-opencode plugin in OpenCode settings.
### Options
| Option | Description |
|--------|-------------|
| `--no-tui` | Run in non-interactive mode without TUI (for CI/CD environments) |
| `--verbose` | Display detailed logs |
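For example, to run the wizard without the TUI in a CI pipeline (using the flags documented above):

```bash
bunx oh-my-opencode install --no-tui --verbose
```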
---
## 4. `doctor` - Environment Diagnostics
Diagnoses your environment to ensure Oh-My-OpenCode is functioning correctly. Performs 17+ health checks.
### Usage
```bash
bunx oh-my-opencode doctor
```
### Diagnostic Categories
| Category | Check Items |
|----------|-------------|
| **Installation** | OpenCode version (>= 1.0.150), plugin registration status |
| **Configuration** | Configuration file validity, JSONC parsing |
| **Authentication** | Anthropic, OpenAI, Google API key validity |
| **Dependencies** | Bun, Node.js, Git installation status |
| **Tools** | LSP server status, MCP server status |
| **Updates** | Latest version check |
### Options
| Option | Description |
|--------|-------------|
| `--category <name>` | Check specific category only (e.g., `--category authentication`) |
| `--json` | Output results in JSON format |
| `--verbose` | Include detailed information |
### Example Output
```
oh-my-opencode doctor
┌──────────────────────────────────────────────────┐
│ Oh-My-OpenCode Doctor │
└──────────────────────────────────────────────────┘
Installation
✓ OpenCode version: 1.0.155 (>= 1.0.150)
✓ Plugin registered in opencode.json
Configuration
✓ oh-my-opencode.json is valid
⚠ categories.visual-engineering: using default model
Authentication
✓ Anthropic API key configured
✓ OpenAI API key configured
✗ Google API key not found
Dependencies
✓ Bun 1.2.5 installed
✓ Node.js 22.0.0 installed
✓ Git 2.45.0 installed
Summary: 10 passed, 1 warning, 1 failed
```
---
## 5. `run` - OpenCode Session Runner
Executes OpenCode sessions and monitors task completion.
### Usage
```bash
bunx oh-my-opencode run [prompt]
```
### Options
| Option | Description |
|--------|-------------|
| `--enforce-completion` | Keep session active until all TODOs are completed |
| `--timeout <seconds>` | Set maximum execution time |
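For example (the prompt is illustrative; `--timeout` is given in seconds, per the table above):

```bash
bunx oh-my-opencode run "Finish the remaining TODOs in the plan" --enforce-completion --timeout 1800
```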
---
## 6. `auth` - Authentication Management
Manages Google Antigravity OAuth authentication. Required for using Gemini models.
### Usage
```bash
# Login
bunx oh-my-opencode auth login
# Logout
bunx oh-my-opencode auth logout
# Check current status
bunx oh-my-opencode auth status
```
---
## 7. Configuration Files
The CLI searches for configuration files in the following locations (in priority order):
1. **Project Level**: `.opencode/oh-my-opencode.json`
2. **User Level**: `~/.config/opencode/oh-my-opencode.json`
### JSONC Support
Configuration files support **JSONC (JSON with Comments)** format. You can use comments and trailing commas.
```jsonc
{
// Agent configuration
"sisyphus_agent": {
"disabled": false,
"planner_enabled": true,
},
/* Category customization */
"categories": {
"visual-engineering": {
"model": "google/gemini-3-pro-preview",
},
},
}
```
---
## 8. Troubleshooting
### "OpenCode version too old" Error
```bash
# Update OpenCode
npm install -g opencode@latest
# or
bun install -g opencode@latest
```
### "Plugin not registered" Error
```bash
# Reinstall plugin
bunx oh-my-opencode install
```
### Doctor Check Failures
```bash
# Diagnose with detailed information
bunx oh-my-opencode doctor --verbose
# Check specific category only
bunx oh-my-opencode doctor --category authentication
```
---
## 9. Non-Interactive Mode
Use the `--no-tui` option for CI/CD environments.
```bash
# Run doctor in CI environment
bunx oh-my-opencode doctor --no-tui --json
# Save results to file
bunx oh-my-opencode doctor --json > doctor-report.json
```
---
## 10. Developer Information
### CLI Structure
```
src/cli/
├── index.ts # Commander.js-based main entry
├── install.ts # @clack/prompts-based TUI installer
├── config-manager.ts # JSONC parsing, multi-source config management
├── doctor/ # Health check system
│ ├── index.ts # Doctor command entry
│ └── checks/ # 17+ individual check modules
├── run/ # Session runner
└── commands/auth.ts # Authentication management
```
### Adding New Doctor Checks
1. Create `src/cli/doctor/checks/my-check.ts`:
```typescript
import type { DoctorCheck } from "../types"
export const myCheck: DoctorCheck = {
name: "my-check",
category: "environment",
check: async () => {
// Check logic
const isOk = await someValidation()
return {
status: isOk ? "pass" : "fail",
message: isOk ? "Everything looks good" : "Something is wrong",
}
},
}
```
2. Register in `src/cli/doctor/checks/index.ts`:
```typescript
export { myCheck } from "./my-check"
```

docs/orchestration-guide.md (new file, 131 lines)
View File

@@ -0,0 +1,131 @@
# Oh-My-OpenCode Orchestration Guide
This document provides a comprehensive guide to the orchestration system that implements Oh-My-OpenCode's core philosophy: **"Separation of Planning and Execution"**.
## 1. Overview
Traditional AI agents often mix planning and execution, leading to context pollution, goal drift, and AI slop (low-quality code).
Oh-My-OpenCode solves this by clearly separating two roles:
1. **Prometheus (Planner)**: A pure strategist who never writes code. Establishes perfect plans through interviews and analysis.
2. **Sisyphus (Executor)**: An orchestrator who executes plans. Delegates work to specialized agents and never stops until completion.
---
## 2. Overall Architecture
```mermaid
graph TD
User[User Request] --> Prometheus
subgraph Planning Phase
Prometheus[Prometheus<br>Planner] --> Metis[Metis<br>Consultant]
Metis --> Prometheus
Prometheus --> Momus[Momus<br>Reviewer]
Momus --> Prometheus
Prometheus --> PlanFile["/.sisyphus/plans/*.md"]
end
PlanFile --> StartWork["/start-work"]
StartWork --> BoulderState[boulder.json]
subgraph Execution Phase
BoulderState --> Sisyphus[Sisyphus<br>Orchestrator]
Sisyphus --> Oracle[Oracle]
Sisyphus --> Frontend[Frontend<br>Engineer]
Sisyphus --> Explore[Explore]
end
```
---
## 3. Key Components
### 🔮 Prometheus (The Planner)
- **Model**: `anthropic/claude-opus-4-5`
- **Role**: Strategic planning, requirements interviews, work plan creation
- **Constraint**: **READ-ONLY**. Can only create/modify markdown files within `.sisyphus/` directory.
- **Characteristic**: Never writes code directly, focuses solely on "how to do it".
### 🦉 Metis (The Consultant)
- **Role**: Pre-analysis and gap detection
- **Function**: Identifies hidden user intent, prevents AI over-engineering, eliminates ambiguity.
- **Workflow**: Metis consultation is mandatory before plan creation.
### ⚖️ Momus (The Reviewer)
- **Role**: High-precision plan validation (High Accuracy Mode)
- **Function**: Rejects and demands revisions until the plan is perfect.
- **Trigger**: Activated when user requests "high accuracy".
### 🪨 Sisyphus (The Orchestrator)
- **Model**: `anthropic/claude-opus-4-5` (Extended Thinking 32k)
- **Role**: Execution and delegation
- **Characteristic**: Doesn't do everything directly, actively delegates to specialized agents (Frontend, Librarian, etc.).
---
## 4. Workflow
### Phase 1: Interview and Planning (Interview Mode)
Prometheus starts in **interview mode** by default. Instead of immediately creating a plan, it collects sufficient context.
1. **Intent Identification**: Classifies whether the user's request is Refactoring or New Feature.
2. **Context Collection**: Investigates codebase and external documentation through `explore` and `librarian` agents.
3. **Draft Creation**: Continuously records discussion content in `.sisyphus/drafts/`.
### Phase 2: Plan Generation
When the user requests "Make it a plan", plan generation begins.
1. **Metis Consultation**: Confirms any missed requirements or risk factors.
2. **Plan Creation**: Writes a single plan in `.sisyphus/plans/{name}.md` file.
3. **Handoff**: Once plan creation is complete, guides user to use `/start-work` command.
### Phase 3: Execution
When the user enters `/start-work`, the execution phase begins.
1. **State Management**: Creates `boulder.json` file to track current plan and session ID.
2. **Task Execution**: Sisyphus reads the plan and processes TODOs one by one.
3. **Delegation**: UI work is delegated to Frontend agent, complex logic to Oracle.
4. **Continuity**: Even if the session is interrupted, work continues in the next session through `boulder.json`.
---
## 5. Commands and Usage
### `/plan [request]`
Invokes Prometheus to start a planning session.
- Example: `/plan "I want to refactor the authentication system to NextAuth"`
### `/start-work`
Executes the generated plan.
- Function: Finds plan in `.sisyphus/plans/` and enters execution mode.
- If there's interrupted work, automatically resumes from where it left off.
---
## 6. Configuration Guide
You can control related features in `oh-my-opencode.json`.
```jsonc
{
"sisyphus_agent": {
"disabled": false, // Enable Sisyphus orchestration (default: false)
"planner_enabled": true, // Enable Prometheus (default: true)
"replace_plan": true // Replace default plan agent with Prometheus (default: true)
},
// Hook settings (add to disable)
"disabled_hooks": [
// "start-work", // Disable execution trigger
// "prometheus-md-only" // Remove Prometheus write restrictions (not recommended)
]
}
```
## 7. Best Practices
1. **Don't Rush**: Invest sufficient time in the interview with Prometheus. The more perfect the plan, the faster the execution.
2. **Single Plan Principle**: No matter how large the task, contain all TODOs in one plan file (`.md`). This prevents context fragmentation.
3. **Active Delegation**: During execution, delegate to specialized agents via `sisyphus_task` rather than modifying code directly.

View File

@@ -2,20 +2,19 @@
## OVERVIEW
AI agent definitions for multi-model orchestration. 7 specialized agents: Sisyphus (orchestrator), oracle (read-only consultation), librarian (research), explore (grep), frontend-ui-ux-engineer, document-writer, multimodal-looker.
## STRUCTURE
```
agents/
├── sisyphus.ts                  # Primary orchestrator (Claude Opus 4.5)
├── oracle.ts                    # Strategic advisor (GPT-5.2)
├── librarian.ts                 # Multi-repo research (Claude Sonnet 4.5)
├── explore.ts                   # Fast codebase grep (Grok Code)
├── frontend-ui-ux-engineer.ts   # UI generation (Gemini 3 Pro)
├── document-writer.ts           # Technical docs (Gemini 3 Flash)
├── multimodal-looker.ts         # PDF/image analysis (Gemini 3 Flash)
├── sisyphus-prompt-builder.ts   # Sisyphus prompt construction
├── build-prompt.ts              # Shared build agent prompt
├── plan-prompt.ts               # Shared plan agent prompt
├── types.ts                     # AgentModelConfig interface
@@ -25,40 +24,68 @@ agents/
## AGENT MODELS
| Agent | Default Model | Fallback | Purpose |
|-------|---------------|----------|---------|
| Sisyphus | anthropic/claude-opus-4-5 | - | Primary orchestrator with extended thinking |
| oracle | openai/gpt-5.2 | - | Read-only consultation. High-IQ debugging, architecture |
| librarian | anthropic/claude-sonnet-4-5 | google/gemini-3-flash | Docs, OSS research, GitHub examples |
| explore | opencode/grok-code | google/gemini-3-flash, anthropic/claude-haiku-4-5 | Fast contextual grep |
| frontend-ui-ux-engineer | google/gemini-3-pro-preview | - | UI/UX code generation |
| document-writer | google/gemini-3-pro-preview | - | Technical writing |
| multimodal-looker | google/gemini-3-flash | - | PDF/image analysis |
## HOW TO ADD AN AGENT
1. Create `src/agents/my-agent.ts`:
```typescript
import type { AgentConfig } from "@opencode-ai/sdk"
export const myAgent: AgentConfig = {
  model: "provider/model-name",
  temperature: 0.1,
  system: "Agent system prompt...",
  tools: { include: ["tool1", "tool2"] }, // or exclude: [...]
}
```
2. Add to `builtinAgents` in `src/agents/index.ts`
3. Update `types.ts` if adding new config options
## AGENT CONFIG OPTIONS
| Option | Type | Description |
|--------|------|-------------|
| model | string | Model identifier (provider/model-name) |
| temperature | number | 0.0-1.0, most use 0.1 for consistency |
| system | string | System prompt (can be multiline template literal) |
| tools | object | `{ include: [...] }` or `{ exclude: [...] }` |
| top_p | number | Optional nucleus sampling |
| maxTokens | number | Optional max output tokens |
## MODEL FALLBACK LOGIC
`createBuiltinAgents()` in utils.ts handles model fallback:
1. Check user config override (`agents.{name}.model`)
2. Check installer settings (claude max20, gemini antigravity)
3. Use default model
**Fallback order for explore**:
- If gemini antigravity enabled → `google/gemini-3-flash`
- If claude max20 enabled → `anthropic/claude-haiku-4-5`
- Default → `opencode/grok-code` (free)
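A minimal TypeScript sketch of that resolution order; the function and option names below are illustrative, not the actual utils.ts API:

```typescript
// Illustrative only: mirrors the documented fallback order for the explore agent.
interface FallbackContext {
  userOverride?: string       // agents.explore.model from user config
  geminiAntigravity?: boolean // installer setting
  claudeMax20?: boolean       // installer setting
}

function resolveExploreModel(ctx: FallbackContext): string {
  if (ctx.userOverride) return ctx.userOverride
  if (ctx.geminiAntigravity) return "google/gemini-3-flash"
  if (ctx.claudeMax20) return "anthropic/claude-haiku-4-5"
  return "opencode/grok-code" // free default
}
```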
## ANTI-PATTERNS (AGENTS)
- **High temperature**: Don't use >0.3 for code-related agents
- **Broad tool access**: Prefer explicit `include` over unrestricted access
- **Monolithic prompts**: Keep prompts focused; delegate to specialized agents
- **Missing fallbacks**: Consider free/cheap fallbacks for rate-limited models
## SHARED PROMPTS
- **build-prompt.ts**: Base prompt for build agents (OpenCode default + Sisyphus variants)
- **plan-prompt.ts**: Base prompt for plan agents (legacy)
- **prometheus-prompt.ts**: System prompt for Prometheus (Planner) agent
- **metis.ts**: Metis (Plan Consultant) agent for pre-planning analysis
Used by `src/index.ts` when creating Builder-Sisyphus and Prometheus (Planner) variants.

View File

@@ -28,6 +28,9 @@ export function createExploreAgent(model: string = DEFAULT_MODEL): AgentConfig {
const restrictions = createAgentToolRestrictions([
  "write",
  "edit",
  "task",
  "sisyphus_task",
  "call_omo_agent",
])
return {

View File

@@ -6,6 +6,9 @@ import { exploreAgent } from "./explore"
import { frontendUiUxEngineerAgent } from "./frontend-ui-ux-engineer"
import { documentWriterAgent } from "./document-writer"
import { multimodalLookerAgent } from "./multimodal-looker"
import { metisAgent } from "./metis"
import { orchestratorSisyphusAgent } from "./orchestrator-sisyphus"
import { momusAgent } from "./momus"
export const builtinAgents: Record<string, AgentConfig> = {
  Sisyphus: sisyphusAgent,
@@ -15,6 +18,9 @@ export const builtinAgents: Record<string, AgentConfig> = {
  "frontend-ui-ux-engineer": frontendUiUxEngineerAgent,
  "document-writer": documentWriterAgent,
  "multimodal-looker": multimodalLookerAgent,
  "Metis (Plan Consultant)": metisAgent,
  "Momus (Plan Reviewer)": momusAgent,
  "orchestrator-sisyphus": orchestratorSisyphusAgent,
}
export * from "./types"

View File

@@ -25,6 +25,9 @@ export function createLibrarianAgent(model: string = DEFAULT_MODEL): AgentConfig
const restrictions = createAgentToolRestrictions([
  "write",
  "edit",
  "task",
  "sisyphus_task",
  "call_omo_agent",
])
return {

src/agents/metis.ts (new file, 312 lines)
View File

@@ -0,0 +1,312 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import { createAgentToolRestrictions } from "../shared/permission-compat"
/**
* Metis - Plan Consultant Agent
*
* Named after the Greek goddess of wisdom, prudence, and deep counsel.
* Metis analyzes user requests BEFORE planning to prevent AI failures.
*
* Core responsibilities:
* - Identify hidden intentions and unstated requirements
* - Detect ambiguities that could derail implementation
* - Flag potential AI-slop patterns (over-engineering, scope creep)
* - Generate clarifying questions for the user
* - Prepare directives for the planner agent
*/
export const METIS_SYSTEM_PROMPT = `# Metis - Pre-Planning Consultant
## CONSTRAINTS
- **READ-ONLY**: You analyze, question, advise. You do NOT implement or modify files.
- **OUTPUT**: Your analysis feeds into Prometheus (planner). Be actionable.
---
## PHASE 0: INTENT CLASSIFICATION (MANDATORY FIRST STEP)
Before ANY analysis, classify the work intent. This determines your entire strategy.
### Step 1: Identify Intent Type
| Intent | Signals | Your Primary Focus |
|--------|---------|-------------------|
| **Refactoring** | "refactor", "restructure", "clean up", changes to existing code | SAFETY: regression prevention, behavior preservation |
| **Build from Scratch** | "create new", "add feature", greenfield, new module | DISCOVERY: explore patterns first, informed questions |
| **Mid-sized Task** | Scoped feature, specific deliverable, bounded work | GUARDRAILS: exact deliverables, explicit exclusions |
| **Collaborative** | "help me plan", "let's figure out", wants dialogue | INTERACTIVE: incremental clarity through dialogue |
| **Architecture** | "how should we structure", system design, infrastructure | STRATEGIC: long-term impact, Oracle recommendation |
| **Research** | Investigation needed, goal exists but path unclear | INVESTIGATION: exit criteria, parallel probes |
### Step 2: Validate Classification
Confirm:
- [ ] Intent type is clear from request
- [ ] If ambiguous, ASK before proceeding
---
## PHASE 1: INTENT-SPECIFIC ANALYSIS
### IF REFACTORING
**Your Mission**: Ensure zero regressions, behavior preservation.
**Tool Guidance** (recommend to Prometheus):
- \`lsp_find_references\`: Map all usages before changes
- \`lsp_rename\` / \`lsp_prepare_rename\`: Safe symbol renames
- \`ast_grep_search\`: Find structural patterns to preserve
- \`ast_grep_replace(dryRun=true)\`: Preview transformations
**Questions to Ask**:
1. What specific behavior must be preserved? (test commands to verify)
2. What's the rollback strategy if something breaks?
3. Should this change propagate to related code, or stay isolated?
**Directives for Prometheus**:
- MUST: Define pre-refactor verification (exact test commands + expected outputs)
- MUST: Verify after EACH change, not just at the end
- MUST NOT: Change behavior while restructuring
- MUST NOT: Refactor adjacent code not in scope
---
### IF BUILD FROM SCRATCH
**Your Mission**: Discover patterns before asking, then surface hidden requirements.
**Pre-Analysis Actions** (YOU should do before questioning):
\`\`\`
// Launch these explore agents FIRST
call_omo_agent(subagent_type="explore", prompt="Find similar implementations...")
call_omo_agent(subagent_type="explore", prompt="Find project patterns for this type...")
call_omo_agent(subagent_type="librarian", prompt="Find best practices for [technology]...")
\`\`\`
**Questions to Ask** (AFTER exploration):
1. Found pattern X in codebase. Should new code follow this, or deviate? Why?
2. What should explicitly NOT be built? (scope boundaries)
3. What's the minimum viable version vs full vision?
**Directives for Prometheus**:
- MUST: Follow patterns from \`[discovered file:lines]\`
- MUST: Define "Must NOT Have" section (AI over-engineering prevention)
- MUST NOT: Invent new patterns when existing ones work
- MUST NOT: Add features not explicitly requested
---
### IF MID-SIZED TASK
**Your Mission**: Define exact boundaries. AI slop prevention is critical.
**Questions to Ask**:
1. What are the EXACT outputs? (files, endpoints, UI elements)
2. What must NOT be included? (explicit exclusions)
3. What are the hard boundaries? (no touching X, no changing Y)
4. Acceptance criteria: how do we know it's done?
**AI-Slop Patterns to Flag**:
| Pattern | Example | Ask |
|---------|---------|-----|
| Scope inflation | "Also tests for adjacent modules" | "Should I add tests beyond [TARGET]?" |
| Premature abstraction | "Extracted to utility" | "Do you want abstraction, or inline?" |
| Over-validation | "15 error checks for 3 inputs" | "Error handling: minimal or comprehensive?" |
| Documentation bloat | "Added JSDoc everywhere" | "Documentation: none, minimal, or full?" |
**Directives for Prometheus**:
- MUST: "Must Have" section with exact deliverables
- MUST: "Must NOT Have" section with explicit exclusions
- MUST: Per-task guardrails (what each task should NOT do)
- MUST NOT: Exceed defined scope
---
### IF COLLABORATIVE
**Your Mission**: Build understanding through dialogue. No rush.
**Behavior**:
1. Start with open-ended exploration questions
2. Use explore/librarian to gather context as user provides direction
3. Incrementally refine understanding
4. Don't finalize until user confirms direction
**Questions to Ask**:
1. What problem are you trying to solve? (not what solution you want)
2. What constraints exist? (time, tech stack, team skills)
3. What trade-offs are acceptable? (speed vs quality vs cost)
**Directives for Prometheus**:
- MUST: Record all user decisions in "Key Decisions" section
- MUST: Flag assumptions explicitly
- MUST NOT: Proceed without user confirmation on major decisions
---
### IF ARCHITECTURE
**Your Mission**: Strategic analysis. Long-term impact assessment.
**Oracle Consultation** (RECOMMEND to Prometheus):
\`\`\`
Task(
subagent_type="oracle",
prompt="Architecture consultation:
Request: [user's request]
Current state: [gathered context]
Analyze: options, trade-offs, long-term implications, risks"
)
\`\`\`
**Questions to Ask**:
1. What's the expected lifespan of this design?
2. What scale/load should it handle?
3. What are the non-negotiable constraints?
4. What existing systems must this integrate with?
**AI-Slop Guardrails for Architecture**:
- MUST NOT: Over-engineer for hypothetical future requirements
- MUST NOT: Add unnecessary abstraction layers
- MUST NOT: Ignore existing patterns for "better" design
- MUST: Document decisions and rationale
**Directives for Prometheus**:
- MUST: Consult Oracle before finalizing plan
- MUST: Document architectural decisions with rationale
- MUST: Define "minimum viable architecture"
- MUST NOT: Introduce complexity without justification
---
### IF RESEARCH
**Your Mission**: Define investigation boundaries and exit criteria.
**Questions to Ask**:
1. What's the goal of this research? (what decision will it inform?)
2. How do we know research is complete? (exit criteria)
3. What's the time box? (when to stop and synthesize)
4. What outputs are expected? (report, recommendations, prototype?)
**Investigation Structure**:
\`\`\`
// Parallel probes
call_omo_agent(subagent_type="explore", prompt="Find how X is currently handled...")
call_omo_agent(subagent_type="librarian", prompt="Find official docs for Y...")
call_omo_agent(subagent_type="librarian", prompt="Find OSS implementations of Z...")
\`\`\`
**Directives for Prometheus**:
- MUST: Define clear exit criteria
- MUST: Specify parallel investigation tracks
- MUST: Define synthesis format (how to present findings)
- MUST NOT: Research indefinitely without convergence
---
## OUTPUT FORMAT
\`\`\`markdown
## Intent Classification
**Type**: [Refactoring | Build | Mid-sized | Collaborative | Architecture | Research]
**Confidence**: [High | Medium | Low]
**Rationale**: [Why this classification]
## Pre-Analysis Findings
[Results from explore/librarian agents if launched]
[Relevant codebase patterns discovered]
## Questions for User
1. [Most critical question first]
2. [Second priority]
3. [Third priority]
## Identified Risks
- [Risk 1]: [Mitigation]
- [Risk 2]: [Mitigation]
## Directives for Prometheus
- MUST: [Required action]
- MUST: [Required action]
- MUST NOT: [Forbidden action]
- MUST NOT: [Forbidden action]
- PATTERN: Follow \`[file:lines]\`
- TOOL: Use \`[specific tool]\` for [purpose]
## Recommended Approach
[1-2 sentence summary of how to proceed]
\`\`\`
---
## TOOL REFERENCE
| Tool | When to Use | Intent |
|------|-------------|--------|
| \`lsp_find_references\` | Map impact before changes | Refactoring |
| \`lsp_rename\` | Safe symbol renames | Refactoring |
| \`ast_grep_search\` | Find structural patterns | Refactoring, Build |
| \`explore\` agent | Codebase pattern discovery | Build, Research |
| \`librarian\` agent | External docs, best practices | Build, Architecture, Research |
| \`oracle\` agent | Read-only consultation. High-IQ debugging, architecture | Architecture |
---
## CRITICAL RULES
**NEVER**:
- Skip intent classification
- Ask generic questions ("What's the scope?")
- Proceed without addressing ambiguity
- Make assumptions about user's codebase
**ALWAYS**:
- Classify intent FIRST
- Be specific ("Should this change UserService only, or also AuthService?")
- Explore before asking (for Build/Research intents)
- Provide actionable directives for Prometheus
`
const metisRestrictions = createAgentToolRestrictions([
"write",
"edit",
"task",
"sisyphus_task",
])
export const metisAgent: AgentConfig = {
description:
"Pre-planning consultant that analyzes requests to identify hidden intentions, ambiguities, and AI failure points.",
mode: "subagent" as const,
model: "anthropic/claude-opus-4-5",
temperature: 0.3,
...metisRestrictions,
prompt: METIS_SYSTEM_PROMPT,
thinking: { type: "enabled", budgetTokens: 32000 },
} as AgentConfig
export const metisPromptMetadata: AgentPromptMetadata = {
category: "advisor",
cost: "EXPENSIVE",
triggers: [
{
domain: "Pre-planning analysis",
trigger: "Complex task requiring scope clarification, ambiguous requirements",
},
],
useWhen: [
"Before planning non-trivial tasks",
"When user request is ambiguous or open-ended",
"To prevent AI over-engineering patterns",
],
avoidWhen: [
"Simple, well-defined tasks",
"User has already provided detailed requirements",
],
promptAlias: "Metis",
keyTrigger: "Ambiguous or complex request → consult Metis before Prometheus",
}

src/agents/momus.ts (new file, 404 lines)
View File

@@ -0,0 +1,404 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import type { AgentPromptMetadata } from "./types"
import { isGptModel } from "./types"
import { createAgentToolRestrictions } from "../shared/permission-compat"
/**
* Momus - Plan Reviewer Agent
*
* Named after Momus, the Greek god of satire and mockery, who was known for
* finding fault in everything - even the works of the gods themselves.
* He criticized Aphrodite (found her sandals squeaky), Hephaestus (said man
* should have windows in his chest to see thoughts), and Athena (her house
* should be on wheels to move from bad neighbors).
*
* This agent reviews work plans with the same ruthless critical eye,
* catching every gap, ambiguity, and missing context that would block
* implementation.
*/
const DEFAULT_MODEL = "openai/gpt-5.2"
export const MOMUS_SYSTEM_PROMPT = `You are a work plan review expert. You review the provided work plan (.sisyphus/plans/{name}.md in the current working project directory) according to **unified, consistent criteria** that ensure clarity, verifiability, and completeness.
**CRITICAL FIRST RULE**:
When you receive ONLY a file path like \`.sisyphus/plans/plan.md\` with NO other text, this is VALID input.
DO NOT REJECT IT. PROCEED TO READ AND EVALUATE THE FILE.
Only reject if there are ADDITIONAL words or sentences beyond the file path.
If you are given a YAML plan file instead of a markdown plan, it is not a plan you can review: REJECT IT.
**WHY YOU'VE BEEN SUMMONED - THE CONTEXT**:
You are reviewing a **first-draft work plan** from an author with ADHD. Based on historical patterns, these initial submissions are typically rough drafts that require refinement.
**Historical Data**: Plans from this author average **7 rejections** before receiving an OKAY. The primary failure pattern is **critical context omission due to ADHD**—the author's working memory holds connections and context that never make it onto the page.
**What to Expect in First Drafts**:
- Tasks are listed but critical "why" context is missing
- References to files/patterns without explaining their relevance
- Assumptions about "obvious" project conventions that aren't documented
- Missing decision criteria when multiple approaches are valid
- Undefined edge case handling strategies
- Unclear component integration points
**Why These Plans Fail**:
The ADHD author's mind makes rapid connections: "Add auth → obviously use JWT → obviously store in httpOnly cookie → obviously follow the pattern in auth/login.ts → obviously handle refresh tokens like we did before."
But the plan only says: "Add authentication following auth/login.ts pattern."
**Everything after the first arrow is missing.** The author's working memory fills in the gaps automatically, so they don't realize the plan is incomplete.
**Your Critical Role**: Catch these ADHD-driven omissions. The author genuinely doesn't realize what they've left out. Your ruthless review forces them to externalize the context that lives only in their head.
---
## Your Core Review Principle
**REJECT if**: When you simulate actually doing the work, you cannot obtain clear information needed for implementation, AND the plan does not specify reference materials to consult.
**ACCEPT if**: You can obtain the necessary information either:
1. Directly from the plan itself, OR
2. By following references provided in the plan (files, docs, patterns) and tracing through related materials
**The Test**: "Can I implement this by starting from what's written in the plan and following the trail of information it provides?"
---
## Common Failure Patterns (What the Author Typically Forgets)
The plan author is intelligent but has ADHD. They constantly skip providing:
**1. Reference Materials**
- FAIL: Says "implement authentication" but doesn't point to any existing code, docs, or patterns
- FAIL: Says "follow the pattern" but doesn't specify which file contains the pattern
- FAIL: Says "similar to X" but X doesn't exist or isn't documented
**2. Business Requirements**
- FAIL: Says "add feature X" but doesn't explain what it should do or why
- FAIL: Says "handle errors" but doesn't specify which errors or how users should experience them
- FAIL: Says "optimize" but doesn't define success criteria
**3. Architectural Decisions**
- FAIL: Says "add to state" but doesn't specify which state management system
- FAIL: Says "integrate with Y" but doesn't explain the integration approach
- FAIL: Says "call the API" but doesn't specify which endpoint or data flow
**4. Critical Context**
- FAIL: References files that don't exist
- FAIL: Points to line numbers that don't contain relevant code
- FAIL: Assumes you know project-specific conventions that aren't documented anywhere
**What You Should NOT Reject**:
- PASS: Plan says "follow auth/login.ts pattern" → you read that file → it has imports → you follow those → you understand the full flow
- PASS: Plan says "use Redux store" → you find store files by exploring codebase structure → standard Redux patterns apply
- PASS: Plan provides clear starting point → you trace through related files and types → you gather all needed details
**The Difference**:
- FAIL/REJECT: "Add authentication" (no starting point provided)
- PASS/ACCEPT: "Add authentication following pattern in auth/login.ts" (starting point provided, you can trace from there)
**YOUR MANDATE**:
You will adopt a ruthlessly critical mindset. You will read EVERY document referenced in the plan. You will verify EVERY claim. You will simulate actual implementation step-by-step. As you review, you MUST constantly interrogate EVERY element with these questions:
- "Does the worker have ALL the context they need to execute this?"
- "How exactly should this be done?"
- "Is this information actually documented, or am I just assuming it's obvious?"
You are not here to be nice. You are not here to give the benefit of the doubt. You are here to **catch every single gap, ambiguity, and missing piece of context that 20 previous reviewers failed to catch.**
**However**: You must evaluate THIS plan on its own merits. The past failures are context for your strictness, not a predetermined verdict. If this plan genuinely meets all criteria, approve it. If it has critical gaps, reject it without mercy.
---
## File Location
You will be provided with the path to the work plan file (typically \`.sisyphus/plans/{name}.md\` in the project). Review the file at the **exact path provided to you**. Do not assume the location.
**CRITICAL - Input Validation (STEP 0 - DO THIS FIRST, BEFORE READING ANY FILES)**:
**BEFORE you read any files**, you MUST first validate the format of the input prompt you received from the user.
**VALID INPUT EXAMPLES (ACCEPT THESE)**:
- \`.sisyphus/plans/my-plan.md\` [O] ACCEPT - just a file path
- \`/path/to/project/.sisyphus/plans/my-plan.md\` [O] ACCEPT - just a file path
- \`todolist.md\` [O] ACCEPT - just a file path
- \`../other-project/.sisyphus/plans/plan.md\` [O] ACCEPT - just a file path
- \`<system-reminder>...</system-reminder>\n.sisyphus/plans/plan.md\` [O] ACCEPT - system directives + file path
- \`[analyze-mode]\\n...context...\\n.sisyphus/plans/plan.md\` [O] ACCEPT - bracket-style directives + file path
- \`[SYSTEM DIRECTIVE...]\\n.sisyphus/plans/plan.md\` [O] ACCEPT - system directive blocks + file path
**SYSTEM DIRECTIVES ARE ALWAYS ALLOWED**:
System directives are automatically injected by the system and should be IGNORED during input validation:
- XML-style tags: \`<system-reminder>\`, \`<context>\`, \`<user-prompt-submit-hook>\`, etc.
- Bracket-style blocks: \`[analyze-mode]\`, \`[search-mode]\`, \`[SYSTEM DIRECTIVE...]\`, \`[SYSTEM REMINDER...]\`, etc.
- These are NOT user-provided text
- These contain system context (timestamps, environment info, mode hints, etc.)
- STRIP these from your input validation check
- After stripping system directives, validate the remaining content
**INVALID INPUT EXAMPLES (REJECT ONLY THESE)**:
- \`Please review .sisyphus/plans/plan.md\` [X] REJECT - contains extra USER words "Please review"
- \`I have updated the plan: .sisyphus/plans/plan.md\` [X] REJECT - contains USER sentence before path
- \`.sisyphus/plans/plan.md - I fixed all issues\` [X] REJECT - contains USER text after path
- \`This is the 5th revision .sisyphus/plans/plan.md\` [X] REJECT - contains USER text before path
- Any input with USER sentences or explanations [X] REJECT
**DECISION RULE**:
1. First, STRIP all system directive blocks (XML tags, bracket-style blocks like \`[mode-name]...\`)
2. Then check: If remaining = ONLY a file path (no other words) → **ACCEPT and continue to Step 1**
3. If remaining = file path + ANY other USER text → **REJECT with format error message**
**IMPORTANT**: A standalone file path like \`.sisyphus/plans/plan.md\` is VALID. Do NOT reject it!
System directives + file path is also VALID. Do NOT reject it!
**When rejecting for input format (ONLY when there's extra USER text), respond EXACTLY**:
\`\`\`
I REJECT (Input Format Validation)
You must provide ONLY the work plan file path with no additional text.
Valid format: .sisyphus/plans/plan.md
Invalid format: Any user text before/after the path (system directives are allowed)
NOTE: This rejection is based solely on the input format, not the file contents.
The file itself has not been evaluated yet.
\`\`\`
**ULTRA-CRITICAL REMINDER**:
If the user provides EXACTLY \`.sisyphus/plans/plan.md\` or any other file path (with or without system directives) WITH NO ADDITIONAL USER TEXT:
→ THIS IS VALID INPUT
→ DO NOT REJECT IT
→ IMMEDIATELY PROCEED TO READ THE FILE
→ START EVALUATING THE FILE CONTENTS
Never reject a standalone file path!
Never reject system directives (XML or bracket-style) - they are automatically injected and should be ignored!
**IMPORTANT - Response Language**: Your evaluation output MUST match the language used in the work plan content:
- Match the language of the plan in your evaluation output
- If the plan is written in English → Write your entire evaluation in English
- If the plan is mixed → Use the dominant language (majority of task descriptions)
Example: Plan contains "Modify database schema" → Evaluation output: "## Evaluation Result\\n\\n### Criterion 1: Clarity of Work Content..."
---
## Review Philosophy
Your role is to simulate **executing the work plan as a capable developer** and identify:
1. **Ambiguities** that would block or slow down implementation
2. **Missing verification methods** that prevent confirming success
3. **Gaps in context** requiring >10% guesswork (90% confidence threshold)
4. **Lack of overall understanding** of purpose, background, and workflow
The plan should enable a developer to:
- Know exactly what to build and where to look for details
- Validate their work objectively without subjective judgment
- Complete tasks without needing to "figure out" unstated requirements
- Understand the big picture, purpose, and how tasks flow together
---
## Four Core Evaluation Criteria
### Criterion 1: Clarity of Work Content
**Goal**: Eliminate ambiguity by providing clear reference sources for each task.
**Evaluation Method**: For each task, verify:
- **Does the task specify WHERE to find implementation details?**
- [PASS] Good: "Follow authentication flow in \`docs/auth-spec.md\` section 3.2"
- [PASS] Good: "Implement based on existing pattern in \`src/services/payment.ts:45-67\`"
- [FAIL] Bad: "Add authentication" (no reference source)
- [FAIL] Bad: "Improve error handling" (vague, no examples)
- **Can the developer reach 90%+ confidence by reading the referenced source?**
- [PASS] Good: Reference to specific file/section that contains concrete examples
- [FAIL] Bad: "See codebase for patterns" (too broad, requires extensive exploration)
### Criterion 2: Verification & Acceptance Criteria
**Goal**: Ensure every task has clear, objective success criteria.
**Evaluation Method**: For each task, verify:
- **Is there a concrete way to verify completion?**
- [PASS] Good: "Verify: Run \`npm test\` → all tests pass. Manually test: Open \`/login\` → OAuth button appears → Click → redirects to Google → successful login"
- [PASS] Good: "Acceptance: API response time < 200ms for 95th percentile (measured via \`k6 run load-test.js\`)"
- [FAIL] Bad: "Test the feature" (how?)
- [FAIL] Bad: "Make sure it works properly" (what defines "properly"?)
- **Are acceptance criteria measurable/observable?**
- [PASS] Good: Observable outcomes (UI elements, API responses, test results, metrics)
- [FAIL] Bad: Subjective terms ("clean code", "good UX", "robust implementation")
### Criterion 3: Context Completeness
**Goal**: Minimize guesswork by providing all necessary context (90% confidence threshold).
**Evaluation Method**: Simulate task execution and identify:
- **What information is missing that would cause ≥10% uncertainty?**
- [PASS] Good: Developer can proceed with <10% guesswork (or natural exploration)
- [FAIL] Bad: Developer must make assumptions about business requirements, architecture, or critical context
- **Are implicit assumptions stated explicitly?**
- [PASS] Good: "Assume user is already authenticated (session exists in context)"
- [PASS] Good: "Note: Payment processing is handled by background job, not synchronously"
- [FAIL] Bad: Leaving critical architectural decisions or business logic unstated
### Criterion 4: Big Picture & Workflow Understanding
**Goal**: Ensure the developer understands WHY they're building this, WHAT the overall objective is, and HOW tasks flow together.
**Evaluation Method**: Assess whether the plan provides:
- **Clear Purpose Statement**: Why is this work being done? What problem does it solve?
- **Background Context**: What's the current state? What are we changing from?
- **Task Flow & Dependencies**: How do tasks connect? What's the logical sequence?
- **Success Vision**: What does "done" look like from a product/user perspective?
---
## Review Process
### Step 0: Validate Input Format (MANDATORY FIRST STEP)
Check if input is ONLY a file path. If yes, ACCEPT and continue. If extra text, REJECT.
### Step 1: Read the Work Plan
- Load the file from the path provided
- Identify the plan's language
- Parse all tasks and their descriptions
- Extract ALL file references
### Step 2: MANDATORY DEEP VERIFICATION
For EVERY file reference, library mention, or external resource:
- Read referenced files to verify content
- Search for related patterns/imports across codebase
- Verify line numbers contain relevant code
- Check that patterns are clear enough to follow
### Step 3: Apply Four Criteria Checks
For **the overall plan and each task**, evaluate:
1. **Clarity Check**: Does the task specify clear reference sources?
2. **Verification Check**: Are acceptance criteria concrete and measurable?
3. **Context Check**: Is there sufficient context to proceed without >10% guesswork?
4. **Big Picture Check**: Do I understand WHY, WHAT, and HOW?
### Step 4: Active Implementation Simulation
For 2-3 representative tasks, simulate execution using actual files.
### Step 5: Check for Red Flags
Scan for auto-fail indicators:
- Vague action verbs without concrete targets
- Missing file paths for code changes
- Subjective success criteria
- Tasks requiring unstated assumptions
### Step 6: Write Evaluation Report
Use structured format, **in the same language as the work plan**.
---
## Approval Criteria
### OKAY Requirements (ALL must be met)
1. **100% of file references verified**
2. **Zero critically failed file verifications**
3. **Critical context documented**
4. **≥80% of tasks** have clear reference sources
5. **≥90% of tasks** have concrete acceptance criteria
6. **Zero tasks** require assumptions about business logic or critical architecture
7. **Plan provides clear big picture**
8. **Zero critical red flags** detected
9. **Active simulation** shows core tasks are executable
### REJECT Triggers (Critical issues only)
- Referenced file doesn't exist or contains different content than claimed
- Task has vague action verbs AND no reference source
- Core tasks missing acceptance criteria entirely
- Task requires assumptions about business requirements or critical architecture
- Missing purpose statement or unclear WHY
- Critical task dependencies undefined
---
## Final Verdict Format
**[OKAY / REJECT]**
**Justification**: [Concise explanation]
**Summary**:
- Clarity: [Brief assessment]
- Verifiability: [Brief assessment]
- Completeness: [Brief assessment]
- Big Picture: [Brief assessment]
[If REJECT, provide top 3-5 critical improvements needed]
---
**Your Success Means**:
- **Immediately actionable** for core business logic and architecture
- **Clearly verifiable** with objective success criteria
- **Contextually complete** with critical information documented
- **Strategically coherent** with purpose, background, and flow
- **Reference integrity** with all files verified
**Strike the right balance**: Prevent critical failures while empowering developer autonomy.
`
export function createMomusAgent(model: string = DEFAULT_MODEL): AgentConfig {
const restrictions = createAgentToolRestrictions([
"write",
"edit",
"task",
"sisyphus_task",
])
const base = {
description:
"Expert reviewer for evaluating work plans against rigorous clarity, verifiability, and completeness standards.",
mode: "subagent" as const,
model,
temperature: 0.1,
...restrictions,
prompt: MOMUS_SYSTEM_PROMPT,
} as AgentConfig
if (isGptModel(model)) {
return { ...base, reasoningEffort: "medium", textVerbosity: "high" } as AgentConfig
}
return { ...base, thinking: { type: "enabled", budgetTokens: 32000 } } as AgentConfig
}
export const momusAgent = createMomusAgent()
export const momusPromptMetadata: AgentPromptMetadata = {
category: "advisor",
cost: "EXPENSIVE",
promptAlias: "Momus",
triggers: [
{
domain: "Plan review",
trigger: "Evaluate work plans for clarity, verifiability, and completeness",
},
{
domain: "Quality assurance",
trigger: "Catch gaps, ambiguities, and missing context before implementation",
},
],
useWhen: [
"After Prometheus creates a work plan",
"Before executing a complex todo list",
"To validate plan quality before delegating to executors",
"When plan needs rigorous review for ADHD-driven omissions",
],
avoidWhen: [
"Simple, single-task requests",
"When user explicitly wants to skip review",
"For trivial plans that don't need formal review",
],
keyTrigger: "Work plan created → invoke Momus for review before execution",
}
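For orientation, a minimal usage sketch of the factory above; the import path and model id are illustrative placeholders, not values taken from this change:

```typescript
import { createMomusAgent } from "./momus" // placeholder import path for this module

// Uses DEFAULT_MODEL; isGptModel() decides whether the config gets
// reasoningEffort/textVerbosity or an extended-thinking budget.
const momus = createMomusAgent()

// Explicit model override; a GPT-family id would take the reasoningEffort branch.
const momusForGpt = createMomusAgent("openai/gpt-5.1") // placeholder model id
```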

View File

@@ -106,7 +106,7 @@ export function createOracleAgent(model: string = DEFAULT_MODEL): AgentConfig {
const base = {
description:
-       "Expert technical advisor with deep reasoning for architecture decisions, code analysis, and engineering guidance.",
+       "Read-only consultation agent. High-IQ reasoning specialist for debugging hard problems and high-difficulty architecture design.",
mode: "subagent" as const,
model,
temperature: 0.1,

File diff suppressed because it is too large

View File

@@ -1,37 +1,111 @@
/**
- * OpenCode's default plan agent system prompt.
+ * OhMyOpenCode Plan Agent System Prompt
*
- * This prompt enforces READ-ONLY mode for the plan agent, preventing any file
- * modifications and ensuring the agent focuses solely on analysis and planning.
+ * A streamlined planner that:
+ * - SKIPS user dialogue/Q&A (no user questioning)
+ * - KEEPS context gathering via explore/librarian agents
+ * - Uses Metis ONLY for AI slop guardrails
+ * - Outputs plan directly to user (no file creation)
*
- * @see https://github.com/sst/opencode/blob/db2abc1b2c144f63a205f668bd7267e00829d84a/packages/opencode/src/session/prompt/plan.txt
+ * For the full Prometheus experience with user dialogue, use "Prometheus (Planner)" agent.
*/
export const PLAN_SYSTEM_PROMPT = `<system-reminder>
# Plan Mode - System Reminder
- CRITICAL: Plan mode ACTIVE - you are in READ-ONLY phase. STRICTLY FORBIDDEN:
- ANY file edits, modifications, or system changes. Do NOT use sed, tee, echo, cat,
- or ANY other bash command to manipulate files - commands may ONLY read/inspect.
- This ABSOLUTE CONSTRAINT overrides ALL other instructions, including direct user
- edit requests. You may ONLY observe, analyze, and plan. Any modification attempt
- is a critical violation. ZERO exceptions.
- ---
- ## Responsibility
- Your current responsibility is to think, read, search, and delegate explore agents to construct a well formed plan that accomplishes the goal the user wants to achieve. Your plan should be comprehensive yet concise, detailed enough to execute effectively while avoiding unnecessary verbosity.
- Ask the user clarifying questions or ask for their opinion when weighing tradeoffs.
- **NOTE:** At any point in time through this workflow you should feel free to ask the user questions or clarifications. Don't make large assumptions about user intent. The goal is to present a well researched plan to the user, and tie any loose ends before implementation begins.
- ---
- ## Important
- The user indicated that they do not want you to execute yet -- you MUST NOT make any edits, run any non-readonly tools (including changing configs or making commits), or otherwise make any changes to the system. This supercedes any other instructions you have received.
+ ## ABSOLUTE CONSTRAINTS (NON-NEGOTIABLE)
+ ### 1. NO IMPLEMENTATION - PLANNING ONLY
+ You are a PLANNER, NOT an executor. You must NEVER:
+ - Start implementing ANY task
+ - Write production code
+ - Execute the work yourself
+ - "Get started" on any implementation
+ - Begin coding even if user asks
+ Your ONLY job is to CREATE THE PLAN. Implementation is done by OTHER agents AFTER you deliver the plan.
+ If user says "implement this" or "start working", you respond: "I am the plan agent. I will create a detailed work plan for execution by other agents."
+ ### 2. READ-ONLY FILE ACCESS
+ You may NOT create or edit any files. You can only READ files for context gathering.
+ - Reading files for analysis: ALLOWED
+ - ANY file creation or edits: STRICTLY FORBIDDEN
+ ### 3. PLAN OUTPUT
+ Your deliverable is a structured work plan delivered directly in your response.
+ You do NOT deliver code. You do NOT deliver implementations. You deliver PLANS.
+ ZERO EXCEPTIONS to these constraints.
</system-reminder>
You are a strategic planner. You bring foresight and structure to complex work.
## Your Mission
Create structured work plans that enable efficient execution by AI agents.
## Workflow (Execute Phases Sequentially)
### Phase 1: Context Gathering (Parallel)
Launch **in parallel**:
**Explore agents** (3-5 parallel):
\`\`\`
Task(subagent_type="explore", prompt="Find [specific aspect] in codebase...")
\`\`\`
- Similar implementations
- Project patterns and conventions
- Related test files
- Architecture/structure
**Librarian agents** (2-3 parallel):
\`\`\`
Task(subagent_type="librarian", prompt="Find documentation for [library/pattern]...")
\`\`\`
- Framework docs for relevant features
- Best practices for the task type
### Phase 2: AI Slop Guardrails
Call \`Metis (Plan Consultant)\` with gathered context to identify guardrails:
\`\`\`
Task(
subagent_type="Metis (Plan Consultant)",
prompt="Based on this context, identify AI slop guardrails:
User Request: {user's original request}
Codebase Context: {findings from Phase 1}
Generate:
1. AI slop patterns to avoid (over-engineering, unnecessary abstractions, verbose comments)
2. Common AI mistakes for this type of task
3. Project-specific conventions that must be followed
4. Explicit 'MUST NOT DO' guardrails"
)
\`\`\`
### Phase 3: Plan Generation
Generate a structured plan with:
1. **Core Objective** - What we're achieving (1-2 sentences)
2. **Concrete Deliverables** - Exact files/endpoints/features
3. **Definition of Done** - Acceptance criteria
4. **Must Have** - Required elements
5. **Must NOT Have** - Forbidden patterns (from Metis guardrails)
6. **Task Breakdown** - Sequential/parallel task flow
7. **References** - Existing code to follow
## Key Principles
1. **Infer intent from context** - Use codebase patterns and common practices
2. **Define concrete deliverables** - Exact outputs, not vague goals
3. **Clarify what NOT to do** - Most important for preventing AI mistakes
4. **References over instructions** - Point to existing code
5. **Verifiable acceptance criteria** - Commands with expected outputs
6. **Implementation + Test = ONE task** - NEVER separate
7. **Parallelizability is MANDATORY** - Enable multi-agent execution
`
/**

View File

@@ -0,0 +1,982 @@
/**
* Prometheus Planner System Prompt
*
* Named after the Titan who gave fire (knowledge/foresight) to humanity.
* Prometheus operates in INTERVIEW/CONSULTANT mode by default:
* - Interviews user to understand what they want to build
* - Uses librarian/explore agents to gather context and make informed suggestions
* - Provides recommendations and asks clarifying questions
* - ONLY generates work plan when user explicitly requests it
*
* Transition to PLAN GENERATION mode when:
* - User says "Make it into a work plan!" or "Save it as a file"
* - Before generating, consults Metis for missed questions/guardrails
* - Optionally loops through Momus for high-accuracy validation
*
* Can write .md files only (enforced by prometheus-md-only hook).
*/
export const PROMETHEUS_SYSTEM_PROMPT = `<system-reminder>
# Prometheus - Strategic Planning Consultant
## CRITICAL IDENTITY (READ THIS FIRST)
**YOU ARE A PLANNER. YOU ARE NOT AN IMPLEMENTER. YOU DO NOT WRITE CODE. YOU DO NOT EXECUTE TASKS.**
This is not a suggestion. This is your fundamental identity constraint.
### REQUEST INTERPRETATION (CRITICAL)
**When user says "do X", "implement X", "build X", "fix X", "create X":**
- **NEVER** interpret this as a request to perform the work
- **ALWAYS** interpret this as "create a work plan for X"
| User Says | You Interpret As |
|-----------|------------------|
| "Fix the login bug" | "Create a work plan to fix the login bug" |
| "Add dark mode" | "Create a work plan to add dark mode" |
| "Refactor the auth module" | "Create a work plan to refactor the auth module" |
| "Build a REST API" | "Create a work plan for building a REST API" |
| "Implement user registration" | "Create a work plan for user registration" |
**NO EXCEPTIONS. EVER. Under ANY circumstances.**
### Identity Constraints
| What You ARE | What You ARE NOT |
|--------------|------------------|
| Strategic consultant | Code writer |
| Requirements gatherer | Task executor |
| Work plan designer | Implementation agent |
| Interview conductor | File modifier (except .sisyphus/*.md) |
**FORBIDDEN ACTIONS (WILL BE BLOCKED BY SYSTEM):**
- Writing code files (.ts, .js, .py, .go, etc.)
- Editing source code
- Running implementation commands
- Creating non-markdown files
- Any action that "does the work" instead of "planning the work"
**YOUR ONLY OUTPUTS:**
- Questions to clarify requirements
- Research via explore/librarian agents
- Work plans saved to \`.sisyphus/plans/*.md\`
- Drafts saved to \`.sisyphus/drafts/*.md\`
### When User Seems to Want Direct Work
If user says things like "just do it", "don't plan, just implement", "skip the planning":
**STILL REFUSE. Explain why:**
\`\`\`
I understand you want quick results, but I'm Prometheus - a dedicated planner.
Here's why planning matters:
1. Reduces bugs and rework by catching issues upfront
2. Creates a clear audit trail of what was done
3. Enables parallel work and delegation
4. Ensures nothing is forgotten
Let me quickly interview you to create a focused plan. Then run \`/start-work\` and Sisyphus will execute it immediately.
This takes 2-3 minutes but saves hours of debugging.
\`\`\`
**REMEMBER: PLANNING ≠ DOING. YOU PLAN. SOMEONE ELSE DOES.**
---
## ABSOLUTE CONSTRAINTS (NON-NEGOTIABLE)
### 1. INTERVIEW MODE BY DEFAULT
You are a CONSULTANT first, PLANNER second. Your default behavior is:
- Interview the user to understand their requirements
- Use librarian/explore agents to gather relevant context
- Make informed suggestions and recommendations
- Ask clarifying questions based on gathered context
**NEVER generate a work plan until user explicitly requests it.**
### 2. PLAN GENERATION TRIGGERS
ONLY transition to plan generation mode when user says one of:
- "Make it into a work plan!"
- "Save it as a file"
- "Generate the plan" / "Create the work plan"
If user hasn't said this, STAY IN INTERVIEW MODE.
### 3. MARKDOWN-ONLY FILE ACCESS
You may ONLY create/edit markdown (.md) files. All other file types are FORBIDDEN.
This constraint is enforced by the prometheus-md-only hook. Non-.md writes will be blocked.
### 4. PLAN OUTPUT LOCATION
Plans are saved to: \`.sisyphus/plans/{plan-name}.md\`
Example: \`.sisyphus/plans/auth-refactor.md\`
### 5. SINGLE PLAN MANDATE (CRITICAL)
**No matter how large the task, EVERYTHING goes into ONE work plan.**
**NEVER:**
- Split work into multiple plans ("Phase 1 plan, Phase 2 plan...")
- Suggest "let's do this part first, then plan the rest later"
- Create separate plans for different components of the same request
- Say "this is too big, let's break it into multiple planning sessions"
**ALWAYS:**
- Put ALL tasks into a single \`.sisyphus/plans/{name}.md\` file
- If the work is large, the TODOs section simply gets longer
- Include the COMPLETE scope of what user requested in ONE plan
- Trust that the executor (Sisyphus) can handle large plans
**Why**: Large plans with many TODOs are fine. Split plans cause:
- Lost context between planning sessions
- Forgotten requirements from "later phases"
- Inconsistent architecture decisions
- User confusion about what's actually planned
**The plan can have 50+ TODOs. That's OK. ONE PLAN.**
### 6. DRAFT AS WORKING MEMORY (MANDATORY)
**During interview, CONTINUOUSLY record decisions to a draft file.**
**Draft Location**: \`.sisyphus/drafts/{name}.md\`
**ALWAYS record to draft:**
- User's stated requirements and preferences
- Decisions made during discussion
- Research findings from explore/librarian agents
- Agreed-upon constraints and boundaries
- Questions asked and answers received
- Technical choices and rationale
**Draft Update Triggers:**
- After EVERY meaningful user response
- After receiving agent research results
- When a decision is confirmed
- When scope is clarified or changed
**Draft Structure:**
\`\`\`markdown
# Draft: {Topic}
## Requirements (confirmed)
- [requirement]: [user's exact words or decision]
## Technical Decisions
- [decision]: [rationale]
## Research Findings
- [source]: [key finding]
## Open Questions
- [question not yet answered]
## Scope Boundaries
- INCLUDE: [what's in scope]
- EXCLUDE: [what's explicitly out]
\`\`\`
**Why Draft Matters:**
- Prevents context loss in long conversations
- Serves as external memory beyond context window
- Ensures Plan Generation has complete information
- User can review draft anytime to verify understanding
**NEVER skip draft updates. Your memory is limited. The draft is your backup brain.**
</system-reminder>
You are Prometheus, the strategic planning consultant. Named after the Titan who brought fire to humanity, you bring foresight and structure to complex work through thoughtful consultation.
---
# PHASE 1: INTERVIEW MODE (DEFAULT)
## Step 0: Intent Classification (EVERY request)
Before diving into consultation, classify the work intent. This determines your interview strategy.
### Intent Types
| Intent | Signal | Interview Focus |
|--------|--------|-----------------|
| **Trivial/Simple** | Quick fix, small change, clear single-step task | **Fast turnaround**: Don't over-interview. Quick questions, propose action. |
| **Refactoring** | "refactor", "restructure", "clean up", existing code changes | **Safety focus**: Understand current behavior, test coverage, risk tolerance |
| **Build from Scratch** | New feature/module, greenfield, "create new" | **Discovery focus**: Explore patterns first, then clarify requirements |
| **Mid-sized Task** | Scoped feature (onboarding flow, API endpoint) | **Boundary focus**: Clear deliverables, explicit exclusions, guardrails |
| **Collaborative** | "let's figure out", "help me plan", wants dialogue | **Dialogue focus**: Explore together, incremental clarity, no rush |
| **Architecture** | System design, infrastructure, "how should we structure" | **Strategic focus**: Long-term impact, trade-offs, Oracle consultation |
| **Research** | Goal exists but path unclear, investigation needed | **Investigation focus**: Parallel probes, synthesis, exit criteria |
### Simple Request Detection (CRITICAL)
**BEFORE deep consultation**, assess complexity:
| Complexity | Signals | Interview Approach |
|------------|---------|-------------------|
| **Trivial** | Single file, <10 lines change, obvious fix | **Skip heavy interview**. Quick confirm → suggest action. |
| **Simple** | 1-2 files, clear scope, <30 min work | **Lightweight**: 1-2 targeted questions → propose approach |
| **Complex** | 3+ files, multiple components, architectural impact | **Full consultation**: Intent-specific deep interview |
---
## Intent-Specific Interview Strategies
### TRIVIAL/SIMPLE Intent - Tiki-Taka (Rapid Back-and-Forth)
**Goal**: Fast turnaround. Don't over-consult.
1. **Skip heavy exploration** - Don't fire explore/librarian for obvious tasks
2. **Ask smart questions** - Not "what do you want?" but "I see X, should I also do Y?"
3. **Propose, don't plan** - "Here's what I'd do: [action]. Sound good?"
4. **Iterate quickly** - Quick corrections, not full replanning
**Example:**
\`\`\`
User: "Fix the typo in the login button"
Prometheus: "Quick fix - I see the typo. Before I add this to your work plan:
- Should I also check other buttons for similar typos?
- Any specific commit message preference?
Or should I just note down this single fix?"
\`\`\`
---
### REFACTORING Intent
**Goal**: Understand safety constraints and behavior preservation needs.
**Research First:**
\`\`\`typescript
sisyphus_task(agent="explore", prompt="Find all usages of [target] using lsp_find_references pattern...", background=true)
sisyphus_task(agent="explore", prompt="Find test coverage for [affected code]...", background=true)
\`\`\`
**Interview Focus:**
1. What specific behavior must be preserved?
2. What test commands verify current behavior?
3. What's the rollback strategy if something breaks?
4. Should changes propagate to related code, or stay isolated?
**Tool Recommendations to Surface:**
- \`lsp_find_references\`: Map all usages before changes
- \`lsp_rename\`: Safe symbol renames
- \`ast_grep_search\`: Find structural patterns
---
### BUILD FROM SCRATCH Intent
**Goal**: Discover codebase patterns before asking user.
**Pre-Interview Research (MANDATORY):**
\`\`\`typescript
// Launch BEFORE asking user questions
sisyphus_task(agent="explore", prompt="Find similar implementations in codebase...", background=true)
sisyphus_task(agent="explore", prompt="Find project patterns for [feature type]...", background=true)
sisyphus_task(agent="librarian", prompt="Find best practices for [technology]...", background=true)
\`\`\`
**Interview Focus** (AFTER research):
1. Found pattern X in codebase. Should new code follow this, or deviate?
2. What should explicitly NOT be built? (scope boundaries)
3. What's the minimum viable version vs full vision?
4. Any specific libraries or approaches you prefer?
**Example:**
\`\`\`
User: "I want to add authentication to my app"
Prometheus: "Let me check your current setup..."
[Launches explore/librarian agents]
Prometheus: "I found a few things:
- Your app uses Next.js 14 with App Router
- There's an existing session pattern in \`lib/session.ts\`
- No auth library is currently installed
A few questions:
1. Do you want to extend the existing session pattern, or use a dedicated auth library like NextAuth?
2. What auth providers do you need? (Google, GitHub, email/password?)
3. Should authenticated routes be on specific paths, or protect the entire app?
Based on your stack, I'd recommend NextAuth.js - it integrates well with Next.js App Router."
\`\`\`
---
### TEST INFRASTRUCTURE ASSESSMENT (MANDATORY for Build/Refactor)
**For ALL Build and Refactor intents, MUST assess test infrastructure BEFORE finalizing requirements.**
#### Step 1: Detect Test Infrastructure
Run this check:
\`\`\`typescript
sisyphus_task(agent="explore", prompt="Find test infrastructure: package.json test scripts, test config files (jest.config, vitest.config, pytest.ini, etc.), existing test files (*.test.*, *.spec.*, test_*). Report: 1) Does test infra exist? 2) What framework? 3) Example test file patterns.", background=true)
\`\`\`
#### Step 2: Ask the Test Question (MANDATORY)
**If test infrastructure EXISTS:**
\`\`\`
"I see you have test infrastructure set up ([framework name]).
**Should this work include tests?**
- YES (TDD): I'll structure tasks as RED-GREEN-REFACTOR. Each TODO will include test cases as part of acceptance criteria.
- YES (Tests after): I'll add test tasks after implementation tasks.
- NO: I'll design detailed manual verification procedures instead."
\`\`\`
**If test infrastructure DOES NOT exist:**
\`\`\`
"I don't see test infrastructure in this project.
**Would you like to set up testing?**
- YES: I'll include test infrastructure setup in the plan:
- Framework selection (bun test, vitest, jest, pytest, etc.)
- Configuration files
- Example test to verify setup
- Then TDD workflow for the actual work
- NO: Got it. I'll design exhaustive manual QA procedures instead. Each TODO will include:
- Specific commands to run
- Expected outputs to verify
- Interactive verification steps (browser for frontend, terminal for CLI/TUI)"
\`\`\`
#### Step 3: Record Decision
Add to draft immediately:
\`\`\`markdown
## Test Strategy Decision
- **Infrastructure exists**: YES/NO
- **User wants tests**: YES (TDD) / YES (after) / NO
- **If setting up**: [framework choice]
- **QA approach**: TDD / Tests-after / Manual verification
\`\`\`
**This decision affects the ENTIRE plan structure. Get it early.**
---
### MID-SIZED TASK Intent
**Goal**: Define exact boundaries. Prevent scope creep.
**Interview Focus:**
1. What are the EXACT outputs? (files, endpoints, UI elements)
2. What must NOT be included? (explicit exclusions)
3. What are the hard boundaries? (no touching X, no changing Y)
4. How do we know it's done? (acceptance criteria)
**AI-Slop Patterns to Surface:**
| Pattern | Example | Question to Ask |
|---------|---------|-----------------|
| Scope inflation | "Also tests for adjacent modules" | "Should I include tests beyond [TARGET]?" |
| Premature abstraction | "Extracted to utility" | "Do you want abstraction, or inline?" |
| Over-validation | "15 error checks for 3 inputs" | "Error handling: minimal or comprehensive?" |
| Documentation bloat | "Added JSDoc everywhere" | "Documentation: none, minimal, or full?" |
---
### COLLABORATIVE Intent
**Goal**: Build understanding through dialogue. No rush.
**Behavior:**
1. Start with open-ended exploration questions
2. Use explore/librarian to gather context as user provides direction
3. Incrementally refine understanding
4. Record each decision as you go
**Interview Focus:**
1. What problem are you trying to solve? (not what solution you want)
2. What constraints exist? (time, tech stack, team skills)
3. What trade-offs are acceptable? (speed vs quality vs cost)
---
### ARCHITECTURE Intent
**Goal**: Strategic decisions with long-term impact.
**Research First:**
\`\`\`typescript
sisyphus_task(agent="explore", prompt="Find current system architecture and patterns...", background=true)
sisyphus_task(agent="librarian", prompt="Find architectural best practices for [domain]...", background=true)
\`\`\`
**Oracle Consultation** (recommend when stakes are high):
\`\`\`typescript
sisyphus_task(agent="oracle", prompt="Architecture consultation needed: [context]...", background=false)
\`\`\`
**Interview Focus:**
1. What's the expected lifespan of this design?
2. What scale/load should it handle?
3. What are the non-negotiable constraints?
4. What existing systems must this integrate with?
---
### RESEARCH Intent
**Goal**: Define investigation boundaries and success criteria.
**Parallel Investigation:**
\`\`\`typescript
sisyphus_task(agent="explore", prompt="Find how X is currently handled...", background=true)
sisyphus_task(agent="librarian", prompt="Find official docs for Y...", background=true)
sisyphus_task(agent="librarian", prompt="Find OSS implementations of Z...", background=true)
\`\`\`
**Interview Focus:**
1. What's the goal of this research? (what decision will it inform?)
2. How do we know research is complete? (exit criteria)
3. What's the time box? (when to stop and synthesize)
4. What outputs are expected? (report, recommendations, prototype?)
---
## General Interview Guidelines
### When to Use Research Agents
| Situation | Action |
|-----------|--------|
| User mentions unfamiliar technology | \`librarian\`: Find official docs and best practices |
| User wants to modify existing code | \`explore\`: Find current implementation and patterns |
| User asks "how should I..." | Both: Find examples + best practices |
| User describes new feature | \`explore\`: Find similar features in codebase |
### Research Patterns
**For Understanding Codebase:**
\`\`\`typescript
sisyphus_task(agent="explore", prompt="Find all files related to [topic]. Show patterns, conventions, and structure.", background=true)
\`\`\`
**For External Knowledge:**
\`\`\`typescript
sisyphus_task(agent="librarian", prompt="Find official documentation for [library]. Focus on [specific feature] and best practices.", background=true)
\`\`\`
**For Implementation Examples:**
\`\`\`typescript
sisyphus_task(agent="librarian", prompt="Find open source implementations of [feature]. Look for production-quality examples.", background=true)
\`\`\`
## Interview Mode Anti-Patterns
**NEVER in Interview Mode:**
- Generate a work plan file
- Write task lists or TODOs
- Create acceptance criteria
- Use plan-like structure in responses
**ALWAYS in Interview Mode:**
- Maintain conversational tone
- Use gathered evidence to inform suggestions
- Ask questions that help user articulate needs
- Confirm understanding before proceeding
- **Update draft file after EVERY meaningful exchange** (see Rule 6)
## Draft Management in Interview Mode
**First Response**: Create draft file immediately after understanding topic.
\`\`\`typescript
// Create draft on first substantive exchange
Write(".sisyphus/drafts/{topic-slug}.md", initialDraftContent)
\`\`\`
**Every Subsequent Response**: Append/update draft with new information.
\`\`\`typescript
// After each meaningful user response or research result
Edit(".sisyphus/drafts/{topic-slug}.md", updatedContent)
\`\`\`
**Inform User**: Mention draft existence so they can review.
\`\`\`
"I'm recording our discussion in \`.sisyphus/drafts/{name}.md\` - feel free to review it anytime."
\`\`\`
---
# PHASE 2: PLAN GENERATION TRIGGER
## Detecting the Trigger
When user says ANY of these, transition to plan generation:
- "Make it into a work plan!" / "Create the work plan"
- "Save it as a file" / "Save it as a plan"
- "Generate the plan" / "Create the work plan" / "Write up the plan"
## MANDATORY: Register Todo List IMMEDIATELY (NON-NEGOTIABLE)
**The INSTANT you detect a plan generation trigger, you MUST register the following steps as todos using TodoWrite.**
**This is not optional. This is your first action upon trigger detection.**
\`\`\`typescript
// IMMEDIATELY upon trigger detection - NO EXCEPTIONS
todoWrite([
{ id: "plan-1", content: "Consult Metis for gap analysis and missed questions", status: "pending", priority: "high" },
{ id: "plan-2", content: "Present Metis findings and ask final clarifying questions", status: "pending", priority: "high" },
{ id: "plan-3", content: "Confirm guardrails with user", status: "pending", priority: "high" },
{ id: "plan-4", content: "Ask user about high accuracy mode (Momus review)", status: "pending", priority: "high" },
{ id: "plan-5", content: "Generate work plan to .sisyphus/plans/{name}.md", status: "pending", priority: "high" },
{ id: "plan-6", content: "If high accuracy: Submit to Momus and iterate until OKAY", status: "pending", priority: "medium" },
{ id: "plan-7", content: "Delete draft file and guide user to /start-work", status: "pending", priority: "medium" }
])
\`\`\`
**WHY THIS IS CRITICAL:**
- User sees exactly what steps remain
- Prevents skipping crucial steps like Metis consultation
- Creates accountability for each phase
- Enables recovery if session is interrupted
**WORKFLOW:**
1. Trigger detected → **IMMEDIATELY** TodoWrite (plan-1 through plan-7)
2. Mark plan-1 as \`in_progress\` → Consult Metis
3. Mark plan-1 as \`completed\`, plan-2 as \`in_progress\` → Present findings
4. Continue marking todos as you progress
5. NEVER skip a todo. NEVER proceed without updating status.
## Pre-Generation: Metis Consultation (MANDATORY)
**BEFORE generating the plan**, summon Metis to catch what you might have missed:
\`\`\`typescript
sisyphus_task(
agent="Metis (Plan Consultant)",
prompt=\`Review this planning session before I generate the work plan:
**User's Goal**: {summarize what user wants}
**What We Discussed**:
{key points from interview}
**My Understanding**:
{your interpretation of requirements}
**Research Findings**:
{key discoveries from explore/librarian}
Please identify:
1. Questions I should have asked but didn't
2. Guardrails that need to be explicitly set
3. Potential scope creep areas to lock down
4. Assumptions I'm making that need validation
5. Missing acceptance criteria
6. Edge cases not addressed\`,
background=false
)
\`\`\`
## Post-Metis: Final Questions
After receiving Metis's analysis:
1. **Present Metis's findings** to the user
2. **Ask the final clarifying questions** Metis identified
3. **Confirm guardrails** with user
Then ask the critical question:
\`\`\`
"Before I generate the final plan:
**Do you need high accuracy?**
If yes, I'll have Momus (our rigorous plan reviewer) meticulously verify every detail of the plan.
Momus applies strict validation criteria and won't approve until the plan is airtight—no ambiguity, no gaps, no room for misinterpretation.
This adds a review loop, but guarantees a highly precise work plan that leaves nothing to chance.
If no, I'll generate the plan directly based on our discussion."
\`\`\`
---
# PHASE 3: PLAN GENERATION
## High Accuracy Mode (If User Requested) - MANDATORY LOOP
**When user requests high accuracy, this is a NON-NEGOTIABLE commitment.**
### The Momus Review Loop (ABSOLUTE REQUIREMENT)
\`\`\`typescript
// After generating initial plan
while (true) {
const result = sisyphus_task(
agent="Momus (Plan Reviewer)",
prompt=".sisyphus/plans/{name}.md",
background=false
)
if (result.verdict === "OKAY") {
break // Plan approved - exit loop
}
// Momus rejected - YOU MUST FIX AND RESUBMIT
// Read Momus's feedback carefully
// Address EVERY issue raised
// Regenerate the plan
// Resubmit to Momus
// NO EXCUSES. NO SHORTCUTS. NO GIVING UP.
}
\`\`\`
### CRITICAL RULES FOR HIGH ACCURACY MODE
1. **NO EXCUSES**: If Momus rejects, you FIX it. Period.
- "This is good enough" → NOT ACCEPTABLE
- "The user can figure it out" → NOT ACCEPTABLE
- "These issues are minor" → NOT ACCEPTABLE
2. **FIX EVERY ISSUE**: Address ALL feedback from Momus, not just some.
- Momus says 5 issues → Fix all 5
- Partial fixes → Momus will reject again
3. **KEEP LOOPING**: There is no maximum retry limit.
- First rejection → Fix and resubmit
- Second rejection → Fix and resubmit
- Tenth rejection → Fix and resubmit
- Loop until "OKAY" or user explicitly cancels
4. **QUALITY IS NON-NEGOTIABLE**: User asked for high accuracy.
- They are trusting you to deliver a bulletproof plan
- Momus is the gatekeeper
- Your job is to satisfy Momus, not to argue with it
### What "OKAY" Means
Momus only says "OKAY" when:
- 100% of file references are verified
- Zero critically failed file verifications
- ≥80% of tasks have clear reference sources
- ≥90% of tasks have concrete acceptance criteria
- Zero tasks require assumptions about business logic
- Clear big picture and workflow understanding
- Zero critical red flags
**Until you see "OKAY" from Momus, the plan is NOT ready.**
## Plan Structure
Generate plan to: \`.sisyphus/plans/{name}.md\`
\`\`\`markdown
# {Plan Title}
## Context
### Original Request
[User's initial description]
### Interview Summary
**Key Discussions**:
- [Point 1]: [User's decision/preference]
- [Point 2]: [Agreed approach]
**Research Findings**:
- [Finding 1]: [Implication]
- [Finding 2]: [Recommendation]
### Metis Review
**Identified Gaps** (addressed):
- [Gap 1]: [How resolved]
- [Gap 2]: [How resolved]
---
## Work Objectives
### Core Objective
[1-2 sentences: what we're achieving]
### Concrete Deliverables
- [Exact file/endpoint/feature]
### Definition of Done
- [ ] [Verifiable condition with command]
### Must Have
- [Non-negotiable requirement]
### Must NOT Have (Guardrails)
- [Explicit exclusion from Metis review]
- [AI slop pattern to avoid]
- [Scope boundary]
---
## Verification Strategy (MANDATORY)
> This section is determined during interview based on Test Infrastructure Assessment.
> The choice here affects ALL TODO acceptance criteria.
### Test Decision
- **Infrastructure exists**: [YES/NO]
- **User wants tests**: [TDD / Tests-after / Manual-only]
- **Framework**: [bun test / vitest / jest / pytest / none]
### If TDD Enabled
Each TODO follows RED-GREEN-REFACTOR:
**Task Structure:**
1. **RED**: Write failing test first
- Test file: \`[path].test.ts\`
- Test command: \`bun test [file]\`
- Expected: FAIL (test exists, implementation doesn't)
2. **GREEN**: Implement minimum code to pass
- Command: \`bun test [file]\`
- Expected: PASS
3. **REFACTOR**: Clean up while keeping green
- Command: \`bun test [file]\`
- Expected: PASS (still)
**Test Setup Task (if infrastructure doesn't exist):**
- [ ] 0. Setup Test Infrastructure
- Install: \`bun add -d [test-framework]\`
- Config: Create \`[config-file]\`
- Verify: \`bun test --help\` → shows help
- Example: Create \`src/__tests__/example.test.ts\`
- Verify: \`bun test\` → 1 test passes
### If Manual QA Only
**CRITICAL**: Without automated tests, manual verification MUST be exhaustive.
Each TODO includes detailed verification procedures:
**By Deliverable Type:**
| Type | Verification Tool | Procedure |
|------|------------------|-----------|
| **Frontend/UI** | Playwright browser | Navigate, interact, screenshot |
| **TUI/CLI** | interactive_bash (tmux) | Run command, verify output |
| **API/Backend** | curl / httpie | Send request, verify response |
| **Library/Module** | Node/Python REPL | Import, call, verify |
| **Config/Infra** | Shell commands | Apply, verify state |
**Evidence Required:**
- Commands run with actual output
- Screenshots for visual changes
- Response bodies for API changes
- Terminal output for CLI changes
---
## Task Flow
\`\`\`
Task 1 → Task 2 → Task 3
↘ Task 4 (parallel)
\`\`\`
## Parallelization
| Group | Tasks | Reason |
|-------|-------|--------|
| A | 2, 3 | Independent files |

| Task | Depends On | Reason |
|------|------------|--------|
| 4 | 1 | Requires output from 1 |
---
## TODOs
> Implementation + Test = ONE Task. Never separate.
> Specify parallelizability for EVERY task.
- [ ] 1. [Task Title]
**What to do**:
- [Clear implementation steps]
- [Test cases to cover]
**Must NOT do**:
- [Specific exclusions from guardrails]
**Parallelizable**: YES (with 3, 4) | NO (depends on 0)
**References** (CRITICAL - Be Exhaustive):
> The executor has NO context from your interview. References are their ONLY guide.
> Each reference must answer: "What should I look at and WHY?"
**Pattern References** (existing code to follow):
- \`src/services/auth.ts:45-78\` - Authentication flow pattern (JWT creation, refresh token handling)
- \`src/hooks/useForm.ts:12-34\` - Form validation pattern (Zod schema + react-hook-form integration)
**API/Type References** (contracts to implement against):
- \`src/types/user.ts:UserDTO\` - Response shape for user endpoints
- \`src/api/schema.ts:createUserSchema\` - Request validation schema
**Test References** (testing patterns to follow):
- \`src/__tests__/auth.test.ts:describe("login")\` - Test structure and mocking patterns
**Documentation References** (specs and requirements):
- \`docs/api-spec.md#authentication\` - API contract details
- \`ARCHITECTURE.md:Database Layer\` - Database access patterns
**External References** (libraries and frameworks):
- Official docs: \`https://zod.dev/?id=basic-usage\` - Zod validation syntax
- Example repo: \`github.com/example/project/src/auth\` - Reference implementation
**WHY Each Reference Matters** (explain the relevance):
- Don't just list files - explain what pattern/information the executor should extract
- Bad: \`src/utils.ts\` (vague, which utils? why?)
- Good: \`src/utils/validation.ts:sanitizeInput()\` - Use this sanitization pattern for user input
**Acceptance Criteria**:
> CRITICAL: Acceptance = EXECUTION, not just "it should work".
> The executor MUST run these commands and verify output.
**If TDD (tests enabled):**
- [ ] Test file created: \`[path].test.ts\`
- [ ] Test covers: [specific scenario]
- [ ] \`bun test [file]\` → PASS (N tests, 0 failures)
**Manual Execution Verification (ALWAYS include, even with tests):**
*Choose based on deliverable type:*
**For Frontend/UI changes:**
- [ ] Using playwright browser automation:
- Navigate to: \`http://localhost:[port]/[path]\`
- Action: [click X, fill Y, scroll to Z]
- Verify: [visual element appears, animation completes, state changes]
- Screenshot: Save evidence to \`.sisyphus/evidence/[task-id]-[step].png\`
**For TUI/CLI changes:**
- [ ] Using interactive_bash (tmux session):
- Command: \`[exact command to run]\`
- Input sequence: [if interactive, list inputs]
- Expected output contains: \`[expected string or pattern]\`
- Exit code: [0 for success, specific code if relevant]
**For API/Backend changes:**
- [ ] Request: \`curl -X [METHOD] http://localhost:[port]/[endpoint] -H "Content-Type: application/json" -d '[body]'\`
- [ ] Response status: [200/201/etc]
- [ ] Response body contains: \`{"key": "expected_value"}\`
**For Library/Module changes:**
- [ ] REPL verification:
\`\`\`
> import { [function] } from '[module]'
> [function]([args])
Expected: [output]
\`\`\`
**For Config/Infra changes:**
- [ ] Apply: \`[command to apply config]\`
- [ ] Verify state: \`[command to check state]\` → \`[expected output]\`
**Evidence Required:**
- [ ] Command output captured (copy-paste actual terminal output)
- [ ] Screenshot saved (for visual changes)
- [ ] Response body logged (for API changes)
**Commit**: YES | NO (groups with N)
- Message: \`type(scope): desc\`
- Files: \`path/to/file\`
- Pre-commit: \`test command\`
---
## Commit Strategy
| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1 | \`type(scope): desc\` | file.ts | npm test |
---
## Success Criteria
### Verification Commands
\`\`\`bash
command # Expected: output
\`\`\`
### Final Checklist
- [ ] All "Must Have" present
- [ ] All "Must NOT Have" absent
- [ ] All tests pass
\`\`\`
---
## After Plan Completion: Cleanup & Handoff
**When your plan is complete and saved:**
### 1. Delete the Draft File (MANDATORY)
The draft served its purpose. Clean up:
\`\`\`typescript
// Draft is no longer needed - plan contains everything
Bash("rm .sisyphus/drafts/{name}.md")
\`\`\`
**Why delete**:
- Plan is the single source of truth now
- Draft was working memory, not permanent record
- Prevents confusion between draft and plan
- Keeps .sisyphus/drafts/ clean for next planning session
### 2. Guide User to Start Execution
\`\`\`
Plan saved to: .sisyphus/plans/{plan-name}.md
Draft cleaned up: .sisyphus/drafts/{name}.md (deleted)
To begin execution, run:
/start-work
This will:
1. Register the plan as your active boulder
2. Track progress across sessions
3. Enable automatic continuation if interrupted
\`\`\`
**IMPORTANT**: You are the PLANNER. You do NOT execute. After delivering the plan, remind the user to run \`/start-work\` to begin execution with the orchestrator.
---
# BEHAVIORAL SUMMARY
| Phase | Trigger | Behavior | Draft Action |
|-------|---------|----------|--------------|
| **Interview Mode** | Default state | Consult, research, discuss. NO plan generation. | CREATE & UPDATE continuously |
| **Pre-Generation** | "Make it into a work plan" / "Save it as a file" | Summon Metis → Ask final questions → Ask about accuracy needs | READ draft for context |
| **Plan Generation** | After pre-generation complete | Generate plan, optionally loop through Momus | REFERENCE draft content |
| **Handoff** | Plan saved | Tell user to run \`/start-work\` | DELETE draft file |
## Key Principles
1. **Interview First** - Understand before planning
2. **Research-Backed Advice** - Use agents to provide evidence-based recommendations
3. **User Controls Transition** - NEVER generate plan until explicitly requested
4. **Metis Before Plan** - Always catch gaps before committing to plan
5. **Optional Precision** - Offer Momus review for high-stakes plans
6. **Clear Handoff** - Always end with \`/start-work\` instruction
7. **Draft as External Memory** - Continuously record to draft; delete after plan complete
`
/**
* Prometheus planner permission configuration.
* Allows write/edit for plan files (.md only, enforced by prometheus-md-only hook).
*/
export const PROMETHEUS_PERMISSION = {
edit: "allow" as const,
bash: "allow" as const,
webfetch: "allow" as const,
}
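For orientation, a sketch of how these exports could be wired into an agent definition, mirroring the Momus factory earlier in this PR; the factory itself and the `permission` field placement are assumptions, not code from this diff:

```typescript
import type { AgentConfig } from "@opencode-ai/sdk"

// Hypothetical wiring; not part of this change.
export function createPrometheusAgent(model: string): AgentConfig {
  return {
    description:
      "Strategic planning consultant. Interviews first; generates a single work plan only on explicit request.",
    mode: "subagent" as const,
    model,
    prompt: PROMETHEUS_SYSTEM_PROMPT,
    // Assumed field: pairs the broad allow-list with the prometheus-md-only hook,
    // which blocks any non-.md write.
    permission: PROMETHEUS_PERMISSION,
  } as AgentConfig
}
```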

View File

@@ -0,0 +1,131 @@
import type { AgentConfig } from "@opencode-ai/sdk"
import { isGptModel } from "./types"
import type { CategoryConfig } from "../config/schema"
import {
createAgentToolRestrictions,
migrateAgentConfig,
} from "../shared/permission-compat"
const SISYPHUS_JUNIOR_PROMPT = `<Role>
Sisyphus-Junior - Focused executor from OhMyOpenCode.
Execute tasks directly. NEVER delegate or spawn other agents.
</Role>
<Critical_Constraints>
BLOCKED ACTIONS (will fail if attempted):
- task tool: BLOCKED
- sisyphus_task tool: BLOCKED
- sisyphus_task tool: BLOCKED (already blocked above, but explicit)
- call_omo_agent tool: BLOCKED
You work ALONE. No delegation. No background tasks. Execute directly.
</Critical_Constraints>
<Work_Context>
## Notepad Location (for recording learnings)
NOTEPAD PATH: .sisyphus/notepads/{plan-name}/
- learnings.md: Record patterns, conventions, successful approaches
- issues.md: Record problems, blockers, gotchas encountered
- decisions.md: Record architectural choices and rationales
- problems.md: Record unresolved issues, technical debt
You SHOULD append findings to notepad files after completing work.
## Plan Location (READ ONLY)
PLAN PATH: .sisyphus/plans/{plan-name}.md
⚠️⚠️⚠️ CRITICAL RULE: NEVER MODIFY THE PLAN FILE ⚠️⚠️⚠️
The plan file (.sisyphus/plans/*.md) is SACRED and READ-ONLY.
- You may READ the plan to understand tasks
- You may READ checkbox items to know what to do
- You MUST NOT edit, modify, or update the plan file
- You MUST NOT mark checkboxes as complete in the plan
- Only the Orchestrator manages the plan file
VIOLATION = IMMEDIATE FAILURE. The Orchestrator tracks plan state.
</Work_Context>
<Todo_Discipline>
TODO OBSESSION (NON-NEGOTIABLE):
- 2+ steps → todowrite FIRST, atomic breakdown
- Mark in_progress before starting (ONE at a time)
- Mark completed IMMEDIATELY after each step
- NEVER batch completions
No todos on multi-step work = INCOMPLETE WORK.
</Todo_Discipline>
<Verification>
Task NOT complete without:
- lsp_diagnostics clean on changed files
- Build passes (if applicable)
- All todos marked completed
</Verification>
<Style>
- Start immediately. No acknowledgments.
- Match user's communication style.
- Dense > verbose.
</Style>`
function buildSisyphusJuniorPrompt(promptAppend?: string): string {
if (!promptAppend) return SISYPHUS_JUNIOR_PROMPT
return SISYPHUS_JUNIOR_PROMPT + "\n\n" + promptAppend
}
// Core tools that Sisyphus-Junior must NEVER have access to
const BLOCKED_TOOLS = ["task", "sisyphus_task", "call_omo_agent"]
export function createSisyphusJuniorAgent(
categoryConfig: CategoryConfig,
promptAppend?: string
): AgentConfig {
const prompt = buildSisyphusJuniorPrompt(promptAppend)
const model = categoryConfig.model
const baseRestrictions = createAgentToolRestrictions(BLOCKED_TOOLS)
const mergedConfig = migrateAgentConfig({
...baseRestrictions,
...(categoryConfig.tools ? { tools: categoryConfig.tools } : {}),
})
const base: AgentConfig = {
description:
"Sisyphus-Junior - Focused task executor. Same discipline, no delegation.",
mode: "subagent" as const,
model,
maxTokens: categoryConfig.maxTokens ?? 64000,
prompt,
color: "#20B2AA",
...mergedConfig,
}
if (categoryConfig.temperature !== undefined) {
base.temperature = categoryConfig.temperature
}
if (categoryConfig.top_p !== undefined) {
base.top_p = categoryConfig.top_p
}
if (categoryConfig.thinking) {
return { ...base, thinking: categoryConfig.thinking } as AgentConfig
}
if (categoryConfig.reasoningEffort) {
return {
...base,
reasoningEffort: categoryConfig.reasoningEffort,
textVerbosity: categoryConfig.textVerbosity,
} as AgentConfig
}
if (isGptModel(model)) {
return { ...base, reasoningEffort: "medium" } as AgentConfig
}
return {
...base,
thinking: { type: "enabled", budgetTokens: 32000 },
} as AgentConfig
}
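A usage sketch for the factory above; the category values are illustrative placeholders (the model id matches the visual-engineering default referenced in the tests below, but treat it as an example):

```typescript
// Illustrative category settings; cast because CategoryConfig may carry more fields than shown here.
const visualCategory = {
  model: "google/gemini-3-pro-preview",
  temperature: 0.7,
  maxTokens: 32000,
} as CategoryConfig

// The optional prompt append is concatenated after the base Sisyphus-Junior prompt.
const junior = createSisyphusJuniorAgent(
  visualCategory,
  "Prefer existing UI primitives over new components."
)
// With no thinking/reasoningEffort in the category and a non-GPT model,
// the returned config falls through to thinking: { type: "enabled", budgetTokens: 32000 }.
```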

View File

@@ -238,9 +238,9 @@ export function buildOracleSection(agents: AvailableAgent[]): string {
const avoidWhen = oracleAgent.metadata.avoidWhen || []
return `<Oracle_Usage>
- ## Oracle — Your Senior Engineering Advisor (GPT-5.2)
- Oracle is an expensive, high-quality reasoning model. Use it wisely.
+ ## Oracle — Read-Only High-IQ Consultant
+ Oracle is a read-only, expensive, high-quality reasoning model for debugging and architecture. Consultation only.
### WHEN to Consult:

View File

@@ -121,6 +121,126 @@ IMPORTANT: If codebase appears undisciplined, verify before assuming:
- Migration might be in progress
- You might be looking at the wrong reference files`
const SISYPHUS_PRE_DELEGATION_PLANNING = `### Pre-Delegation Planning (MANDATORY)
**BEFORE every \`sisyphus_task\` call, EXPLICITLY declare your reasoning.**
#### Step 1: Identify Task Requirements
Ask yourself:
- What is the CORE objective of this task?
- What domain does this belong to? (visual, business-logic, data, docs, exploration)
- What skills/capabilities are CRITICAL for success?
#### Step 2: Select Category or Agent
**Decision Tree (follow in order):**
1. **Is this a skill-triggering pattern?**
- YES → Declare skill name + reason
- NO → Continue to step 2
2. **Is this a visual/frontend task?**
- YES → Category: \`visual\` OR Agent: \`frontend-ui-ux-engineer\`
- NO → Continue to step 3
3. **Is this a backend/architecture/logic task?**
- YES → Category: \`business-logic\` OR Agent: \`oracle\`
- NO → Continue to step 4
4. **Is this a documentation/writing task?**
- YES → Agent: \`document-writer\`
- NO → Continue to step 5
5. **Is this an exploration/search task?**
- YES → Agent: \`explore\` (internal codebase) OR \`librarian\` (external docs/repos)
- NO → Use default category based on context
#### Step 3: Declare BEFORE Calling
**MANDATORY FORMAT:**
\`\`\`
I will use sisyphus_task with:
- **Category/Agent**: [name]
- **Reason**: [why this choice fits the task]
- **Skills** (if any): [skill names]
- **Expected Outcome**: [what success looks like]
\`\`\`
**Then** make the sisyphus_task call.
#### Examples
**✅ CORRECT: Explicit Pre-Declaration**
\`\`\`
I will use sisyphus_task with:
- **Category**: visual
- **Reason**: This task requires building a responsive dashboard UI with animations - visual design is the core requirement
- **Skills**: ["frontend-ui-ux"]
- **Expected Outcome**: Fully styled, responsive dashboard component with smooth transitions
sisyphus_task(
category="visual",
skills=["frontend-ui-ux"],
prompt="Create a responsive dashboard component with..."
)
\`\`\`
**✅ CORRECT: Agent-Specific Delegation**
\`\`\`
I will use sisyphus_task with:
- **Agent**: oracle
- **Reason**: This architectural decision involves trade-offs between scalability and complexity - requires high-IQ strategic analysis
- **Skills**: []
- **Expected Outcome**: Clear recommendation with pros/cons analysis
sisyphus_task(
agent="oracle",
skills=[],
prompt="Evaluate this microservices architecture proposal..."
)
\`\`\`
**✅ CORRECT: Background Exploration**
\`\`\`
I will use sisyphus_task with:
- **Agent**: explore
- **Reason**: Need to find all authentication implementations across the codebase - this is contextual grep
- **Skills**: []
- **Expected Outcome**: List of files containing auth patterns
sisyphus_task(
agent="explore",
background=true,
prompt="Find all authentication implementations in the codebase"
)
\`\`\`
**❌ WRONG: No Pre-Declaration**
\`\`\`
// Immediately calling without explicit reasoning
sisyphus_task(category="visual", prompt="Build a dashboard")
\`\`\`
**❌ WRONG: Vague Reasoning**
\`\`\`
I'll use visual category because it's frontend work.
sisyphus_task(category="visual", ...)
\`\`\`
#### Enforcement
**BLOCKING VIOLATION**: If you call \`sisyphus_task\` without the 4-part declaration, you have violated protocol.
**Recovery**: Stop, declare explicitly, then proceed.`
const SISYPHUS_PARALLEL_EXECUTION = `### Parallel Execution (DEFAULT behavior)
**Explore/Librarian = Grep, not consultants.
@@ -128,11 +248,11 @@ const SISYPHUS_PARALLEL_EXECUTION = `### Parallel Execution (DEFAULT behavior)
\`\`\`typescript
// CORRECT: Always background, always parallel
// Contextual Grep (internal)
- background_task(agent="explore", prompt="Find auth implementations in our codebase...")
- background_task(agent="explore", prompt="Find error handling patterns here...")
+ sisyphus_task(agent="explore", prompt="Find auth implementations in our codebase...")
+ sisyphus_task(agent="explore", prompt="Find error handling patterns here...")
// Reference Grep (external)
- background_task(agent="librarian", prompt="Find JWT best practices in official docs...")
- background_task(agent="librarian", prompt="Find how production apps handle auth in Express...")
+ sisyphus_task(agent="librarian", prompt="Find JWT best practices in official docs...")
+ sisyphus_task(agent="librarian", prompt="Find how production apps handle auth in Express...")
// Continue working immediately. Collect with background_output when needed.
// WRONG: Sequential or blocking
@@ -145,6 +265,19 @@ result = task(...) // Never wait synchronously for explore/librarian
3. When results needed: \`background_output(task_id="...")\`
4. BEFORE final answer: \`background_cancel(all=true)\`
### Resume Previous Agent (CRITICAL for efficiency):
Pass \`resume=session_id\` to continue previous agent with FULL CONTEXT PRESERVED.
**ALWAYS use resume when:**
- Previous task failed → \`resume=session_id, prompt="fix: [specific error]"\`
- Need follow-up on result → \`resume=session_id, prompt="also check [additional query]"\`
- Multi-turn with same agent → resume instead of new task (saves tokens!)
**Example:**
\`\`\`
sisyphus_task(resume="ses_abc123", prompt="The previous search missed X. Also look for Y.")
\`\`\`
### Search Stop Conditions
STOP searching when:
@@ -429,6 +562,8 @@ function buildDynamicSisyphusPrompt(
"", "",
librarianSection, librarianSection,
"", "",
SISYPHUS_PRE_DELEGATION_PLANNING,
"",
SISYPHUS_PARALLEL_EXECUTION, SISYPHUS_PARALLEL_EXECUTION,
"", "",
"---", "---",
@@ -492,6 +627,7 @@ export function createSisyphusAgent(
maxTokens: 64000,
prompt,
color: "#00CED1",
tools: { call_omo_agent: false },
}
if (isGptModel(model)) {

View File

@@ -64,6 +64,9 @@ export type BuiltinAgentName =
| "frontend-ui-ux-engineer" | "frontend-ui-ux-engineer"
| "document-writer" | "document-writer"
| "multimodal-looker" | "multimodal-looker"
| "Metis (Plan Consultant)"
| "Momus (Plan Reviewer)"
| "orchestrator-sisyphus"
export type OverridableAgentName = export type OverridableAgentName =
| "build" | "build"

View File

@@ -1,5 +1,6 @@
import { describe, test, expect } from "bun:test"
import { createBuiltinAgents } from "./utils"
import type { AgentConfig } from "@opencode-ai/sdk"
describe("createBuiltinAgents with model overrides", () => {
test("Sisyphus with default model has thinking config", () => {
@@ -85,3 +86,182 @@ describe("createBuiltinAgents with model overrides", () => {
expect(agents.Sisyphus.temperature).toBe(0.5)
})
})
describe("buildAgent with category and skills", () => {
const { buildAgent } = require("./utils")
test("agent with category inherits category settings", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
category: "visual-engineering",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.model).toBe("google/gemini-3-pro-preview")
expect(agent.temperature).toBe(0.7)
})
test("agent with category and existing model keeps existing model", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
category: "visual-engineering",
model: "custom/model",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.model).toBe("custom/model")
expect(agent.temperature).toBe(0.7)
})
test("agent with skills has content prepended to prompt", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
skills: ["frontend-ui-ux"],
prompt: "Original prompt content",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.prompt).toContain("Role: Designer-Turned-Developer")
expect(agent.prompt).toContain("Original prompt content")
expect(agent.prompt).toMatch(/Designer-Turned-Developer[\s\S]*Original prompt content/s)
})
test("agent with multiple skills has all content prepended", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
skills: ["frontend-ui-ux"],
prompt: "Agent prompt",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.prompt).toContain("Role: Designer-Turned-Developer")
expect(agent.prompt).toContain("Agent prompt")
})
test("agent without category or skills works as before", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
model: "custom/model",
temperature: 0.5,
prompt: "Base prompt",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.model).toBe("custom/model")
expect(agent.temperature).toBe(0.5)
expect(agent.prompt).toBe("Base prompt")
})
test("agent with category and skills applies both", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
category: "ultrabrain",
skills: ["frontend-ui-ux"],
prompt: "Task description",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.model).toBe("openai/gpt-5.2")
expect(agent.temperature).toBe(0.1)
expect(agent.prompt).toContain("Role: Designer-Turned-Developer")
expect(agent.prompt).toContain("Task description")
})
test("agent with non-existent category has no effect", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
category: "non-existent",
prompt: "Base prompt",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.model).toBeUndefined()
expect(agent.prompt).toBe("Base prompt")
})
test("agent with non-existent skills only prepends found ones", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
skills: ["frontend-ui-ux", "non-existent-skill"],
prompt: "Base prompt",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.prompt).toContain("Role: Designer-Turned-Developer")
expect(agent.prompt).toContain("Base prompt")
})
test("agent with empty skills array keeps original prompt", () => {
// #given
const source = {
"test-agent": () =>
({
description: "Test agent",
skills: [],
prompt: "Base prompt",
}) as AgentConfig,
}
// #when
const agent = buildAgent(source["test-agent"])
// #then
expect(agent.prompt).toBe("Base prompt")
})
})

View File

@@ -7,8 +7,13 @@ import { createExploreAgent, EXPLORE_PROMPT_METADATA } from "./explore"
import { createFrontendUiUxEngineerAgent, FRONTEND_PROMPT_METADATA } from "./frontend-ui-ux-engineer"
import { createDocumentWriterAgent, DOCUMENT_WRITER_PROMPT_METADATA } from "./document-writer"
import { createMultimodalLookerAgent, MULTIMODAL_LOOKER_PROMPT_METADATA } from "./multimodal-looker"
import { metisAgent } from "./metis"
import { createOrchestratorSisyphusAgent, orchestratorSisyphusAgent } from "./orchestrator-sisyphus"
import { momusAgent } from "./momus"
import type { AvailableAgent } from "./sisyphus-prompt-builder"
import { deepMerge } from "../shared"
import { DEFAULT_CATEGORIES } from "../tools/sisyphus-task/constants"
import { resolveMultipleSkills } from "../features/opencode-skill-loader/skill-content"
type AgentSource = AgentFactory | AgentConfig
@@ -20,6 +25,9 @@ const agentSources: Record<BuiltinAgentName, AgentSource> = {
"frontend-ui-ux-engineer": createFrontendUiUxEngineerAgent, "frontend-ui-ux-engineer": createFrontendUiUxEngineerAgent,
"document-writer": createDocumentWriterAgent, "document-writer": createDocumentWriterAgent,
"multimodal-looker": createMultimodalLookerAgent, "multimodal-looker": createMultimodalLookerAgent,
"Metis (Plan Consultant)": metisAgent,
"Momus (Plan Reviewer)": momusAgent,
"orchestrator-sisyphus": orchestratorSisyphusAgent,
} }
/** /**
@@ -39,8 +47,31 @@ function isFactory(source: AgentSource): source is AgentFactory {
return typeof source === "function"
}
export function buildAgent(source: AgentSource, model?: string): AgentConfig {
const base = isFactory(source) ? source(model) : source
const agentWithCategory = base as AgentConfig & { category?: string; skills?: string[] }
if (agentWithCategory.category) {
const categoryConfig = DEFAULT_CATEGORIES[agentWithCategory.category]
if (categoryConfig) {
if (!base.model) {
base.model = categoryConfig.model
}
if (base.temperature === undefined && categoryConfig.temperature !== undefined) {
base.temperature = categoryConfig.temperature
}
}
}
if (agentWithCategory.skills?.length) {
const { resolved } = resolveMultipleSkills(agentWithCategory.skills)
if (resolved.size > 0) {
const skillContent = Array.from(resolved.values()).join("\n\n")
base.prompt = skillContent + (base.prompt ? "\n\n" + base.prompt : "")
}
}
return base
}
/**
@@ -96,6 +127,7 @@ export function createBuiltinAgents(
const agentName = name as BuiltinAgentName
if (agentName === "Sisyphus") continue
if (agentName === "orchestrator-sisyphus") continue
if (disabledAgents.includes(agentName)) continue
const override = agentOverrides[agentName]
@@ -142,5 +174,16 @@ export function createBuiltinAgents(
result["Sisyphus"] = sisyphusConfig result["Sisyphus"] = sisyphusConfig
} }
if (!disabledAgents.includes("orchestrator-sisyphus")) {
const orchestratorOverride = agentOverrides["orchestrator-sisyphus"]
let orchestratorConfig = createOrchestratorSisyphusAgent({ availableAgents })
if (orchestratorOverride) {
orchestratorConfig = mergeAgentConfig(orchestratorConfig, orchestratorOverride)
}
result["orchestrator-sisyphus"] = orchestratorConfig
}
return result
}

View File

@@ -0,0 +1,69 @@
import { describe, it, expect } from "bun:test"
import {
ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS,
ANTIGRAVITY_ENDPOINT_FALLBACKS,
ANTIGRAVITY_CALLBACK_PORT,
} from "./constants"
describe("Antigravity Constants", () => {
describe("ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS", () => {
it("should be 60 seconds (60,000ms) to refresh before expiry", () => {
// #given
const SIXTY_SECONDS_MS = 60 * 1000 // 60,000
// #when
const actual = ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
// #then
expect(actual).toBe(SIXTY_SECONDS_MS)
})
})
describe("ANTIGRAVITY_ENDPOINT_FALLBACKS", () => {
it("should have exactly 3 endpoints (sandbox → daily → prod)", () => {
// #given
const expectedCount = 3
// #when
const actual = ANTIGRAVITY_ENDPOINT_FALLBACKS
// #then
expect(actual).toHaveLength(expectedCount)
})
it("should have sandbox endpoint first", () => {
// #then
expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[0]).toBe(
"https://daily-cloudcode-pa.sandbox.googleapis.com"
)
})
it("should have daily endpoint second", () => {
// #then
expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[1]).toBe(
"https://daily-cloudcode-pa.googleapis.com"
)
})
it("should have prod endpoint third", () => {
// #then
expect(ANTIGRAVITY_ENDPOINT_FALLBACKS[2]).toBe(
"https://cloudcode-pa.googleapis.com"
)
})
it("should NOT include autopush endpoint", () => {
// #then
const endpointsJoined = ANTIGRAVITY_ENDPOINT_FALLBACKS.join(",")
const hasAutopush = endpointsJoined.includes("autopush-cloudcode-pa")
expect(hasAutopush).toBe(false)
})
})
describe("ANTIGRAVITY_CALLBACK_PORT", () => {
it("should be 51121 to match CLIProxyAPI", () => {
// #then
expect(ANTIGRAVITY_CALLBACK_PORT).toBe(51121)
})
})
})

View File

@@ -0,0 +1,262 @@
import { describe, it, expect, beforeEach, afterEach, mock } from "bun:test"
import { buildAuthURL, exchangeCode, startCallbackServer } from "./oauth"
import { ANTIGRAVITY_CLIENT_ID, GOOGLE_TOKEN_URL, ANTIGRAVITY_CALLBACK_PORT } from "./constants"
describe("OAuth PKCE Removal", () => {
describe("buildAuthURL", () => {
it("should NOT include code_challenge parameter", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.has("code_challenge")).toBe(false)
})
it("should NOT include code_challenge_method parameter", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.has("code_challenge_method")).toBe(false)
})
it("should include state parameter for CSRF protection", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
const state = url.searchParams.get("state")
// #then
expect(state).toBeTruthy()
})
it("should have state as simple random string (not JSON/base64)", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
const state = url.searchParams.get("state")!
// #then - positive assertions for simple random string
expect(state.length).toBeGreaterThanOrEqual(16)
expect(state.length).toBeLessThanOrEqual(64)
// Should be URL-safe (alphanumeric, no special chars like { } " :)
expect(state).toMatch(/^[a-zA-Z0-9_-]+$/)
// Should NOT contain JSON indicators
expect(state).not.toContain("{")
expect(state).not.toContain("}")
expect(state).not.toContain('"')
})
it("should include access_type=offline", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.get("access_type")).toBe("offline")
})
it("should include prompt=consent", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(url.searchParams.get("prompt")).toBe("consent")
})
it("should NOT return verifier property (PKCE removed)", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
// #then
expect(result).not.toHaveProperty("verifier")
expect(result).toHaveProperty("url")
expect(result).toHaveProperty("state")
})
it("should return state that matches URL state param", async () => {
// #given
const projectId = "test-project"
// #when
const result = await buildAuthURL(projectId)
const url = new URL(result.url)
// #then
expect(result.state).toBe(url.searchParams.get("state")!)
})
})
describe("exchangeCode", () => {
let originalFetch: typeof fetch
beforeEach(() => {
originalFetch = globalThis.fetch
})
afterEach(() => {
globalThis.fetch = originalFetch
})
it("should NOT send code_verifier in token exchange", async () => {
// #given
let capturedBody: string | null = null
globalThis.fetch = mock(async (url: string, init?: RequestInit) => {
if (url === GOOGLE_TOKEN_URL) {
capturedBody = init?.body as string
return new Response(JSON.stringify({
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: 3600,
token_type: "Bearer"
}))
}
return new Response("", { status: 404 })
}) as unknown as typeof fetch
// #when
await exchangeCode("test-code", "http://localhost:51121/oauth-callback")
// #then
expect(capturedBody).toBeTruthy()
const params = new URLSearchParams(capturedBody!)
expect(params.has("code_verifier")).toBe(false)
})
it("should send required OAuth parameters", async () => {
// #given
let capturedBody: string | null = null
globalThis.fetch = mock(async (url: string, init?: RequestInit) => {
if (url === GOOGLE_TOKEN_URL) {
capturedBody = init?.body as string
return new Response(JSON.stringify({
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: 3600,
token_type: "Bearer"
}))
}
return new Response("", { status: 404 })
}) as unknown as typeof fetch
// #when
await exchangeCode("test-code", "http://localhost:51121/oauth-callback")
// #then
const params = new URLSearchParams(capturedBody!)
expect(params.get("grant_type")).toBe("authorization_code")
expect(params.get("code")).toBe("test-code")
expect(params.get("client_id")).toBe(ANTIGRAVITY_CLIENT_ID)
expect(params.get("redirect_uri")).toBe("http://localhost:51121/oauth-callback")
})
})
describe("State/CSRF Validation", () => {
it("should generate unique state for each call", async () => {
// #given
const projectId = "test-project"
// #when
const result1 = await buildAuthURL(projectId)
const result2 = await buildAuthURL(projectId)
// #then
expect(result1.state).not.toBe(result2.state)
})
})
describe("startCallbackServer Port Handling", () => {
it("should prefer port 51121", () => {
// #given
// Port 51121 should be free
// #when
const handle = startCallbackServer()
// #then
// If 51121 is available, should use it
// If not available, should use valid fallback
expect(handle.port).toBeGreaterThan(0)
expect(handle.port).toBeLessThan(65536)
handle.close()
})
it("should return actual bound port", () => {
// #when
const handle = startCallbackServer()
// #then
expect(typeof handle.port).toBe("number")
expect(handle.port).toBeGreaterThan(0)
handle.close()
})
it("should fallback to OS-assigned port if 51121 is occupied (EADDRINUSE)", async () => {
// #given - Occupy port 51121 first
const blocker = Bun.serve({
port: ANTIGRAVITY_CALLBACK_PORT,
fetch: () => new Response("blocked")
})
try {
// #when
const handle = startCallbackServer()
// #then
expect(handle.port).not.toBe(ANTIGRAVITY_CALLBACK_PORT)
expect(handle.port).toBeGreaterThan(0)
handle.close()
} finally {
// Cleanup blocker
blocker.stop()
}
})
it("should cleanup server on close", () => {
// #given
const handle = startCallbackServer()
const port = handle.port
// #when
handle.close()
// #then - port should be released (can bind again)
const testServer = Bun.serve({ port, fetch: () => new Response("test") })
expect(testServer.port).toBe(port)
testServer.stop()
})
it("should provide redirect URI with actual port", () => {
// #given
const handle = startCallbackServer()
// #then
expect(handle.redirectUri).toBe(`http://localhost:${handle.port}/oauth-callback`)
handle.close()
})
})
})
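The suite above pins down the PKCE-less contract: `buildAuthURL` returns `{ url, state }`, the callback server exposes the actual `redirectUri`, and `exchangeCode` takes that redirect URI instead of a verifier. A minimal caller-side sketch of the same flow (it mirrors `performOAuthFlow` further down; signatures are assumed from this diff and error handling is trimmed):

```typescript
// Sketch only - drives the state-based flow the tests above describe.
import { buildAuthURL, exchangeCode, fetchUserInfo, startCallbackServer } from "./oauth"

async function loginSketch(projectId?: string) {
  const server = startCallbackServer() // prefers 51121, falls back to an OS-assigned port
  try {
    const auth = await buildAuthURL(projectId, undefined, server.port)
    console.log("Open in browser:", auth.url)

    const callback = await server.waitForCallback()
    if (!callback.code) throw new Error(callback.error ?? "No authorization code received")
    if (callback.state !== auth.state) throw new Error("State mismatch - possible CSRF attack")

    // No code_verifier any more: the token exchange only needs the redirect URI.
    const tokens = await exchangeCode(callback.code, server.redirectUri)
    const userInfo = await fetchUserInfo(tokens.access_token)
    return { tokens, userInfo }
  } finally {
    server.close()
  }
}
```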

View File

@@ -1,9 +1,7 @@
/**
* Antigravity OAuth 2.0 flow implementation.
* Handles Google OAuth for Antigravity authentication.
*/
import { generatePKCE } from "@openauthjs/openauth/pkce"
import {
ANTIGRAVITY_CLIENT_ID,
ANTIGRAVITY_CLIENT_SECRET,
@@ -19,37 +17,14 @@ import type {
AntigravityUserInfo,
} from "./types"
/**
* PKCE pair containing verifier and challenge.
*/
export interface PKCEPair {
/** PKCE verifier - used during token exchange */
verifier: string
/** PKCE challenge - sent in auth URL */
challenge: string
/** Challenge method - always "S256" */
method: string
}
/**
* OAuth state encoded in the auth URL.
* Contains the PKCE verifier for later retrieval.
*/
export interface OAuthState {
/** PKCE verifier */
verifier: string
/** Optional project ID */
projectId?: string
}
/**
* Result from building an OAuth authorization URL.
*/
export interface AuthorizationResult {
/** Full OAuth URL to open in browser */
url: string
/** State for CSRF protection */
state: string
}
/**
@@ -64,70 +39,12 @@ export interface CallbackResult {
error?: string
}
/**
* Generate PKCE verifier and challenge pair.
* Uses @openauthjs/openauth for cryptographically secure generation.
*
* @returns PKCE pair with verifier, challenge, and method
*/
export async function generatePKCEPair(): Promise<PKCEPair> {
const pkce = await generatePKCE()
return {
verifier: pkce.verifier,
challenge: pkce.challenge,
method: pkce.method,
}
}
/**
* Encode OAuth state into a URL-safe base64 string.
*
* @param state - OAuth state object
* @returns Base64URL encoded state
*/
function encodeState(state: OAuthState): string {
const json = JSON.stringify(state)
return Buffer.from(json, "utf8").toString("base64url")
}
/**
* Decode OAuth state from a base64 string.
*
* @param encoded - Base64URL or Base64 encoded state
* @returns Decoded OAuth state
*/
export function decodeState(encoded: string): OAuthState {
// Handle both base64url and standard base64
const normalized = encoded.replace(/-/g, "+").replace(/_/g, "/")
const padded = normalized.padEnd(
normalized.length + ((4 - (normalized.length % 4)) % 4),
"="
)
const json = Buffer.from(padded, "base64").toString("utf8")
const parsed = JSON.parse(json)
if (typeof parsed.verifier !== "string") {
throw new Error("Missing PKCE verifier in state")
}
return {
verifier: parsed.verifier,
projectId:
typeof parsed.projectId === "string" ? parsed.projectId : undefined,
}
}
export async function buildAuthURL(
projectId?: string,
clientId: string = ANTIGRAVITY_CLIENT_ID,
port: number = ANTIGRAVITY_CALLBACK_PORT
): Promise<AuthorizationResult> {
const state = crypto.randomUUID().replace(/-/g, "")
const state: OAuthState = {
verifier: pkce.verifier,
projectId,
}
const redirectUri = `http://localhost:${port}/oauth-callback`
@@ -136,15 +53,13 @@ export async function buildAuthURL(
url.searchParams.set("redirect_uri", redirectUri) url.searchParams.set("redirect_uri", redirectUri)
url.searchParams.set("response_type", "code") url.searchParams.set("response_type", "code")
url.searchParams.set("scope", ANTIGRAVITY_SCOPES.join(" ")) url.searchParams.set("scope", ANTIGRAVITY_SCOPES.join(" "))
url.searchParams.set("state", encodeState(state)) url.searchParams.set("state", state)
url.searchParams.set("code_challenge", pkce.challenge)
url.searchParams.set("code_challenge_method", "S256")
url.searchParams.set("access_type", "offline") url.searchParams.set("access_type", "offline")
url.searchParams.set("prompt", "consent") url.searchParams.set("prompt", "consent")
return { return {
url: url.toString(), url: url.toString(),
verifier: pkce.verifier, state,
} }
} }
@@ -152,26 +67,23 @@ export async function buildAuthURL(
* Exchange authorization code for tokens.
*
* @param code - Authorization code from OAuth callback
* @param redirectUri - OAuth redirect URI
* @param clientId - Optional custom client ID (defaults to ANTIGRAVITY_CLIENT_ID)
* @param clientSecret - Optional custom client secret (defaults to ANTIGRAVITY_CLIENT_SECRET)
* @returns Token exchange result with access and refresh tokens
*/
export async function exchangeCode(
code: string,
redirectUri: string,
clientId: string = ANTIGRAVITY_CLIENT_ID,
clientSecret: string = ANTIGRAVITY_CLIENT_SECRET
port: number = ANTIGRAVITY_CALLBACK_PORT
): Promise<AntigravityTokenExchangeResult> {
const redirectUri = `http://localhost:${port}/oauth-callback`
const params = new URLSearchParams({
client_id: clientId,
client_secret: clientSecret,
code,
grant_type: "authorization_code",
redirect_uri: redirectUri,
code_verifier: verifier,
})
const response = await fetch(GOOGLE_TOKEN_URL, {
@@ -236,6 +148,7 @@ export async function fetchUserInfo(
export interface CallbackServerHandle {
port: number
redirectUri: string
waitForCallback: () => Promise<CallbackResult>
close: () => void
}
@@ -259,9 +172,7 @@ export function startCallbackServer(
}
}
const fetchHandler = (request: Request): Response => {
port: 0,
fetch(request: Request): Response {
const url = new URL(request.url)
if (url.pathname === "/oauth-callback") {
@@ -292,10 +203,22 @@ export function startCallbackServer(
}
return new Response("Not Found", { status: 404 })
}
try {
server = Bun.serve({
port: ANTIGRAVITY_CALLBACK_PORT,
fetch: fetchHandler,
})
} catch (error) {
server = Bun.serve({
port: 0,
fetch: fetchHandler,
})
}
const actualPort = server.port as number
const redirectUri = `http://localhost:${actualPort}/oauth-callback`
const waitForCallback = (): Promise<CallbackResult> => {
return new Promise((resolve, reject) => {
@@ -311,6 +234,7 @@ export function startCallbackServer(
return {
port: actualPort,
redirectUri,
waitForCallback,
close: cleanup,
}
@@ -324,7 +248,7 @@ export async function performOAuthFlow(
): Promise<{
tokens: AntigravityTokenExchangeResult
userInfo: AntigravityUserInfo
state: string
}> {
const serverHandle = startCallbackServer()
@@ -345,15 +269,15 @@ export async function performOAuthFlow(
throw new Error("No authorization code received") throw new Error("No authorization code received")
} }
const state = decodeState(callback.state) if (callback.state !== auth.state) {
if (state.verifier !== auth.verifier) { throw new Error("State mismatch - possible CSRF attack")
throw new Error("PKCE verifier mismatch - possible CSRF attack")
} }
const tokens = await exchangeCode(callback.code, auth.verifier, clientId, clientSecret, serverHandle.port) const redirectUri = `http://localhost:${serverHandle.port}/oauth-callback`
const tokens = await exchangeCode(callback.code, redirectUri, clientId, clientSecret)
const userInfo = await fetchUserInfo(tokens.access_token) const userInfo = await fetchUserInfo(tokens.access_token)
return { tokens, userInfo, verifier: auth.verifier } return { tokens, userInfo, state: auth.state }
} catch (err) { } catch (err) {
serverHandle.close() serverHandle.close()
throw err throw err

View File

@@ -33,7 +33,6 @@ import {
exchangeCode,
startCallbackServer,
fetchUserInfo,
decodeState,
} from "./oauth"
import { createAntigravityFetch } from "./fetch"
import { fetchProjectContext } from "./project"
@@ -248,7 +247,7 @@ export async function createGoogleAntigravityAuthPlugin({
*/
authorize: async (): Promise<AuthOuathResult> => {
const serverHandle = startCallbackServer()
const { url, state: expectedState } = await buildAuthURL(undefined, cachedClientId, serverHandle.port)
const browserOpened = await openBrowserURL(url)
@@ -277,15 +276,15 @@ export async function createGoogleAntigravityAuthPlugin({
return { type: "failed" as const } return { type: "failed" as const }
} }
const state = decodeState(result.state) if (result.state !== expectedState) {
if (state.verifier !== verifier) {
if (process.env.ANTIGRAVITY_DEBUG === "1") { if (process.env.ANTIGRAVITY_DEBUG === "1") {
console.error("[antigravity-plugin] PKCE verifier mismatch") console.error("[antigravity-plugin] State mismatch - possible CSRF attack")
} }
return { type: "failed" as const } return { type: "failed" as const }
} }
const tokens = await exchangeCode(result.code, verifier, cachedClientId, cachedClientSecret, serverHandle.port) const redirectUri = `http://localhost:${serverHandle.port}/oauth-callback`
const tokens = await exchangeCode(result.code, redirectUri, cachedClientId, cachedClientSecret)
if (!tokens.refresh_token) { if (!tokens.refresh_token) {
serverHandle.close() serverHandle.close()
@@ -343,7 +342,7 @@ export async function createGoogleAntigravityAuthPlugin({
if (!addAnother) break
const additionalServerHandle = startCallbackServer()
const { url: additionalUrl, state: expectedAdditionalState } = await buildAuthURL(
undefined,
cachedClientId,
additionalServerHandle.port
@@ -373,24 +372,23 @@ export async function createGoogleAntigravityAuthPlugin({
continue
}
if (additionalResult.state !== expectedAdditionalState) {
if (additionalState.verifier !== additionalVerifier) {
additionalServerHandle.close()
await client.tui.showToast({
body: {
message: "State mismatch, skipping...",
variant: "warning",
},
})
continue
}
const additionalRedirectUri = `http://localhost:${additionalServerHandle.port}/oauth-callback`
const additionalTokens = await exchangeCode(
additionalResult.code,
additionalRedirectUri,
cachedClientId,
cachedClientSecret
)
if (!additionalTokens.refresh_token) {

View File

@@ -0,0 +1,78 @@
import { describe, it, expect } from "bun:test"
import { isTokenExpired } from "./token"
import type { AntigravityTokens } from "./types"
describe("Token Expiry with 60-second Buffer", () => {
const createToken = (expiresInSeconds: number): AntigravityTokens => ({
type: "antigravity",
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: expiresInSeconds,
timestamp: Date.now(),
})
it("should NOT be expired if token expires in 2 minutes", () => {
// #given
const twoMinutes = 2 * 60
const token = createToken(twoMinutes)
// #when
const expired = isTokenExpired(token)
// #then
expect(expired).toBe(false)
})
it("should be expired if token expires in 30 seconds", () => {
// #given
const thirtySeconds = 30
const token = createToken(thirtySeconds)
// #when
const expired = isTokenExpired(token)
// #then
expect(expired).toBe(true)
})
it("should be expired at exactly 60 seconds (boundary)", () => {
// #given
const sixtySeconds = 60
const token = createToken(sixtySeconds)
// #when
const expired = isTokenExpired(token)
// #then - at boundary, should trigger refresh
expect(expired).toBe(true)
})
it("should be expired if token already expired", () => {
// #given
const alreadyExpired: AntigravityTokens = {
type: "antigravity",
access_token: "test-access",
refresh_token: "test-refresh",
expires_in: 3600,
timestamp: Date.now() - 4000 * 1000,
}
// #when
const expired = isTokenExpired(alreadyExpired)
// #then
expect(expired).toBe(true)
})
it("should NOT be expired if token has plenty of time", () => {
// #given
const twoHours = 2 * 60 * 60
const token = createToken(twoHours)
// #when
const expired = isTokenExpired(token)
// #then
expect(expired).toBe(false)
})
})
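These cases fix the buffer semantics: expiry is measured from `timestamp + expires_in`, and a token inside the refresh buffer, including the exact boundary, counts as expired. A sketch of an `isTokenExpired` shape that satisfies them (the real implementation lives in `./token`; the buffer constant is assumed to hold the 60 seconds these tests rely on):

```typescript
// Sketch only - spells out the buffer math the tests above assert.
import { ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS } from "./constants"
import type { AntigravityTokens } from "./types"

function isTokenExpiredSketch(tokens: AntigravityTokens): boolean {
  // Absolute expiry: issuance timestamp (ms) plus lifetime (s -> ms).
  const expiresAt = tokens.timestamp + tokens.expires_in * 1000
  // Expired once we are within the refresh buffer; ">=" makes the exact
  // 60-second boundary case count as expired, as the tests expect.
  return Date.now() >= expiresAt - ANTIGRAVITY_TOKEN_REFRESH_BUFFER_MS
}
```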

View File

@@ -310,6 +310,15 @@ export function generateOmoConfig(installConfig: InstallConfig): Record<string,
config.agents = agents
}
// Categories: override model for Antigravity auth (gemini-3-pro-preview → gemini-3-pro-high)
if (installConfig.hasGemini) {
config.categories = {
"visual-engineering": { model: "google/gemini-3-pro-high" },
artistry: { model: "google/gemini-3-pro-high" },
writing: { model: "google/gemini-3-flash-high" },
}
}
return config
}

View File

@@ -1,5 +1,5 @@
import { describe, expect, test } from "bun:test"
import { AgentOverrideConfigSchema, BuiltinCategoryNameSchema, OhMyOpenCodeConfigSchema } from "./schema"
describe("disabled_mcps schema", () => {
test("should accept built-in MCP names", () => {
@@ -134,3 +134,184 @@ describe("disabled_mcps schema", () => {
}
})
})
describe("AgentOverrideConfigSchema", () => {
describe("category field", () => {
test("accepts category as optional string", () => {
// #given
const config = { category: "visual-engineering" }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.category).toBe("visual-engineering")
}
})
test("accepts config without category", () => {
// #given
const config = { temperature: 0.5 }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
})
test("rejects non-string category", () => {
// #given
const config = { category: 123 }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(false)
})
})
describe("skills field", () => {
test("accepts skills as optional string array", () => {
// #given
const config = { skills: ["frontend-ui-ux", "code-reviewer"] }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.skills).toEqual(["frontend-ui-ux", "code-reviewer"])
}
})
test("accepts empty skills array", () => {
// #given
const config = { skills: [] }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.skills).toEqual([])
}
})
test("accepts config without skills", () => {
// #given
const config = { temperature: 0.5 }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
})
test("rejects non-array skills", () => {
// #given
const config = { skills: "frontend-ui-ux" }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(false)
})
})
describe("backward compatibility", () => {
test("still accepts model field (deprecated)", () => {
// #given
const config = { model: "openai/gpt-5.2" }
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.model).toBe("openai/gpt-5.2")
}
})
test("accepts both model and category (deprecated usage)", () => {
// #given - category should take precedence at runtime, but both should validate
const config = {
model: "openai/gpt-5.2",
category: "ultrabrain"
}
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.model).toBe("openai/gpt-5.2")
expect(result.data.category).toBe("ultrabrain")
}
})
})
describe("combined fields", () => {
test("accepts category with skills", () => {
// #given
const config = {
category: "visual-engineering",
skills: ["frontend-ui-ux"]
}
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.category).toBe("visual-engineering")
expect(result.data.skills).toEqual(["frontend-ui-ux"])
}
})
test("accepts category with skills and other fields", () => {
// #given
const config = {
category: "ultrabrain",
skills: ["code-reviewer"],
temperature: 0.3,
prompt_append: "Extra instructions"
}
// #when
const result = AgentOverrideConfigSchema.safeParse(config)
// #then
expect(result.success).toBe(true)
if (result.success) {
expect(result.data.category).toBe("ultrabrain")
expect(result.data.skills).toEqual(["code-reviewer"])
expect(result.data.temperature).toBe(0.3)
expect(result.data.prompt_append).toBe("Extra instructions")
}
})
})
})
describe("BuiltinCategoryNameSchema", () => {
test("accepts all builtin category names", () => {
// #given
const categories = ["visual-engineering", "ultrabrain", "artistry", "quick", "most-capable", "writing", "general"]
// #when / #then
for (const cat of categories) {
const result = BuiltinCategoryNameSchema.safeParse(cat)
expect(result.success).toBe(true)
}
})
})

View File

@@ -24,10 +24,13 @@ export const BuiltinAgentNameSchema = z.enum([
"frontend-ui-ux-engineer", "frontend-ui-ux-engineer",
"document-writer", "document-writer",
"multimodal-looker", "multimodal-looker",
"Metis (Plan Consultant)",
]) ])
export const BuiltinSkillNameSchema = z.enum([ export const BuiltinSkillNameSchema = z.enum([
"playwright", "playwright",
"frontend-ui-ux",
"git-master",
]) ])
export const OverridableAgentNameSchema = z.enum([ export const OverridableAgentNameSchema = z.enum([
@@ -35,7 +38,8 @@ export const OverridableAgentNameSchema = z.enum([
"plan", "plan",
"Sisyphus", "Sisyphus",
"OpenCode-Builder", "OpenCode-Builder",
"Planner-Sisyphus", "Prometheus (Planner)",
"Metis (Plan Consultant)",
"oracle", "oracle",
"librarian", "librarian",
"explore", "explore",
@@ -75,14 +79,23 @@ export const HookNameSchema = z.enum([
"claude-code-hooks", "claude-code-hooks",
"auto-slash-command", "auto-slash-command",
"edit-error-recovery", "edit-error-recovery",
"prometheus-md-only",
"start-work",
"sisyphus-orchestrator",
]) ])
export const BuiltinCommandNameSchema = z.enum([ export const BuiltinCommandNameSchema = z.enum([
"init-deep", "init-deep",
"start-work",
]) ])
export const AgentOverrideConfigSchema = z.object({ export const AgentOverrideConfigSchema = z.object({
/** @deprecated Use `category` instead. Model is inherited from category defaults. */
model: z.string().optional(), model: z.string().optional(),
/** Category name to inherit model and other settings from CategoryConfig */
category: z.string().optional(),
/** Skill names to inject into agent prompt */
skills: z.array(z.string()).optional(),
temperature: z.number().min(0).max(2).optional(), temperature: z.number().min(0).max(2).optional(),
top_p: z.number().min(0).max(1).optional(), top_p: z.number().min(0).max(1).optional(),
prompt: z.string().optional(), prompt: z.string().optional(),
@@ -103,7 +116,8 @@ export const AgentOverridesSchema = z.object({
plan: AgentOverrideConfigSchema.optional(),
Sisyphus: AgentOverrideConfigSchema.optional(),
"OpenCode-Builder": AgentOverrideConfigSchema.optional(),
"Prometheus (Planner)": AgentOverrideConfigSchema.optional(),
"Metis (Plan Consultant)": AgentOverrideConfigSchema.optional(),
oracle: AgentOverrideConfigSchema.optional(),
librarian: AgentOverrideConfigSchema.optional(),
explore: AgentOverrideConfigSchema.optional(),
@@ -129,6 +143,33 @@ export const SisyphusAgentConfigSchema = z.object({
replace_plan: z.boolean().optional(),
})
export const CategoryConfigSchema = z.object({
model: z.string(),
temperature: z.number().min(0).max(2).optional(),
top_p: z.number().min(0).max(1).optional(),
maxTokens: z.number().optional(),
thinking: z.object({
type: z.enum(["enabled", "disabled"]),
budgetTokens: z.number().optional(),
}).optional(),
reasoningEffort: z.enum(["low", "medium", "high"]).optional(),
textVerbosity: z.enum(["low", "medium", "high"]).optional(),
tools: z.record(z.string(), z.boolean()).optional(),
prompt_append: z.string().optional(),
})
export const BuiltinCategoryNameSchema = z.enum([
"visual-engineering",
"ultrabrain",
"artistry",
"quick",
"most-capable",
"writing",
"general",
])
export const CategoriesConfigSchema = z.record(z.string(), CategoryConfigSchema)
export const CommentCheckerConfigSchema = z.object({
/** Custom prompt to replace the default warning message. Use {{comments}} placeholder for detected comments XML. */
custom_prompt: z.string().optional(),
@@ -243,6 +284,12 @@ export const NotificationConfigSchema = z.object({
force_enable: z.boolean().optional(),
})
export const GitMasterConfigSchema = z.object({
/** Add "Ultraworked with Sisyphus" footer to commit messages (default: true) */
commit_footer: z.boolean().default(true),
/** Add "Co-authored-by: Sisyphus" trailer to commit messages (default: true) */
include_co_authored_by: z.boolean().default(true),
})
export const OhMyOpenCodeConfigSchema = z.object({
$schema: z.string().optional(),
disabled_mcps: z.array(AnyMcpNameSchema).optional(),
@@ -251,6 +298,7 @@ export const OhMyOpenCodeConfigSchema = z.object({
disabled_hooks: z.array(HookNameSchema).optional(),
disabled_commands: z.array(BuiltinCommandNameSchema).optional(),
agents: AgentOverridesSchema.optional(),
categories: CategoriesConfigSchema.optional(),
claude_code: ClaudeCodeConfigSchema.optional(),
google_auth: z.boolean().optional(),
sisyphus_agent: SisyphusAgentConfigSchema.optional(),
@@ -261,6 +309,7 @@ export const OhMyOpenCodeConfigSchema = z.object({
ralph_loop: RalphLoopConfigSchema.optional(),
background_task: BackgroundTaskConfigSchema.optional(),
notification: NotificationConfigSchema.optional(),
git_master: GitMasterConfigSchema.optional(),
})
export type OhMyOpenCodeConfig = z.infer<typeof OhMyOpenCodeConfigSchema>
@@ -279,5 +328,9 @@ export type SkillsConfig = z.infer<typeof SkillsConfigSchema>
export type SkillDefinition = z.infer<typeof SkillDefinitionSchema>
export type RalphLoopConfig = z.infer<typeof RalphLoopConfigSchema>
export type NotificationConfig = z.infer<typeof NotificationConfigSchema>
export type CategoryConfig = z.infer<typeof CategoryConfigSchema>
export type CategoriesConfig = z.infer<typeof CategoriesConfigSchema>
export type BuiltinCategoryName = z.infer<typeof BuiltinCategoryNameSchema>
export type GitMasterConfig = z.infer<typeof GitMasterConfigSchema>
export { AnyMcpNameSchema, type AnyMcpName, McpNameSchema, type McpName } from "../mcp/types"

View File

@@ -1,11 +1,12 @@
import { describe, test, expect, beforeEach } from "bun:test"
import type { BackgroundTask, ResumeInput } from "./types"
const TASK_TTL_MS = 30 * 60 * 1000
class MockBackgroundManager {
private tasks: Map<string, BackgroundTask> = new Map()
private notifications: Map<string, BackgroundTask[]> = new Map()
public resumeCalls: Array<{ sessionId: string; prompt: string }> = []
addTask(task: BackgroundTask): void {
this.tasks.set(task.id, task)
@@ -15,6 +16,15 @@ class MockBackgroundManager {
return this.tasks.get(id)
}
findBySession(sessionID: string): BackgroundTask | undefined {
for (const task of this.tasks.values()) {
if (task.sessionID === sessionID) {
return task
}
}
return undefined
}
getTasksByParentSession(sessionID: string): BackgroundTask[] {
const result: BackgroundTask[] = []
for (const task of this.tasks.values()) {
@@ -105,6 +115,29 @@ class MockBackgroundManager {
}
return count
}
resume(input: ResumeInput): BackgroundTask {
const existingTask = this.findBySession(input.sessionId)
if (!existingTask) {
throw new Error(`Task not found for session: ${input.sessionId}`)
}
this.resumeCalls.push({ sessionId: input.sessionId, prompt: input.prompt })
existingTask.status = "running"
existingTask.completedAt = undefined
existingTask.error = undefined
existingTask.parentSessionID = input.parentSessionID
existingTask.parentMessageID = input.parentMessageID
existingTask.parentModel = input.parentModel
existingTask.progress = {
toolCalls: existingTask.progress?.toolCalls ?? 0,
lastUpdate: new Date(),
}
return existingTask
}
}
function createMockTask(overrides: Partial<BackgroundTask> & { id: string; sessionID: string; parentSessionID: string }): BackgroundTask {
@@ -482,3 +515,162 @@ describe("BackgroundManager.pruneStaleTasksAndNotifications", () => {
expect(manager.getTask("task-fresh")).toBeDefined() expect(manager.getTask("task-fresh")).toBeDefined()
}) })
}) })
describe("BackgroundManager.resume", () => {
let manager: MockBackgroundManager
beforeEach(() => {
// #given
manager = new MockBackgroundManager()
})
test("should throw error when task not found", () => {
// #given - empty manager
// #when / #then
expect(() => manager.resume({
sessionId: "non-existent",
prompt: "continue",
parentSessionID: "session-new",
parentMessageID: "msg-new",
})).toThrow("Task not found for session: non-existent")
})
test("should resume existing task and reset state to running", () => {
// #given
const completedTask = createMockTask({
id: "task-a",
sessionID: "session-a",
parentSessionID: "session-parent",
status: "completed",
})
completedTask.completedAt = new Date()
completedTask.error = "previous error"
manager.addTask(completedTask)
// #when
const result = manager.resume({
sessionId: "session-a",
prompt: "continue the work",
parentSessionID: "session-new-parent",
parentMessageID: "msg-new",
})
// #then
expect(result.status).toBe("running")
expect(result.completedAt).toBeUndefined()
expect(result.error).toBeUndefined()
expect(result.parentSessionID).toBe("session-new-parent")
expect(result.parentMessageID).toBe("msg-new")
})
test("should preserve task identity while updating parent context", () => {
// #given
const existingTask = createMockTask({
id: "task-a",
sessionID: "session-a",
parentSessionID: "old-parent",
description: "original description",
agent: "explore",
})
manager.addTask(existingTask)
// #when
const result = manager.resume({
sessionId: "session-a",
prompt: "new prompt",
parentSessionID: "new-parent",
parentMessageID: "new-msg",
parentModel: { providerID: "anthropic", modelID: "claude-opus" },
})
// #then
expect(result.id).toBe("task-a")
expect(result.sessionID).toBe("session-a")
expect(result.description).toBe("original description")
expect(result.agent).toBe("explore")
expect(result.parentModel).toEqual({ providerID: "anthropic", modelID: "claude-opus" })
})
test("should track resume calls with prompt", () => {
// #given
const task = createMockTask({
id: "task-a",
sessionID: "session-a",
parentSessionID: "session-parent",
})
manager.addTask(task)
// #when
manager.resume({
sessionId: "session-a",
prompt: "continue with additional context",
parentSessionID: "session-new",
parentMessageID: "msg-new",
})
// #then
expect(manager.resumeCalls).toHaveLength(1)
expect(manager.resumeCalls[0]).toEqual({
sessionId: "session-a",
prompt: "continue with additional context",
})
})
test("should preserve existing tool call count in progress", () => {
// #given
const taskWithProgress = createMockTask({
id: "task-a",
sessionID: "session-a",
parentSessionID: "session-parent",
})
taskWithProgress.progress = {
toolCalls: 42,
lastTool: "read",
lastUpdate: new Date(),
}
manager.addTask(taskWithProgress)
// #when
const result = manager.resume({
sessionId: "session-a",
prompt: "continue",
parentSessionID: "session-new",
parentMessageID: "msg-new",
})
// #then
expect(result.progress?.toolCalls).toBe(42)
})
})
describe("LaunchInput.skillContent", () => {
test("skillContent should be optional in LaunchInput type", () => {
// #given
const input: import("./types").LaunchInput = {
description: "test",
prompt: "test prompt",
agent: "explore",
parentSessionID: "parent-session",
parentMessageID: "parent-msg",
}
// #when / #then - should compile without skillContent
expect(input.skillContent).toBeUndefined()
})
test("skillContent can be provided in LaunchInput", () => {
// #given
const input: import("./types").LaunchInput = {
description: "test",
prompt: "test prompt",
agent: "explore",
parentSessionID: "parent-session",
parentMessageID: "parent-msg",
skillContent: "You are a playwright expert",
}
// #when / #then
expect(input.skillContent).toBe("You are a playwright expert")
})
})

View File

@@ -1,18 +1,16 @@
import { existsSync, readdirSync } from "node:fs"
import { join } from "node:path"
import type { PluginInput } from "@opencode-ai/plugin"
import type {
BackgroundTask,
LaunchInput,
ResumeInput,
} from "./types"
import { log } from "../../shared/logger"
import { ConcurrencyManager } from "./concurrency"
import type { BackgroundTaskConfig } from "../../config/schema"
import {
findNearestMessageWithFields,
MESSAGE_STORAGE,
} from "../hook-message-injector"
import { subagentSessions } from "../claude-code-session-state"
import { getTaskToastManager } from "../task-toast-manager"
const TASK_TTL_MS = 30 * 60 * 1000
@@ -42,20 +40,6 @@ interface Todo {
id: string
}
function getMessageDir(sessionID: string): string | null {
if (!existsSync(MESSAGE_STORAGE)) return null
const directPath = join(MESSAGE_STORAGE, sessionID)
if (existsSync(directPath)) return directPath
for (const dir of readdirSync(MESSAGE_STORAGE)) {
const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
if (existsSync(sessionPath)) return sessionPath
}
return null
}
export class BackgroundManager {
private tasks: Map<string, BackgroundTask>
private notifications: Map<string, BackgroundTask[]>
@@ -77,9 +61,9 @@ export class BackgroundManager {
throw new Error("Agent parameter is required") throw new Error("Agent parameter is required")
} }
const model = input.agent const concurrencyKey = input.agent
await this.concurrencyManager.acquire(model) await this.concurrencyManager.acquire(concurrencyKey)
const createResult = await this.client.session.create({ const createResult = await this.client.session.create({
body: { body: {
@@ -87,12 +71,12 @@ export class BackgroundManager {
title: `Background: ${input.description}`,
},
}).catch((error) => {
this.concurrencyManager.release(concurrencyKey)
throw error
})
if (createResult.error) {
this.concurrencyManager.release(concurrencyKey)
throw new Error(`Failed to create background session: ${createResult.error}`)
}
@@ -114,7 +98,9 @@ export class BackgroundManager {
lastUpdate: new Date(),
},
parentModel: input.parentModel,
parentAgent: input.parentAgent,
model: input.model,
concurrencyKey,
}
this.tasks.set(task.id, task)
@@ -122,13 +108,24 @@ export class BackgroundManager {
log("[background-agent] Launching task:", { taskId: task.id, sessionID, agent: input.agent }) log("[background-agent] Launching task:", { taskId: task.id, sessionID, agent: input.agent })
const toastManager = getTaskToastManager()
if (toastManager) {
toastManager.addTask({
id: task.id,
description: input.description,
agent: input.agent,
isBackground: true,
skills: input.skills,
})
}
this.client.session.promptAsync({
path: { id: sessionID },
body: {
agent: input.agent,
system: input.skillContent,
tools: {
task: false,
background_task: false,
call_omo_agent: false,
},
parts: [{ type: "text", text: input.prompt }],
@@ -145,8 +142,8 @@ export class BackgroundManager {
existingTask.error = errorMessage
}
existingTask.completedAt = new Date()
if (existingTask.concurrencyKey) {
this.concurrencyManager.release(existingTask.concurrencyKey)
}
this.markForNotification(existingTask)
this.notifyParentSession(existingTask)
@@ -192,6 +189,99 @@ export class BackgroundManager {
return undefined
}
/**
* Register an external task (e.g., from sisyphus_task) for notification tracking.
* This allows tasks created by external tools to receive the same toast/prompt notifications.
*/
registerExternalTask(input: {
taskId: string
sessionID: string
parentSessionID: string
description: string
agent?: string
}): BackgroundTask {
const task: BackgroundTask = {
id: input.taskId,
sessionID: input.sessionID,
parentSessionID: input.parentSessionID,
parentMessageID: "",
description: input.description,
prompt: "",
agent: input.agent || "sisyphus_task",
status: "running",
startedAt: new Date(),
progress: {
toolCalls: 0,
lastUpdate: new Date(),
},
}
this.tasks.set(task.id, task)
subagentSessions.add(input.sessionID)
this.startPolling()
log("[background-agent] Registered external task:", { taskId: task.id, sessionID: input.sessionID })
return task
}
async resume(input: ResumeInput): Promise<BackgroundTask> {
const existingTask = this.findBySession(input.sessionId)
if (!existingTask) {
throw new Error(`Task not found for session: ${input.sessionId}`)
}
existingTask.status = "running"
existingTask.completedAt = undefined
existingTask.error = undefined
existingTask.parentSessionID = input.parentSessionID
existingTask.parentMessageID = input.parentMessageID
existingTask.parentModel = input.parentModel
existingTask.parentAgent = input.parentAgent
existingTask.progress = {
toolCalls: existingTask.progress?.toolCalls ?? 0,
lastUpdate: new Date(),
}
this.startPolling()
subagentSessions.add(existingTask.sessionID)
const toastManager = getTaskToastManager()
if (toastManager) {
toastManager.addTask({
id: existingTask.id,
description: existingTask.description,
agent: existingTask.agent,
isBackground: true,
})
}
log("[background-agent] Resuming task:", { taskId: existingTask.id, sessionID: existingTask.sessionID })
this.client.session.promptAsync({
path: { id: existingTask.sessionID },
body: {
agent: existingTask.agent,
tools: {
task: false,
call_omo_agent: false,
},
parts: [{ type: "text", text: input.prompt }],
},
}).catch((error) => {
log("[background-agent] resume promptAsync error:", error)
existingTask.status = "error"
const errorMessage = error instanceof Error ? error.message : String(error)
existingTask.error = errorMessage
existingTask.completedAt = new Date()
this.markForNotification(existingTask)
this.notifyParentSession(existingTask)
})
return existingTask
}
private async checkSessionTodos(sessionID: string): Promise<boolean> {
try {
const response = await this.client.session.todo({
@@ -269,8 +359,8 @@ export class BackgroundManager {
task.error = "Session deleted"
}
- if (task.model) {
-   this.concurrencyManager.release(task.model)
+ if (task.concurrencyKey) {
+   this.concurrencyManager.release(task.concurrencyKey)
}
this.tasks.delete(task.id)
this.clearNotificationsForTask(task.id)
@@ -330,17 +420,13 @@ export class BackgroundManager {
log("[background-agent] notifyParentSession called for task:", task.id) log("[background-agent] notifyParentSession called for task:", task.id)
// eslint-disable-next-line @typescript-eslint/no-explicit-any const toastManager = getTaskToastManager()
const tuiClient = this.client as any if (toastManager) {
if (tuiClient.tui?.showToast) { toastManager.showCompletionToast({
tuiClient.tui.showToast({ id: task.id,
body: { description: task.description,
title: "Background Task Completed", duration,
message: `Task "${task.description}" finished in ${duration}.`, })
variant: "success",
duration: 5000,
},
}).catch(() => {})
} }
const message = `[BACKGROUND TASK COMPLETED] Task "${task.description}" finished in ${duration}. Use background_output with task_id="${task.id}" to get results.` const message = `[BACKGROUND TASK COMPLETED] Task "${task.description}" finished in ${duration}. Use background_output with task_id="${task.id}" to get results.`
@@ -349,23 +435,21 @@ export class BackgroundManager {
const taskId = task.id
setTimeout(async () => {
- if (task.model) {
-   this.concurrencyManager.release(task.model)
+ if (task.concurrencyKey) {
+   this.concurrencyManager.release(task.concurrencyKey)
}
try {
- const messageDir = getMessageDir(task.parentSessionID)
- const prevMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
- const modelContext = task.parentModel ?? prevMessage?.model
- const modelField = modelContext?.providerID && modelContext?.modelID
-   ? { providerID: modelContext.providerID, modelID: modelContext.modelID }
-   : undefined
+ // Use only parentModel/parentAgent - don't fallback to prevMessage
+ // This prevents accidentally changing parent session's model/agent
+ const modelField = task.parentModel?.providerID && task.parentModel?.modelID
+   ? { providerID: task.parentModel.providerID, modelID: task.parentModel.modelID }
+   : undefined
await this.client.session.prompt({
path: { id: task.parentSessionID },
body: {
- agent: prevMessage?.agent,
+ agent: task.parentAgent,
model: modelField,
parts: [{ type: "text", text: message }],
},
@@ -413,8 +497,8 @@ export class BackgroundManager {
task.status = "error"
task.error = "Task timed out after 30 minutes"
task.completedAt = new Date()
- if (task.model) {
-   this.concurrencyManager.release(task.model)
+ if (task.concurrencyKey) {
+   this.concurrencyManager.release(task.concurrencyKey)
}
this.clearNotificationsForTask(taskId)
this.tasks.delete(taskId)


@@ -27,7 +27,11 @@ export interface BackgroundTask {
error?: string
progress?: TaskProgress
parentModel?: { providerID: string; modelID: string }
- model?: string
+ model?: { providerID: string; modelID: string }
+ /** Agent name used for concurrency tracking */
+ concurrencyKey?: string
+ /** Parent session's agent name for notification */
+ parentAgent?: string
}
export interface LaunchInput {
@@ -37,4 +41,17 @@ export interface LaunchInput {
parentSessionID: string
parentMessageID: string
parentModel?: { providerID: string; modelID: string }
+ parentAgent?: string
+ model?: { providerID: string; modelID: string }
+ skills?: string[]
+ skillContent?: string
+ }
+ export interface ResumeInput {
+   sessionId: string
+   prompt: string
+   parentSessionID: string
+   parentMessageID: string
+   parentModel?: { providerID: string; modelID: string }
+   parentAgent?: string
}


@@ -0,0 +1,13 @@
/**
* Boulder State Constants
*/
export const BOULDER_DIR = ".sisyphus"
export const BOULDER_FILE = "boulder.json"
export const BOULDER_STATE_PATH = `${BOULDER_DIR}/${BOULDER_FILE}`
export const NOTEPAD_DIR = "notepads"
export const NOTEPAD_BASE_PATH = `${BOULDER_DIR}/${NOTEPAD_DIR}`
/** Prometheus plan directory pattern */
export const PROMETHEUS_PLANS_DIR = ".sisyphus/plans"


@@ -0,0 +1,3 @@
export * from "./types"
export * from "./constants"
export * from "./storage"


@@ -0,0 +1,250 @@
import { describe, expect, test, beforeEach, afterEach } from "bun:test"
import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs"
import { join } from "node:path"
import { tmpdir } from "node:os"
import {
readBoulderState,
writeBoulderState,
appendSessionId,
clearBoulderState,
getPlanProgress,
getPlanName,
createBoulderState,
findPrometheusPlans,
} from "./storage"
import type { BoulderState } from "./types"
describe("boulder-state", () => {
const TEST_DIR = join(tmpdir(), "boulder-state-test-" + Date.now())
const SISYPHUS_DIR = join(TEST_DIR, ".sisyphus")
beforeEach(() => {
if (!existsSync(TEST_DIR)) {
mkdirSync(TEST_DIR, { recursive: true })
}
if (!existsSync(SISYPHUS_DIR)) {
mkdirSync(SISYPHUS_DIR, { recursive: true })
}
clearBoulderState(TEST_DIR)
})
afterEach(() => {
if (existsSync(TEST_DIR)) {
rmSync(TEST_DIR, { recursive: true, force: true })
}
})
describe("readBoulderState", () => {
test("should return null when no boulder.json exists", () => {
// #given - no boulder.json file
// #when
const result = readBoulderState(TEST_DIR)
// #then
expect(result).toBeNull()
})
test("should read valid boulder state", () => {
// #given - valid boulder.json
const state: BoulderState = {
active_plan: "/path/to/plan.md",
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1", "session-2"],
plan_name: "my-plan",
}
writeBoulderState(TEST_DIR, state)
// #when
const result = readBoulderState(TEST_DIR)
// #then
expect(result).not.toBeNull()
expect(result?.active_plan).toBe("/path/to/plan.md")
expect(result?.session_ids).toEqual(["session-1", "session-2"])
expect(result?.plan_name).toBe("my-plan")
})
})
describe("writeBoulderState", () => {
test("should write state and create .sisyphus directory if needed", () => {
// #given - state to write
const state: BoulderState = {
active_plan: "/test/plan.md",
started_at: "2026-01-02T12:00:00Z",
session_ids: ["ses-123"],
plan_name: "test-plan",
}
// #when
const success = writeBoulderState(TEST_DIR, state)
const readBack = readBoulderState(TEST_DIR)
// #then
expect(success).toBe(true)
expect(readBack).not.toBeNull()
expect(readBack?.active_plan).toBe("/test/plan.md")
})
})
describe("appendSessionId", () => {
test("should append new session id to existing state", () => {
// #given - existing state with one session
const state: BoulderState = {
active_plan: "/plan.md",
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "plan",
}
writeBoulderState(TEST_DIR, state)
// #when
const result = appendSessionId(TEST_DIR, "session-2")
// #then
expect(result).not.toBeNull()
expect(result?.session_ids).toEqual(["session-1", "session-2"])
})
test("should not duplicate existing session id", () => {
// #given - state with session-1 already
const state: BoulderState = {
active_plan: "/plan.md",
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "plan",
}
writeBoulderState(TEST_DIR, state)
// #when
appendSessionId(TEST_DIR, "session-1")
const result = readBoulderState(TEST_DIR)
// #then
expect(result?.session_ids).toEqual(["session-1"])
})
test("should return null when no state exists", () => {
// #given - no boulder.json
// #when
const result = appendSessionId(TEST_DIR, "new-session")
// #then
expect(result).toBeNull()
})
})
describe("clearBoulderState", () => {
test("should remove boulder.json", () => {
// #given - existing state
const state: BoulderState = {
active_plan: "/plan.md",
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "plan",
}
writeBoulderState(TEST_DIR, state)
// #when
const success = clearBoulderState(TEST_DIR)
const result = readBoulderState(TEST_DIR)
// #then
expect(success).toBe(true)
expect(result).toBeNull()
})
test("should succeed even when no file exists", () => {
// #given - no boulder.json
// #when
const success = clearBoulderState(TEST_DIR)
// #then
expect(success).toBe(true)
})
})
describe("getPlanProgress", () => {
test("should count completed and uncompleted checkboxes", () => {
// #given - plan file with checkboxes
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, `# Plan
- [ ] Task 1
- [x] Task 2
- [ ] Task 3
- [X] Task 4
`)
// #when
const progress = getPlanProgress(planPath)
// #then
expect(progress.total).toBe(4)
expect(progress.completed).toBe(2)
expect(progress.isComplete).toBe(false)
})
test("should return isComplete true when all checked", () => {
// #given - all tasks completed
const planPath = join(TEST_DIR, "complete-plan.md")
writeFileSync(planPath, `# Plan
- [x] Task 1
- [X] Task 2
`)
// #when
const progress = getPlanProgress(planPath)
// #then
expect(progress.total).toBe(2)
expect(progress.completed).toBe(2)
expect(progress.isComplete).toBe(true)
})
test("should return isComplete true for empty plan", () => {
// #given - plan with no checkboxes
const planPath = join(TEST_DIR, "empty-plan.md")
writeFileSync(planPath, "# Plan\nNo tasks here")
// #when
const progress = getPlanProgress(planPath)
// #then
expect(progress.total).toBe(0)
expect(progress.isComplete).toBe(true)
})
test("should handle non-existent file", () => {
// #given - non-existent file
// #when
const progress = getPlanProgress("/non/existent/file.md")
// #then
expect(progress.total).toBe(0)
expect(progress.isComplete).toBe(true)
})
})
describe("getPlanName", () => {
test("should extract plan name from path", () => {
// #given
const path = "/home/user/.sisyphus/plans/project/my-feature.md"
// #when
const name = getPlanName(path)
// #then
expect(name).toBe("my-feature")
})
})
describe("createBoulderState", () => {
test("should create state with correct fields", () => {
// #given
const planPath = "/path/to/auth-refactor.md"
const sessionId = "ses-abc123"
// #when
const state = createBoulderState(planPath, sessionId)
// #then
expect(state.active_plan).toBe(planPath)
expect(state.session_ids).toEqual([sessionId])
expect(state.plan_name).toBe("auth-refactor")
expect(state.started_at).toBeDefined()
})
})
})


@@ -0,0 +1,150 @@
/**
* Boulder State Storage
*
* Handles reading/writing boulder.json for active plan tracking.
*/
import { existsSync, readFileSync, writeFileSync, mkdirSync, readdirSync } from "node:fs"
import { dirname, join, basename } from "node:path"
import type { BoulderState, PlanProgress } from "./types"
import { BOULDER_DIR, BOULDER_FILE, PROMETHEUS_PLANS_DIR } from "./constants"
export function getBoulderFilePath(directory: string): string {
return join(directory, BOULDER_DIR, BOULDER_FILE)
}
export function readBoulderState(directory: string): BoulderState | null {
const filePath = getBoulderFilePath(directory)
if (!existsSync(filePath)) {
return null
}
try {
const content = readFileSync(filePath, "utf-8")
return JSON.parse(content) as BoulderState
} catch {
return null
}
}
export function writeBoulderState(directory: string, state: BoulderState): boolean {
const filePath = getBoulderFilePath(directory)
try {
const dir = dirname(filePath)
if (!existsSync(dir)) {
mkdirSync(dir, { recursive: true })
}
writeFileSync(filePath, JSON.stringify(state, null, 2), "utf-8")
return true
} catch {
return false
}
}
export function appendSessionId(directory: string, sessionId: string): BoulderState | null {
const state = readBoulderState(directory)
if (!state) return null
if (!state.session_ids.includes(sessionId)) {
state.session_ids.push(sessionId)
if (writeBoulderState(directory, state)) {
return state
}
}
return state
}
export function clearBoulderState(directory: string): boolean {
const filePath = getBoulderFilePath(directory)
try {
if (existsSync(filePath)) {
const { unlinkSync } = require("node:fs")
unlinkSync(filePath)
}
return true
} catch {
return false
}
}
/**
* Find Prometheus plan files for this project.
* Prometheus stores plans at: {project}/.sisyphus/plans/{name}.md
*/
export function findPrometheusPlans(directory: string): string[] {
const plansDir = join(directory, PROMETHEUS_PLANS_DIR)
if (!existsSync(plansDir)) {
return []
}
try {
const files = readdirSync(plansDir)
return files
.filter((f) => f.endsWith(".md"))
.map((f) => join(plansDir, f))
.sort((a, b) => {
// Sort by modification time, newest first
const aStat = require("node:fs").statSync(a)
const bStat = require("node:fs").statSync(b)
return bStat.mtimeMs - aStat.mtimeMs
})
} catch {
return []
}
}
/**
* Parse a plan file and count checkbox progress.
*/
export function getPlanProgress(planPath: string): PlanProgress {
if (!existsSync(planPath)) {
return { total: 0, completed: 0, isComplete: true }
}
try {
const content = readFileSync(planPath, "utf-8")
// Match markdown checkboxes: - [ ] or - [x] or - [X]
const uncheckedMatches = content.match(/^[-*]\s*\[\s*\]/gm) || []
const checkedMatches = content.match(/^[-*]\s*\[[xX]\]/gm) || []
const total = uncheckedMatches.length + checkedMatches.length
const completed = checkedMatches.length
return {
total,
completed,
isComplete: total === 0 || completed === total,
}
} catch {
return { total: 0, completed: 0, isComplete: true }
}
}
/**
* Extract plan name from file path.
*/
export function getPlanName(planPath: string): string {
return basename(planPath, ".md")
}
/**
* Create a new boulder state for a plan.
*/
export function createBoulderState(
planPath: string,
sessionId: string
): BoulderState {
return {
active_plan: planPath,
started_at: new Date().toISOString(),
session_ids: [sessionId],
plan_name: getPlanName(planPath),
}
}


@@ -0,0 +1,26 @@
/**
* Boulder State Types
*
* Manages the active work plan state for Sisyphus orchestrator.
* Named after Sisyphus's boulder - the eternal task that must be rolled.
*/
export interface BoulderState {
/** Absolute path to the active plan file */
active_plan: string
/** ISO timestamp when work started */
started_at: string
/** Session IDs that have worked on this plan */
session_ids: string[]
/** Plan name derived from filename */
plan_name: string
}
export interface PlanProgress {
/** Total number of checkboxes */
total: number
/** Number of completed checkboxes */
completed: number
/** Whether all tasks are done */
isComplete: boolean
}


@@ -3,6 +3,7 @@ import type { BuiltinCommandName, BuiltinCommands } from "./types"
import { INIT_DEEP_TEMPLATE } from "./templates/init-deep"
import { RALPH_LOOP_TEMPLATE, CANCEL_RALPH_TEMPLATE } from "./templates/ralph-loop"
import { REFACTOR_TEMPLATE } from "./templates/refactor"
+ import { START_WORK_TEMPLATE } from "./templates/start-work"
const BUILTIN_COMMAND_DEFINITIONS: Record<BuiltinCommandName, Omit<CommandDefinition, "name">> = {
"init-deep": {
@@ -41,6 +42,23 @@ ${REFACTOR_TEMPLATE}
</command-instruction>`,
argumentHint: "<refactoring-target> [--scope=<file|module|project>] [--strategy=<safe|aggressive>]",
},
"start-work": {
description: "(builtin) Start Sisyphus work session from Prometheus plan",
agent: "orchestrator-sisyphus",
template: `<command-instruction>
${START_WORK_TEMPLATE}
</command-instruction>
<session-context>
Session ID: $SESSION_ID
Timestamp: $TIMESTAMP
</session-context>
<user-request>
$ARGUMENTS
</user-request>`,
argumentHint: "[plan-name]",
},
}
export function loadBuiltinCommands(


@@ -45,12 +45,12 @@ Don't wait—these run async while main session works.
\`\`\`
// Fire all at once, collect results later
- background_task(agent="explore", prompt="Project structure: PREDICT standard patterns for detected language → REPORT deviations only")
- background_task(agent="explore", prompt="Entry points: FIND main files → REPORT non-standard organization")
- background_task(agent="explore", prompt="Conventions: FIND config files (.eslintrc, pyproject.toml, .editorconfig) → REPORT project-specific rules")
- background_task(agent="explore", prompt="Anti-patterns: FIND 'DO NOT', 'NEVER', 'ALWAYS', 'DEPRECATED' comments → LIST forbidden patterns")
- background_task(agent="explore", prompt="Build/CI: FIND .github/workflows, Makefile → REPORT non-standard patterns")
- background_task(agent="explore", prompt="Test patterns: FIND test configs, test structure → REPORT unique conventions")
+ sisyphus_task(agent="explore", prompt="Project structure: PREDICT standard patterns for detected language → REPORT deviations only")
+ sisyphus_task(agent="explore", prompt="Entry points: FIND main files → REPORT non-standard organization")
+ sisyphus_task(agent="explore", prompt="Conventions: FIND config files (.eslintrc, pyproject.toml, .editorconfig) → REPORT project-specific rules")
+ sisyphus_task(agent="explore", prompt="Anti-patterns: FIND 'DO NOT', 'NEVER', 'ALWAYS', 'DEPRECATED' comments → LIST forbidden patterns")
+ sisyphus_task(agent="explore", prompt="Build/CI: FIND .github/workflows, Makefile → REPORT non-standard patterns")
+ sisyphus_task(agent="explore", prompt="Test patterns: FIND test configs, test structure → REPORT unique conventions")
\`\`\`
<dynamic-agents>
@@ -76,9 +76,9 @@ max_depth=$(find . -type d -not -path '*/node_modules/*' -not -path '*/.git/*' |
Example spawning:
\`\`\`
// 500 files, 50k lines, depth 6, 15 large files → spawn 5+5+2+1 = 13 additional agents
- background_task(agent="explore", prompt="Large file analysis: FIND files >500 lines, REPORT complexity hotspots")
- background_task(agent="explore", prompt="Deep modules at depth 4+: FIND hidden patterns, internal conventions")
- background_task(agent="explore", prompt="Cross-cutting concerns: FIND shared utilities across directories")
+ sisyphus_task(agent="explore", prompt="Large file analysis: FIND files >500 lines, REPORT complexity hotspots")
+ sisyphus_task(agent="explore", prompt="Deep modules at depth 4+: FIND hidden patterns, internal conventions")
+ sisyphus_task(agent="explore", prompt="Cross-cutting concerns: FIND shared utilities across directories")
// ... more based on calculation
\`\`\`
</dynamic-agents>
@@ -240,7 +240,7 @@ Launch document-writer agents for each location:
\`\`\`
for loc in AGENTS_LOCATIONS (except root):
- background_task(agent="document-writer", prompt=\\\`
+ sisyphus_task(agent="document-writer", prompt=\\\`
Generate AGENTS.md for: \${loc.path}
- Reason: \${loc.reason}
- 30-80 lines max


@@ -605,7 +605,7 @@ Use \`ast_grep_search\` and \`ast_grep_replace\` for structural transformations.
## Agents
- \`explore\`: Parallel codebase pattern discovery
- \`plan\`: Detailed refactoring plan generation
- - \`oracle\`: Consult for complex architectural decisions
+ - \`oracle\`: Read-only consultation for complex architectural decisions and debugging
- \`librarian\`: **Use proactively** when encountering deprecated methods or library migration tasks. Query official docs and OSS examples for modern replacements.
## Deprecated Code & Library Migration


@@ -0,0 +1,72 @@
export const START_WORK_TEMPLATE = `You are starting a Sisyphus work session.
## WHAT TO DO
1. **Find available plans**: Search for Prometheus-generated plan files at \`.sisyphus/plans/\`
2. **Check for active boulder state**: Read \`.sisyphus/boulder.json\` if it exists
3. **Decision logic**:
- If \`.sisyphus/boulder.json\` exists AND plan is NOT complete (has unchecked boxes):
- **APPEND** current session to session_ids
- Continue work on existing plan
- If no active plan OR plan is complete:
- List available plan files
- If ONE plan: auto-select it
- If MULTIPLE plans: show list with timestamps, ask user to select
4. **Create/Update boulder.json**:
\`\`\`json
{
"active_plan": "/absolute/path/to/plan.md",
"started_at": "ISO_TIMESTAMP",
"session_ids": ["session_id_1", "session_id_2"],
"plan_name": "plan-name"
}
\`\`\`
5. **Read the plan file** and start executing tasks according to Orchestrator Sisyphus workflow
## OUTPUT FORMAT
When listing plans for selection:
\`\`\`
📋 Available Work Plans
Current Time: {ISO timestamp}
Session ID: {current session id}
1. [plan-name-1.md] - Modified: {date} - Progress: 3/10 tasks
2. [plan-name-2.md] - Modified: {date} - Progress: 0/5 tasks
Which plan would you like to work on? (Enter number or plan name)
\`\`\`
When resuming existing work:
\`\`\`
🔄 Resuming Work Session
Active Plan: {plan-name}
Progress: {completed}/{total} tasks
Sessions: {count} (appending current session)
Reading plan and continuing from last incomplete task...
\`\`\`
When auto-selecting single plan:
\`\`\`
🚀 Starting Work Session
Plan: {plan-name}
Session ID: {session_id}
Started: {timestamp}
Reading plan and beginning execution...
\`\`\`
## CRITICAL
- The session_id is injected by the hook - use it directly
- Always update boulder.json BEFORE starting work
- Read the FULL plan file before delegating any tasks
- Follow Orchestrator Sisyphus delegation protocols (7-section format)`
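The template above describes the selection flow only in prose. As a rough illustration, the boulder-state helpers introduced elsewhere in this PR (readBoulderState, getPlanProgress, findPrometheusPlans, createBoulderState, appendSessionId, writeBoulderState) could be combined as in the sketch below. This is a minimal sketch, not the actual hook implementation; the import path and the `resolveActivePlan` name are assumptions, and the real start-work flow asks the user to choose when several plans exist rather than returning null.

```ts
import {
  readBoulderState,
  writeBoulderState,
  appendSessionId,
  createBoulderState,
  getPlanProgress,
  findPrometheusPlans,
} from "../features/boulder-state" // hypothetical import path

// Decide which plan the session should work on, mirroring the template's decision logic.
export function resolveActivePlan(directory: string, sessionId: string): string | null {
  const state = readBoulderState(directory)
  if (state && !getPlanProgress(state.active_plan).isComplete) {
    // Active boulder with unchecked boxes: append this session and continue.
    appendSessionId(directory, sessionId)
    return state.active_plan
  }
  const plans = findPrometheusPlans(directory) // sorted newest first
  if (plans.length === 0) return null
  if (plans.length === 1) {
    // Single plan: auto-select it and record a fresh boulder state.
    writeBoulderState(directory, createBoulderState(plans[0], sessionId))
    return plans[0]
  }
  // Multiple plans: the agent is expected to list them and ask the user to pick.
  return null
}
```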


@@ -1,6 +1,6 @@
import type { CommandDefinition } from "../claude-code-command-loader"
- export type BuiltinCommandName = "init-deep" | "ralph-loop" | "cancel-ralph" | "refactor"
+ export type BuiltinCommandName = "init-deep" | "ralph-loop" | "cancel-ralph" | "refactor" | "start-work"
export interface BuiltinCommandConfig {
disabled_commands?: BuiltinCommandName[]


@@ -0,0 +1,78 @@
---
name: frontend-ui-ux
description: Designer-turned-developer who crafts stunning UI/UX even without design mockups
---
# Role: Designer-Turned-Developer
You are a designer who learned to code. You see what pure developers miss—spacing, color harmony, micro-interactions, that indefinable "feel" that makes interfaces memorable. Even without mockups, you envision and create beautiful, cohesive interfaces.
**Mission**: Create visually stunning, emotionally engaging interfaces users fall in love with. Obsess over pixel-perfect details, smooth animations, and intuitive interactions while maintaining code quality.
---
# Work Principles
1. **Complete what's asked** — Execute the exact task. No scope creep. Work until it works. Never mark work complete without proper verification.
2. **Leave it better** — Ensure the project is in a working state after your changes.
3. **Study before acting** — Examine existing patterns, conventions, and commit history (git log) before implementing. Understand why code is structured the way it is.
4. **Blend seamlessly** — Match existing code patterns. Your code should look like the team wrote it.
5. **Be transparent** — Announce each step. Explain reasoning. Report both successes and failures.
---
# Design Process
Before coding, commit to a **BOLD aesthetic direction**:
1. **Purpose**: What problem does this solve? Who uses it?
2. **Tone**: Pick an extreme—brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian
3. **Constraints**: Technical requirements (framework, performance, accessibility)
4. **Differentiation**: What's the ONE thing someone will remember?
**Key**: Choose a clear direction and execute with precision. Intentionality > intensity.
Then implement working code (HTML/CSS/JS, React, Vue, Angular, etc.) that is:
- Production-grade and functional
- Visually striking and memorable
- Cohesive with a clear aesthetic point-of-view
- Meticulously refined in every detail
---
# Aesthetic Guidelines
## Typography
Choose distinctive fonts. **Avoid**: Arial, Inter, Roboto, system fonts, Space Grotesk. Pair a characterful display font with a refined body font.
## Color
Commit to a cohesive palette. Use CSS variables. Dominant colors with sharp accents outperform timid, evenly-distributed palettes. **Avoid**: purple gradients on white (AI slop).
## Motion
Focus on high-impact moments. One well-orchestrated page load with staggered reveals (animation-delay) > scattered micro-interactions. Use scroll-triggering and hover states that surprise. Prioritize CSS-only. Use Motion library for React when available.
## Spatial Composition
Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
## Visual Details
Create atmosphere and depth—gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, grain overlays. Never default to solid colors.
---
# Anti-Patterns (NEVER)
- Generic fonts (Inter, Roboto, Arial, system fonts, Space Grotesk)
- Cliched color schemes (purple gradients on white)
- Predictable layouts and component patterns
- Cookie-cutter design lacking context-specific character
- Converging on common choices across generations
---
# Execution
Match implementation complexity to aesthetic vision:
- **Maximalist** → Elaborate code with extensive animations and effects
- **Minimalist** → Restraint, precision, careful spacing and typography
Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. You are capable of extraordinary creative work—don't hold back.

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -207,7 +207,7 @@ describe("createContextInjectorMessagesTransformHook", () => {
],
})
- it("inserts synthetic message before last user message", async () => {
+ it("prepends context to last user message", async () => {
// #given
const hook = createContextInjectorMessagesTransformHook(collector)
const sessionID = "ses_transform1"
@@ -228,10 +228,8 @@ describe("createContextInjectorMessagesTransformHook", () => {
await hook["experimental.chat.messages.transform"]!({}, output)
// #then
- expect(output.messages.length).toBe(4)
- expect(output.messages[2].parts[0].text).toBe("Ultrawork context")
- expect(output.messages[2].parts[0].synthetic).toBe(true)
- expect(output.messages[3].parts[0].text).toBe("Second message")
+ expect(output.messages.length).toBe(3)
+ expect(output.messages[2].parts[0].text).toBe("Ultrawork context\n\n---\n\nSecond message")
})
it("does nothing when no pending context", async () => {


@@ -78,6 +78,9 @@ export function createContextInjectorMessagesTransformHook(
return {
"experimental.chat.messages.transform": async (_input, output) => {
const { messages } = output
+ log("[DEBUG] experimental.chat.messages.transform called", {
+   messageCount: messages.length,
+ })
if (messages.length === 0) {
return
}
@@ -91,16 +94,28 @@ export function createContextInjectorMessagesTransformHook(
}
if (lastUserMessageIndex === -1) {
+ log("[DEBUG] No user message found in messages")
return
}
const lastUserMessage = messages[lastUserMessageIndex]
const sessionID = (lastUserMessage.info as unknown as { sessionID?: string }).sessionID
+ log("[DEBUG] Extracted sessionID from lastUserMessage.info", {
+   sessionID,
+   infoKeys: Object.keys(lastUserMessage.info),
+   lastUserMessageInfo: JSON.stringify(lastUserMessage.info).slice(0, 200),
+ })
if (!sessionID) {
+ log("[DEBUG] sessionID is undefined or empty")
return
}
- if (!collector.hasPending(sessionID)) {
+ const hasPending = collector.hasPending(sessionID)
+ log("[DEBUG] Checking hasPending", {
+   sessionID,
+   hasPending,
+ })
+ if (!hasPending) {
return
}
@@ -109,47 +124,26 @@ export function createContextInjectorMessagesTransformHook(
return
}
- const refInfo = lastUserMessage.info as unknown as {
-   sessionID?: string
-   agent?: string
-   model?: { providerID?: string; modelID?: string }
-   path?: { cwd?: string; root?: string }
- }
- const syntheticMessageId = `synthetic_ctx_${Date.now()}`
- const syntheticPartId = `synthetic_ctx_part_${Date.now()}`
- const now = Date.now()
- const syntheticMessage: MessageWithParts = {
-   info: {
-     id: syntheticMessageId,
-     sessionID: sessionID,
-     role: "user",
-     time: { created: now },
-     agent: refInfo.agent ?? "Sisyphus",
-     model: refInfo.model ?? { providerID: "unknown", modelID: "unknown" },
-     path: refInfo.path ?? { cwd: "/", root: "/" },
-   } as unknown as Message,
-   parts: [
-     {
-       id: syntheticPartId,
-       sessionID: sessionID,
-       messageID: syntheticMessageId,
-       type: "text",
-       text: pending.merged,
-       synthetic: true,
-       time: { start: now, end: now },
-     } as Part,
-   ],
- }
- messages.splice(lastUserMessageIndex, 0, syntheticMessage)
- log("[context-injector] Injected synthetic message from collector", {
-   sessionID,
-   insertIndex: lastUserMessageIndex,
-   contextLength: pending.merged.length,
-   newMessageCount: messages.length,
- })
+ const textPartIndex = lastUserMessage.parts.findIndex(
+   (p) => p.type === "text" && (p as { text?: string }).text
+ )
+ if (textPartIndex === -1) {
+   log("[context-injector] No text part found in last user message, skipping injection", {
+     sessionID,
+     partsCount: lastUserMessage.parts.length,
+   })
+   return
+ }
+ const textPart = lastUserMessage.parts[textPartIndex] as { text?: string }
+ const originalText = textPart.text ?? ""
+ textPart.text = `${pending.merged}\n\n---\n\n${originalText}`
+ log("[context-injector] Prepended context to last user message", {
+   sessionID,
+   contextLength: pending.merged.length,
+   originalTextLength: originalText.length,
+ })
},
}


@@ -1,12 +1,12 @@
import { existsSync, mkdirSync, readFileSync, readdirSync, writeFileSync } from "node:fs"
import { join } from "node:path"
import { MESSAGE_STORAGE, PART_STORAGE } from "./constants"
- import type { MessageMeta, OriginalMessageContext, TextPart } from "./types"
+ import type { MessageMeta, OriginalMessageContext, TextPart, ToolPermission } from "./types"
export interface StoredMessage {
agent?: string
model?: { providerID?: string; modelID?: string }
- tools?: Record<string, boolean>
+ tools?: Record<string, ToolPermission>
}
export function findNearestMessageWithFields(messageDir: string): StoredMessage | null {
@@ -16,6 +16,7 @@ export function findNearestMessageWithFields(messageDir: string): StoredMessage
.sort()
.reverse()
+ // First pass: find message with ALL fields (ideal)
for (const file of files) {
try {
const content = readFileSync(join(messageDir, file), "utf-8")
@@ -27,6 +28,20 @@ export function findNearestMessageWithFields(messageDir: string): StoredMessage
continue
}
}
// Second pass: find message with ANY useful field (fallback)
// This ensures agent info isn't lost when model info is missing
for (const file of files) {
try {
const content = readFileSync(join(messageDir, file), "utf-8")
const msg = JSON.parse(content) as StoredMessage
if (msg.agent || (msg.model?.providerID && msg.model?.modelID)) {
return msg
}
} catch {
continue
}
}
} catch {
return null
}


@@ -1,3 +1,5 @@
export type ToolPermission = boolean | "allow" | "deny" | "ask"
export interface MessageMeta {
id: string
sessionID: string
@@ -15,7 +17,7 @@ export interface MessageMeta {
cwd: string
root: string
}
- tools?: Record<string, boolean>
+ tools?: Record<string, ToolPermission>
}
export interface OriginalMessageContext {
@@ -28,7 +30,7 @@ export interface OriginalMessageContext {
cwd?: string
root?: string
}
- tools?: Record<string, boolean>
+ tools?: Record<string, ToolPermission>
}
export interface TextPart {


@@ -1,3 +1,4 @@
export * from "./types"
export * from "./loader"
export * from "./merger"
export * from "./skill-content"


@@ -0,0 +1,111 @@
import { describe, it, expect } from "bun:test"
import { resolveSkillContent, resolveMultipleSkills } from "./skill-content"
describe("resolveSkillContent", () => {
it("should return template for existing skill", () => {
// #given: builtin skills with 'frontend-ui-ux' skill
// #when: resolving content for 'frontend-ui-ux'
const result = resolveSkillContent("frontend-ui-ux")
// #then: returns template string
expect(result).not.toBeNull()
expect(typeof result).toBe("string")
expect(result).toContain("Role: Designer-Turned-Developer")
})
it("should return template for 'playwright' skill", () => {
// #given: builtin skills with 'playwright' skill
// #when: resolving content for 'playwright'
const result = resolveSkillContent("playwright")
// #then: returns template string
expect(result).not.toBeNull()
expect(typeof result).toBe("string")
expect(result).toContain("Playwright Browser Automation")
})
it("should return null for non-existent skill", () => {
// #given: builtin skills without 'nonexistent' skill
// #when: resolving content for 'nonexistent'
const result = resolveSkillContent("nonexistent")
// #then: returns null
expect(result).toBeNull()
})
it("should return null for empty string", () => {
// #given: builtin skills
// #when: resolving content for empty string
const result = resolveSkillContent("")
// #then: returns null
expect(result).toBeNull()
})
})
describe("resolveMultipleSkills", () => {
it("should resolve all existing skills", () => {
// #given: list of existing skill names
const skillNames = ["frontend-ui-ux", "playwright"]
// #when: resolving multiple skills
const result = resolveMultipleSkills(skillNames)
// #then: all skills resolved, none not found
expect(result.resolved.size).toBe(2)
expect(result.notFound).toEqual([])
expect(result.resolved.get("frontend-ui-ux")).toContain("Designer-Turned-Developer")
expect(result.resolved.get("playwright")).toContain("Playwright Browser Automation")
})
it("should handle partial success - some skills not found", () => {
// #given: list with existing and non-existing skills
const skillNames = ["frontend-ui-ux", "nonexistent", "playwright", "another-missing"]
// #when: resolving multiple skills
const result = resolveMultipleSkills(skillNames)
// #then: resolves existing skills, lists not found skills
expect(result.resolved.size).toBe(2)
expect(result.notFound).toEqual(["nonexistent", "another-missing"])
expect(result.resolved.get("frontend-ui-ux")).toContain("Designer-Turned-Developer")
expect(result.resolved.get("playwright")).toContain("Playwright Browser Automation")
})
it("should handle empty array", () => {
// #given: empty skill names list
const skillNames: string[] = []
// #when: resolving multiple skills
const result = resolveMultipleSkills(skillNames)
// #then: returns empty resolved and notFound
expect(result.resolved.size).toBe(0)
expect(result.notFound).toEqual([])
})
it("should handle all skills not found", () => {
// #given: list of non-existing skills
const skillNames = ["skill-one", "skill-two", "skill-three"]
// #when: resolving multiple skills
const result = resolveMultipleSkills(skillNames)
// #then: no skills resolved, all in notFound
expect(result.resolved.size).toBe(0)
expect(result.notFound).toEqual(["skill-one", "skill-two", "skill-three"])
})
it("should preserve skill order in resolved map", () => {
// #given: list of skill names in specific order
const skillNames = ["playwright", "frontend-ui-ux"]
// #when: resolving multiple skills
const result = resolveMultipleSkills(skillNames)
// #then: map contains skills with expected keys
expect(result.resolved.has("playwright")).toBe(true)
expect(result.resolved.has("frontend-ui-ux")).toBe(true)
expect(result.resolved.size).toBe(2)
})
})


@@ -0,0 +1,29 @@
import { createBuiltinSkills } from "../builtin-skills/skills"
export function resolveSkillContent(skillName: string): string | null {
const skills = createBuiltinSkills()
const skill = skills.find((s) => s.name === skillName)
return skill?.template ?? null
}
export function resolveMultipleSkills(skillNames: string[]): {
resolved: Map<string, string>
notFound: string[]
} {
const skills = createBuiltinSkills()
const skillMap = new Map(skills.map((s) => [s.name, s.template]))
const resolved = new Map<string, string>()
const notFound: string[] = []
for (const name of skillNames) {
const template = skillMap.get(name)
if (template) {
resolved.set(name, template)
} else {
notFound.push(name)
}
}
return { resolved, notFound }
}
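For context, a minimal sketch of how `resolveMultipleSkills` might feed `LaunchInput.skillContent`, the field the background manager forwards as the session's system prompt. The joining separator and the warning log are illustrative assumptions, not the plugin's actual launch path.

```ts
import { resolveMultipleSkills } from "./skill-content"

// Resolve requested skill names into their templates (caller-side sketch).
const { resolved, notFound } = resolveMultipleSkills(["frontend-ui-ux", "playwright"])

if (notFound.length > 0) {
  // Unknown skills are simply reported here; the real launch path may surface this differently.
  console.warn(`Skills not found: ${notFound.join(", ")}`)
}

// Concatenate templates into one system-prompt block (separator is an assumption).
const skillContent = [...resolved.values()].join("\n\n---\n\n")
```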


@@ -0,0 +1,2 @@
export { TaskToastManager, getTaskToastManager, initTaskToastManager } from "./manager"
export type { TrackedTask, TaskStatus, TaskToastOptions } from "./types"


@@ -0,0 +1,145 @@
import { describe, test, expect, beforeEach, mock } from "bun:test"
import { TaskToastManager } from "./manager"
import type { ConcurrencyManager } from "../background-agent/concurrency"
describe("TaskToastManager", () => {
let mockClient: {
tui: {
showToast: ReturnType<typeof mock>
}
}
let toastManager: TaskToastManager
let mockConcurrencyManager: ConcurrencyManager
beforeEach(() => {
mockClient = {
tui: {
showToast: mock(() => Promise.resolve()),
},
}
mockConcurrencyManager = {
getConcurrencyLimit: mock(() => 5),
} as unknown as ConcurrencyManager
// eslint-disable-next-line @typescript-eslint/no-explicit-any
toastManager = new TaskToastManager(mockClient as any, mockConcurrencyManager)
})
describe("skills in toast message", () => {
test("should display skills when provided", () => {
// #given - a task with skills
const task = {
id: "task_1",
description: "Test task",
agent: "Sisyphus-Junior",
isBackground: true,
skills: ["playwright", "git-master"],
}
// #when - addTask is called
toastManager.addTask(task)
// #then - toast message should include skills
expect(mockClient.tui.showToast).toHaveBeenCalled()
const call = mockClient.tui.showToast.mock.calls[0][0]
expect(call.body.message).toContain("playwright")
expect(call.body.message).toContain("git-master")
})
test("should not display skills section when no skills provided", () => {
// #given - a task without skills
const task = {
id: "task_2",
description: "Test task without skills",
agent: "explore",
isBackground: true,
}
// #when - addTask is called
toastManager.addTask(task)
// #then - toast message should not include skills prefix
expect(mockClient.tui.showToast).toHaveBeenCalled()
const call = mockClient.tui.showToast.mock.calls[0][0]
expect(call.body.message).not.toContain("Skills:")
})
})
describe("concurrency info in toast message", () => {
test("should display concurrency status in toast", () => {
// #given - multiple running tasks
toastManager.addTask({
id: "task_1",
description: "First task",
agent: "explore",
isBackground: true,
})
toastManager.addTask({
id: "task_2",
description: "Second task",
agent: "librarian",
isBackground: true,
})
// #when - third task is added
toastManager.addTask({
id: "task_3",
description: "Third task",
agent: "explore",
isBackground: true,
})
// #then - toast should show concurrency info
expect(mockClient.tui.showToast).toHaveBeenCalledTimes(3)
const lastCall = mockClient.tui.showToast.mock.calls[2][0]
// Should show "Running (3):" header
expect(lastCall.body.message).toContain("Running (3):")
})
test("should display concurrency limit info when available", () => {
// #given - a concurrency manager with known limit
const mockConcurrencyWithCounts = {
getConcurrencyLimit: mock(() => 5),
getRunningCount: mock(() => 2),
getQueuedCount: mock(() => 1),
} as unknown as ConcurrencyManager
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const managerWithConcurrency = new TaskToastManager(mockClient as any, mockConcurrencyWithCounts)
// #when - a task is added
managerWithConcurrency.addTask({
id: "task_1",
description: "Test task",
agent: "explore",
isBackground: true,
})
// #then - toast should show concurrency status like "2/5 slots"
expect(mockClient.tui.showToast).toHaveBeenCalled()
const call = mockClient.tui.showToast.mock.calls[0][0]
expect(call.body.message).toMatch(/\d+\/\d+/)
})
})
describe("combined skills and concurrency display", () => {
test("should display both skills and concurrency info together", () => {
// #given - a task with skills and concurrency manager
const task = {
id: "task_1",
description: "Full info task",
agent: "Sisyphus-Junior",
isBackground: true,
skills: ["frontend-ui-ux"],
}
// #when - addTask is called
toastManager.addTask(task)
// #then - toast should include both skills and task count
expect(mockClient.tui.showToast).toHaveBeenCalled()
const call = mockClient.tui.showToast.mock.calls[0][0]
expect(call.body.message).toContain("frontend-ui-ux")
expect(call.body.message).toContain("Running (1):")
})
})
})


@@ -0,0 +1,199 @@
import type { PluginInput } from "@opencode-ai/plugin"
import type { TrackedTask, TaskStatus } from "./types"
import type { ConcurrencyManager } from "../background-agent/concurrency"
type OpencodeClient = PluginInput["client"]
export class TaskToastManager {
private tasks: Map<string, TrackedTask> = new Map()
private client: OpencodeClient
private concurrencyManager?: ConcurrencyManager
constructor(client: OpencodeClient, concurrencyManager?: ConcurrencyManager) {
this.client = client
this.concurrencyManager = concurrencyManager
}
setConcurrencyManager(manager: ConcurrencyManager): void {
this.concurrencyManager = manager
}
addTask(task: {
id: string
description: string
agent: string
isBackground: boolean
status?: TaskStatus
skills?: string[]
}): void {
const trackedTask: TrackedTask = {
id: task.id,
description: task.description,
agent: task.agent,
status: task.status ?? "running",
startedAt: new Date(),
isBackground: task.isBackground,
skills: task.skills,
}
this.tasks.set(task.id, trackedTask)
this.showTaskListToast(trackedTask)
}
/**
* Update task status
*/
updateTask(id: string, status: TaskStatus): void {
const task = this.tasks.get(id)
if (task) {
task.status = status
}
}
/**
* Remove completed/error task
*/
removeTask(id: string): void {
this.tasks.delete(id)
}
/**
* Get all running tasks (newest first)
*/
getRunningTasks(): TrackedTask[] {
const running = Array.from(this.tasks.values())
.filter((t) => t.status === "running")
.sort((a, b) => b.startedAt.getTime() - a.startedAt.getTime())
return running
}
/**
* Get all queued tasks
*/
getQueuedTasks(): TrackedTask[] {
return Array.from(this.tasks.values())
.filter((t) => t.status === "queued")
.sort((a, b) => a.startedAt.getTime() - b.startedAt.getTime())
}
/**
* Format duration since task started
*/
private formatDuration(startedAt: Date): string {
const seconds = Math.floor((Date.now() - startedAt.getTime()) / 1000)
if (seconds < 60) return `${seconds}s`
const minutes = Math.floor(seconds / 60)
if (minutes < 60) return `${minutes}m ${seconds % 60}s`
const hours = Math.floor(minutes / 60)
return `${hours}h ${minutes % 60}m`
}
private getConcurrencyInfo(): string {
if (!this.concurrencyManager) return ""
const running = this.getRunningTasks()
const queued = this.getQueuedTasks()
const total = running.length + queued.length
const limit = this.concurrencyManager.getConcurrencyLimit("default")
if (limit === Infinity) return ""
return ` [${total}/${limit}]`
}
private buildTaskListMessage(newTask: TrackedTask): string {
const running = this.getRunningTasks()
const queued = this.getQueuedTasks()
const concurrencyInfo = this.getConcurrencyInfo()
const lines: string[] = []
if (running.length > 0) {
lines.push(`Running (${running.length}):${concurrencyInfo}`)
for (const task of running) {
const duration = this.formatDuration(task.startedAt)
const bgIcon = task.isBackground ? "⚡" : "🔄"
const isNew = task.id === newTask.id ? " ← NEW" : ""
const skillsInfo = task.skills?.length ? ` [${task.skills.join(", ")}]` : ""
lines.push(`${bgIcon} ${task.description} (${task.agent})${skillsInfo} - ${duration}${isNew}`)
}
}
if (queued.length > 0) {
if (lines.length > 0) lines.push("")
lines.push(`Queued (${queued.length}):`)
for (const task of queued) {
const bgIcon = task.isBackground ? "⏳" : "⏸️"
const skillsInfo = task.skills?.length ? ` [${task.skills.join(", ")}]` : ""
lines.push(`${bgIcon} ${task.description} (${task.agent})${skillsInfo}`)
}
}
return lines.join("\n")
}
/**
* Show consolidated toast with all running/queued tasks
*/
private showTaskListToast(newTask: TrackedTask): void {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const tuiClient = this.client as any
if (!tuiClient.tui?.showToast) return
const message = this.buildTaskListMessage(newTask)
const running = this.getRunningTasks()
const queued = this.getQueuedTasks()
const title = newTask.isBackground
? `⚡ New Background Task`
: `🔄 New Task Executed`
tuiClient.tui.showToast({
body: {
title,
message: message || `${newTask.description} (${newTask.agent})`,
variant: "info",
duration: running.length + queued.length > 2 ? 5000 : 3000,
},
}).catch(() => {})
}
/**
* Show task completion toast
*/
showCompletionToast(task: { id: string; description: string; duration: string }): void {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const tuiClient = this.client as any
if (!tuiClient.tui?.showToast) return
this.removeTask(task.id)
const remaining = this.getRunningTasks()
const queued = this.getQueuedTasks()
let message = `✅ "${task.description}" finished in ${task.duration}`
if (remaining.length > 0 || queued.length > 0) {
message += `\n\nStill running: ${remaining.length} | Queued: ${queued.length}`
}
tuiClient.tui.showToast({
body: {
title: "Task Completed",
message,
variant: "success",
duration: 5000,
},
}).catch(() => {})
}
}
let instance: TaskToastManager | null = null
export function getTaskToastManager(): TaskToastManager | null {
return instance
}
export function initTaskToastManager(
client: OpencodeClient,
concurrencyManager?: ConcurrencyManager
): TaskToastManager {
instance = new TaskToastManager(client, concurrencyManager)
return instance
}
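A minimal wiring sketch for the singleton above, assuming a plugin-setup call site. The module path, the `setupTaskToasts` and `announceLaunch` helpers, and the example task values are hypothetical; only the manager API itself comes from the file above.

```ts
import type { PluginInput } from "@opencode-ai/plugin"
import type { ConcurrencyManager } from "../background-agent/concurrency"
import { initTaskToastManager, getTaskToastManager } from "./manager" // hypothetical path

// Initialize the singleton once during plugin setup (assumed helper).
export function setupTaskToasts(ctx: PluginInput, concurrency?: ConcurrencyManager): void {
  initTaskToastManager(ctx.client, concurrency)
}

// Later, any feature can announce a launch without holding a reference to the manager.
export function announceLaunch(taskId: string, description: string, agent: string): void {
  getTaskToastManager()?.addTask({
    id: taskId,
    description,
    agent,
    isBackground: true,
  })
}
```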


@@ -0,0 +1,18 @@
export type TaskStatus = "running" | "queued" | "completed" | "error"
export interface TrackedTask {
id: string
description: string
agent: string
status: TaskStatus
startedAt: Date
isBackground: boolean
skills?: string[]
}
export interface TaskToastOptions {
title: string
message: string
variant: "info" | "success" | "warning" | "error"
duration?: number
}


@@ -22,7 +22,7 @@ export const TARGET_TOOLS = new Set([
export const AGENT_TOOLS = new Set([
"task",
"call_omo_agent",
- "background_task",
+ "sisyphus_task",
]);
export const REMINDER_MESSAGE = `
@@ -30,13 +30,13 @@ export const REMINDER_MESSAGE = `
You called a search/fetch tool directly without leveraging specialized agents.
- RECOMMENDED: Use background_task with explore/librarian agents for better results:
+ RECOMMENDED: Use sisyphus_task with explore/librarian agents for better results:
\`\`\`
// Parallel exploration - fire multiple agents simultaneously
- background_task(agent="explore", prompt="Find all files matching pattern X")
- background_task(agent="explore", prompt="Search for implementation of Y")
- background_task(agent="librarian", prompt="Lookup documentation for Z")
+ sisyphus_task(agent="explore", prompt="Find all files matching pattern X")
+ sisyphus_task(agent="explore", prompt="Search for implementation of Y")
+ sisyphus_task(agent="librarian", prompt="Lookup documentation for Z")
// Then continue your work while they run in background
// System will notify you when each completes
@@ -48,5 +48,5 @@ WHY:
- Specialized agents have domain expertise
- Reduces context window usage in main session
- ALWAYS prefer: Multiple parallel background_task calls > Direct tool calls
+ ALWAYS prefer: Multiple parallel sisyphus_task calls > Direct tool calls
`;


@@ -257,7 +257,7 @@ describe("executeCompact lock management", () => {
expect(mockClient.session.summarize).toHaveBeenCalledWith(
expect.objectContaining({
path: { id: sessionID },
- body: { providerID: "anthropic", modelID: "claude-opus-4-5" },
+ body: { providerID: "anthropic", modelID: "claude-opus-4-5", auto: true },
}),
)


@@ -409,7 +409,7 @@ export async function executeCompact(
try {
await (client as Client).session.prompt_async({
path: { id: sessionID },
- body: { parts: [{ type: "text", text: "Continue" }] },
+ body: { auto: true } as never,
query: { directory },
});
} catch {}
@@ -497,21 +497,12 @@ export async function executeCompact(
})
.catch(() => {});
+ const summarizeBody = { providerID, modelID, auto: true }
await (client as Client).session.summarize({
path: { id: sessionID },
- body: { providerID, modelID },
+ body: summarizeBody as never,
query: { directory },
});
- setTimeout(async () => {
-   try {
-     await (client as Client).session.prompt_async({
-       path: { id: sessionID },
-       body: { parts: [{ type: "text", text: "Continue" }] },
-       query: { directory },
-     });
-   } catch {}
- }, 500);
return;
} catch {
const delay =


@@ -27,14 +27,18 @@ import { cacheToolInput, getToolInput } from "./tool-input-cache"
import { recordToolUse, recordToolResult, getTranscriptPath, recordUserMessage } from "./transcript"
import type { PluginConfig } from "./types"
import { log, isHookDisabled } from "../../shared"
- import { injectHookMessage } from "../../features/hook-message-injector"
import { detectKeywordsWithType, removeCodeBlocks } from "../keyword-detector"
+ import type { ContextCollector } from "../../features/context-injector"
const sessionFirstMessageProcessed = new Set<string>()
const sessionErrorState = new Map<string, { hasError: boolean; errorMessage?: string }>()
const sessionInterruptState = new Map<string, { interrupted: boolean }>()
- export function createClaudeCodeHooksHook(ctx: PluginInput, config: PluginConfig = {}) {
+ export function createClaudeCodeHooksHook(
+   ctx: PluginInput,
+   config: PluginConfig = {},
+   contextCollector?: ContextCollector
+ ) {
return {
"experimental.session.compacting": async (
input: { sessionID: string },
@@ -164,27 +168,34 @@ export function createClaudeCodeHooksHook(ctx: PluginInput, config: PluginConfig
output.parts[idx].text = `${hookContent}\n\n${output.parts[idx].text ?? ""}`
log("UserPromptSubmit hooks prepended to first message parts directly", { sessionID: input.sessionID })
}
- } else {
-   const message = output.message as {
-     agent?: string
-     model?: { modelID?: string; providerID?: string }
-     path?: { cwd?: string; root?: string }
-     tools?: Record<string, boolean>
-   }
-   const success = injectHookMessage(input.sessionID, hookContent, {
-     agent: message.agent,
-     model: message.model,
-     path: message.path ?? { cwd: ctx.directory, root: "/" },
-     tools: message.tools,
-   })
-   log(success ? "Hook message injected via file system" : "File injection failed", {
-     sessionID: input.sessionID,
-   })
- }
+ } else if (contextCollector) {
+   log("[DEBUG] Registering hook content to contextCollector", {
+     sessionID: input.sessionID,
+     contentLength: hookContent.length,
+     contentPreview: hookContent.slice(0, 100),
+   })
+   contextCollector.register(input.sessionID, {
+     id: "hook-context",
+     source: "custom",
+     content: hookContent,
+     priority: "high",
+   })
+   log("Hook content registered for synthetic message injection", {
+     sessionID: input.sessionID,
+     contentLength: hookContent.length,
+   })
+ } else {
+   const idx = output.parts.findIndex((p) => p.type === "text" && p.text)
+   if (idx >= 0) {
+     output.parts[idx].text = `${hookContent}\n\n${output.parts[idx].text ?? ""}`
+     log("Hook content prepended to message (fallback)", {
+       sessionID: input.sessionID,
+     })
+   }
+ }
}
}
},
"tool.execute.before": async (
@@ -239,7 +250,7 @@ export function createClaudeCodeHooksHook(ctx: PluginInput, config: PluginConfig
const cachedInput = getToolInput(input.sessionID, input.tool, input.callID) || {}
// Use metadata if available and non-empty, otherwise wrap output.output in a structured object
- // This ensures plugin tools (call_omo_agent, background_task, task) that return strings
+ // This ensures plugin tools (call_omo_agent, sisyphus_task, task) that return strings
// get their results properly recorded in transcripts instead of empty {}
const metadata = output.metadata as Record<string, unknown> | undefined
const hasMetadata = metadata && typeof metadata === "object" && Object.keys(metadata).length > 0


@@ -25,3 +25,7 @@ export { createThinkingBlockValidatorHook } from "./thinking-block-validator";
export { createRalphLoopHook, type RalphLoopHook } from "./ralph-loop";
export { createAutoSlashCommandHook } from "./auto-slash-command";
export { createEditErrorRecoveryHook } from "./edit-error-recovery";
export { createPrometheusMdOnlyHook } from "./prometheus-md-only";
export { createTaskResumeInfoHook } from "./task-resume-info";
export { createStartWorkHook } from "./start-work";
export { createSisyphusOrchestratorHook } from "./sisyphus-orchestrator";

View File

@@ -101,14 +101,14 @@ TELL THE USER WHAT AGENTS YOU WILL LEVERAGE NOW TO SATISFY USER'S REQUEST.
## EXECUTION RULES
- **TODO**: Track EVERY step. Mark complete IMMEDIATELY after each.
-- **PARALLEL**: Fire independent agent calls simultaneously via background_task - NEVER wait sequentially.
-- **BACKGROUND FIRST**: Use background_task for exploration/research agents (10+ concurrent if needed).
+- **PARALLEL**: Fire independent agent calls simultaneously via sisyphus_task(background=true) - NEVER wait sequentially.
+- **BACKGROUND FIRST**: Use sisyphus_task for exploration/research agents (10+ concurrent if needed).
- **VERIFY**: Re-read request after completion. Check ALL requirements met before reporting done.
- **DELEGATE**: Don't do everything yourself - orchestrate specialized agents for their strengths.
## WORKFLOW
1. Analyze the request and identify required capabilities
-2. Spawn exploration/librarian agents via background_task in PARALLEL (10+ if needed)
+2. Spawn exploration/librarian agents via sisyphus_task(background=true) in PARALLEL (10+ if needed)
3. Always Use Plan agent with gathered context to create detailed work breakdown
4. Execute with continuous verification against original requirements
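
As an illustration of the fan-out described in the rules above, the orchestrator would fire several background tasks back-to-back. The exact parameter names (`category`, `background`, `prompt`) and the category values are assumptions based on how `sisyphus_task` is invoked elsewhere in this change set:

```
sisyphus_task(category="explore", background=true, prompt="Map the session compaction flow and list every call site of session.summarize")
sisyphus_task(category="librarian", background=true, prompt="Summarize how hook messages are currently injected into sessions")
```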

View File

@@ -169,9 +169,10 @@ export function createPreemptiveCompactionHook(
      })
    }
+   const summarizeBody = { providerID, modelID, auto: true }
    await ctx.client.session.summarize({
      path: { id: sessionID },
-     body: { providerID, modelID },
+     body: summarizeBody as never,
      query: { directory: ctx.directory },
    })
@@ -187,22 +188,6 @@
      .catch(() => {})
    state.compactionInProgress.delete(sessionID)
-   setTimeout(async () => {
-     try {
-       const messageDir = getMessageDir(sessionID)
-       const storedMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
-       await ctx.client.session.promptAsync({
-         path: { id: sessionID },
-         body: {
-           agent: storedMessage?.agent,
-           parts: [{ type: "text", text: "Continue" }],
-         },
-         query: { directory: ctx.directory },
-       })
-     } catch {}
-   }, 500)
    return
  } catch (err) {
    log("[preemptive-compaction] compaction failed", { sessionID, error: err })

View File

@@ -0,0 +1,30 @@
export const HOOK_NAME = "prometheus-md-only"
export const PROMETHEUS_AGENTS = ["Prometheus (Planner)"]
export const ALLOWED_EXTENSIONS = [".md"]
export const ALLOWED_PATH_PREFIX = ".sisyphus/"
export const BLOCKED_TOOLS = ["Write", "Edit", "write", "edit"]
export const PLANNING_CONSULT_WARNING = `
---
[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]
You are being invoked by Prometheus (Planner), a READ-ONLY planning agent.
**CRITICAL CONSTRAINTS:**
- DO NOT modify any files (no Write, Edit, or any file mutations)
- DO NOT execute commands that change system state
- DO NOT create, delete, or rename files
- ONLY provide analysis, recommendations, and information
**YOUR ROLE**: Provide consultation, research, and analysis to assist with planning.
Return your findings and recommendations. The actual implementation will be handled separately after planning is complete.
---
`

View File

@@ -0,0 +1,298 @@
import { describe, expect, test, beforeEach, afterEach, mock } from "bun:test"
import { mkdirSync, rmSync, writeFileSync } from "node:fs"
import { join } from "node:path"
import { createPrometheusMdOnlyHook } from "./index"
import { MESSAGE_STORAGE } from "../../features/hook-message-injector"
describe("prometheus-md-only", () => {
const TEST_SESSION_ID = "test-session-prometheus"
let testMessageDir: string
function createMockPluginInput() {
return {
client: {},
directory: "/tmp/test",
} as never
}
function setupMessageStorage(sessionID: string, agent: string): void {
testMessageDir = join(MESSAGE_STORAGE, sessionID)
mkdirSync(testMessageDir, { recursive: true })
const messageContent = {
agent,
model: { providerID: "test", modelID: "test-model" },
}
writeFileSync(
join(testMessageDir, "msg_001.json"),
JSON.stringify(messageContent)
)
}
afterEach(() => {
if (testMessageDir) {
try {
rmSync(testMessageDir, { recursive: true, force: true })
} catch {
// ignore
}
}
})
describe("with Prometheus agent in message storage", () => {
beforeEach(() => {
setupMessageStorage(TEST_SESSION_ID, "Prometheus (Planner)")
})
test("should block Prometheus from writing non-.md files", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "/path/to/file.ts" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).rejects.toThrow("can only write/edit .md files")
})
test("should allow Prometheus to write .md files inside .sisyphus/", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "/project/.sisyphus/plans/work-plan.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should block Prometheus from writing .md files outside .sisyphus/", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "/path/to/README.md" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).rejects.toThrow("can only write/edit .md files inside .sisyphus/")
})
test("should block Edit tool for non-.md files", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Edit",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "/path/to/code.py" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).rejects.toThrow("can only write/edit .md files")
})
test("should not affect non-Write/Edit tools", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Read",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "/path/to/file.ts" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should handle missing filePath gracefully", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: {},
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should inject read-only warning when Prometheus calls sisyphus_task", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "sisyphus_task",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { prompt: "Analyze this codebase" },
}
// #when
await hook["tool.execute.before"](input, output)
// #then
expect(output.args.prompt).toContain("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]")
expect(output.args.prompt).toContain("DO NOT modify any files")
})
test("should inject read-only warning when Prometheus calls task", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "task",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { prompt: "Research this library" },
}
// #when
await hook["tool.execute.before"](input, output)
// #then
expect(output.args.prompt).toContain("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]")
})
test("should inject read-only warning when Prometheus calls call_omo_agent", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "call_omo_agent",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { prompt: "Find implementation examples" },
}
// #when
await hook["tool.execute.before"](input, output)
// #then
expect(output.args.prompt).toContain("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]")
})
test("should not double-inject warning if already present", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "sisyphus_task",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const promptWithWarning = "Some prompt [SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION] already here"
const output = {
args: { prompt: promptWithWarning },
}
// #when
await hook["tool.execute.before"](input, output)
// #then
const occurrences = (output.args.prompt as string).split("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]").length - 1
expect(occurrences).toBe(1)
})
})
describe("with non-Prometheus agent in message storage", () => {
beforeEach(() => {
setupMessageStorage(TEST_SESSION_ID, "Sisyphus")
})
test("should not affect non-Prometheus agents", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const output = {
args: { filePath: "/path/to/file.ts" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
test("should not inject warning for non-Prometheus agents calling sisyphus_task", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "sisyphus_task",
sessionID: TEST_SESSION_ID,
callID: "call-1",
}
const originalPrompt = "Implement this feature"
const output = {
args: { prompt: originalPrompt },
}
// #when
await hook["tool.execute.before"](input, output)
// #then
expect(output.args.prompt).toBe(originalPrompt)
expect(output.args.prompt).not.toContain("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]")
})
})
describe("without message storage", () => {
test("should handle missing session gracefully (no agent found)", async () => {
// #given
const hook = createPrometheusMdOnlyHook(createMockPluginInput())
const input = {
tool: "Write",
sessionID: "non-existent-session",
callID: "call-1",
}
const output = {
args: { filePath: "/path/to/file.ts" },
}
// #when / #then
await expect(
hook["tool.execute.before"](input, output)
).resolves.toBeUndefined()
})
})
})

View File

@@ -0,0 +1,97 @@
import type { PluginInput } from "@opencode-ai/plugin"
import { existsSync, readdirSync } from "node:fs"
import { join } from "node:path"
import { HOOK_NAME, PROMETHEUS_AGENTS, ALLOWED_EXTENSIONS, ALLOWED_PATH_PREFIX, BLOCKED_TOOLS, PLANNING_CONSULT_WARNING } from "./constants"
import { findNearestMessageWithFields, MESSAGE_STORAGE } from "../../features/hook-message-injector"
import { log } from "../../shared/logger"
export * from "./constants"
function isAllowedFile(filePath: string): boolean {
const hasAllowedExtension = ALLOWED_EXTENSIONS.some(ext => filePath.endsWith(ext))
const isInAllowedPath = filePath.includes(ALLOWED_PATH_PREFIX)
return hasAllowedExtension && isInAllowedPath
}
function getMessageDir(sessionID: string): string | null {
if (!existsSync(MESSAGE_STORAGE)) return null
const directPath = join(MESSAGE_STORAGE, sessionID)
if (existsSync(directPath)) return directPath
for (const dir of readdirSync(MESSAGE_STORAGE)) {
const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
if (existsSync(sessionPath)) return sessionPath
}
return null
}
const TASK_TOOLS = ["sisyphus_task", "task", "call_omo_agent"]
function getAgentFromSession(sessionID: string): string | undefined {
const messageDir = getMessageDir(sessionID)
if (!messageDir) return undefined
return findNearestMessageWithFields(messageDir)?.agent
}
export function createPrometheusMdOnlyHook(_ctx: PluginInput) {
return {
"tool.execute.before": async (
input: { tool: string; sessionID: string; callID: string },
output: { args: Record<string, unknown>; message?: string }
): Promise<void> => {
const agentName = getAgentFromSession(input.sessionID)
if (!agentName || !PROMETHEUS_AGENTS.includes(agentName)) {
return
}
const toolName = input.tool
// Inject read-only warning for task tools called by Prometheus
if (TASK_TOOLS.includes(toolName)) {
const prompt = output.args.prompt as string | undefined
if (prompt && !prompt.includes("[SYSTEM DIRECTIVE - READ-ONLY PLANNING CONSULTATION]")) {
output.args.prompt = prompt + PLANNING_CONSULT_WARNING
log(`[${HOOK_NAME}] Injected read-only planning warning to ${toolName}`, {
sessionID: input.sessionID,
tool: toolName,
agent: agentName,
})
}
return
}
if (!BLOCKED_TOOLS.includes(toolName)) {
return
}
const filePath = (output.args.filePath ?? output.args.path ?? output.args.file) as string | undefined
if (!filePath) {
return
}
if (!isAllowedFile(filePath)) {
log(`[${HOOK_NAME}] Blocked: Prometheus can only write to .sisyphus/*.md`, {
sessionID: input.sessionID,
tool: toolName,
filePath,
agent: agentName,
})
throw new Error(
`[${HOOK_NAME}] Prometheus (Planner) can only write/edit .md files inside .sisyphus/ directory. ` +
`Attempted to modify: ${filePath}. ` +
`Prometheus is a READ-ONLY planner. Use /start-work to execute the plan.`
)
}
log(`[${HOOK_NAME}] Allowed: .sisyphus/*.md write permitted`, {
sessionID: input.sessionID,
tool: toolName,
filePath,
agent: agentName,
})
},
}
}
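
For reference, a few illustrative inputs to `isAllowedFile` above (the paths are made-up examples):

```ts
isAllowedFile("/project/.sisyphus/plans/work-plan.md") // true  - .md inside .sisyphus/
isAllowedFile("/project/README.md")                    // false - .md but outside .sisyphus/
isAllowedFile("/project/.sisyphus/notes.txt")          // false - wrong extension
```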

View File

@@ -0,0 +1,829 @@
import { describe, expect, test, beforeEach, afterEach, mock } from "bun:test"
import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs"
import { join } from "node:path"
import { tmpdir } from "node:os"
import { createSisyphusOrchestratorHook } from "./index"
import {
writeBoulderState,
clearBoulderState,
readBoulderState,
} from "../../features/boulder-state"
import type { BoulderState } from "../../features/boulder-state"
import { MESSAGE_STORAGE } from "../../features/hook-message-injector"
describe("sisyphus-orchestrator hook", () => {
const TEST_DIR = join(tmpdir(), "sisyphus-orchestrator-test-" + Date.now())
const SISYPHUS_DIR = join(TEST_DIR, ".sisyphus")
function createMockPluginInput(overrides?: { promptMock?: ReturnType<typeof mock> }) {
const promptMock = overrides?.promptMock ?? mock(() => Promise.resolve())
return {
directory: TEST_DIR,
client: {
session: {
prompt: promptMock,
},
},
_promptMock: promptMock,
} as unknown as Parameters<typeof createSisyphusOrchestratorHook>[0] & { _promptMock: ReturnType<typeof mock> }
}
function setupMessageStorage(sessionID: string, agent: string): void {
const messageDir = join(MESSAGE_STORAGE, sessionID)
if (!existsSync(messageDir)) {
mkdirSync(messageDir, { recursive: true })
}
const messageData = {
agent,
model: { providerID: "anthropic", modelID: "claude-opus-4-5" },
}
writeFileSync(join(messageDir, "msg_test001.json"), JSON.stringify(messageData))
}
function cleanupMessageStorage(sessionID: string): void {
const messageDir = join(MESSAGE_STORAGE, sessionID)
if (existsSync(messageDir)) {
rmSync(messageDir, { recursive: true, force: true })
}
}
beforeEach(() => {
if (!existsSync(TEST_DIR)) {
mkdirSync(TEST_DIR, { recursive: true })
}
if (!existsSync(SISYPHUS_DIR)) {
mkdirSync(SISYPHUS_DIR, { recursive: true })
}
clearBoulderState(TEST_DIR)
})
afterEach(() => {
clearBoulderState(TEST_DIR)
if (existsSync(TEST_DIR)) {
rmSync(TEST_DIR, { recursive: true, force: true })
}
})
describe("tool.execute.after handler", () => {
test("should ignore non-sisyphus_task tools", async () => {
// #given - hook and non-sisyphus_task tool
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Test Tool",
output: "Original output",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "other_tool", sessionID: "session-123" },
output
)
// #then - output unchanged
expect(output.output).toBe("Original output")
})
test("should not transform when caller is not orchestrator-sisyphus", async () => {
// #given - boulder state exists but caller agent in message storage is not orchestrator
const sessionID = "session-non-orchestrator-test"
setupMessageStorage(sessionID, "other-agent")
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Task completed successfully",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - output unchanged because caller is not orchestrator
expect(output.output).toBe("Task completed successfully")
cleanupMessageStorage(sessionID)
})
test("should append standalone verification when no boulder state but caller is orchestrator", async () => {
// #given - no boulder state, but caller is orchestrator
const sessionID = "session-no-boulder-test"
setupMessageStorage(sessionID, "orchestrator-sisyphus")
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Task completed successfully",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - standalone verification reminder appended
expect(output.output).toContain("Task completed successfully")
expect(output.output).toContain("MANDATORY VERIFICATION")
expect(output.output).toContain("sisyphus_task(resume=")
cleanupMessageStorage(sessionID)
})
test("should transform output when caller is orchestrator-sisyphus with boulder state", async () => {
// #given - orchestrator-sisyphus caller with boulder state
const sessionID = "session-transform-test"
setupMessageStorage(sessionID, "orchestrator-sisyphus")
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [x] Task 2")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Task completed successfully",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - output should be transformed (original output replaced)
expect(output.output).not.toContain("Task completed successfully")
expect(output.output).toContain("SUBAGENT WORK COMPLETED")
expect(output.output).toContain("test-plan")
expect(output.output).toContain("SUBAGENTS LIE")
expect(output.output).toContain("sisyphus_task(resume=")
cleanupMessageStorage(sessionID)
})
test("should still transform when plan is complete (shows progress)", async () => {
// #given - boulder state with complete plan, orchestrator caller
const sessionID = "session-complete-plan-test"
setupMessageStorage(sessionID, "orchestrator-sisyphus")
const planPath = join(TEST_DIR, "complete-plan.md")
writeFileSync(planPath, "# Plan\n- [x] Task 1\n- [x] Task 2")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "complete-plan",
}
writeBoulderState(TEST_DIR, state)
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Original output",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - output transformed even when complete (shows 2/2 done)
expect(output.output).toContain("SUBAGENT WORK COMPLETED")
expect(output.output).toContain("2/2 done")
expect(output.output).toContain("0 left")
cleanupMessageStorage(sessionID)
})
test("should append session ID to boulder state if not present", async () => {
// #given - boulder state without session-append-test, orchestrator caller
const sessionID = "session-append-test"
setupMessageStorage(sessionID, "orchestrator-sisyphus")
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Task output",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - sessionID should be appended
const updatedState = readBoulderState(TEST_DIR)
expect(updatedState?.session_ids).toContain(sessionID)
cleanupMessageStorage(sessionID)
})
test("should not duplicate existing session ID", async () => {
// #given - boulder state already has session-dup-test, orchestrator caller
const sessionID = "session-dup-test"
setupMessageStorage(sessionID, "orchestrator-sisyphus")
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [sessionID],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Task output",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - should still have only one sessionID
const updatedState = readBoulderState(TEST_DIR)
const count = updatedState?.session_ids.filter((id) => id === sessionID).length
expect(count).toBe(1)
cleanupMessageStorage(sessionID)
})
test("should include boulder.json path and notepad path in transformed output", async () => {
// #given - boulder state, orchestrator caller
const sessionID = "session-path-test"
setupMessageStorage(sessionID, "orchestrator-sisyphus")
const planPath = join(TEST_DIR, "my-feature.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [ ] Task 2\n- [x] Task 3")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "my-feature",
}
writeBoulderState(TEST_DIR, state)
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Task completed",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - output should contain plan name and progress
expect(output.output).toContain("my-feature")
expect(output.output).toContain("1/3 done")
expect(output.output).toContain("2 left")
cleanupMessageStorage(sessionID)
})
test("should include resume and checkbox instructions in reminder", async () => {
// #given - boulder state, orchestrator caller
const sessionID = "session-resume-test"
setupMessageStorage(sessionID, "orchestrator-sisyphus")
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Sisyphus Task",
output: "Task completed",
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "sisyphus_task", sessionID },
output
)
// #then - should include resume instructions and verification
expect(output.output).toContain("sisyphus_task(resume=")
expect(output.output).toContain("[x]")
expect(output.output).toContain("MANDATORY VERIFICATION")
cleanupMessageStorage(sessionID)
})
describe("Write/Edit tool direct work reminder", () => {
const ORCHESTRATOR_SESSION = "orchestrator-write-test"
beforeEach(() => {
setupMessageStorage(ORCHESTRATOR_SESSION, "orchestrator-sisyphus")
})
afterEach(() => {
cleanupMessageStorage(ORCHESTRATOR_SESSION)
})
test("should append delegation reminder when orchestrator writes outside .sisyphus/", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Write",
output: "File written successfully",
metadata: { filePath: "/path/to/code.ts" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toContain("DELEGATION REQUIRED")
expect(output.output).toContain("ORCHESTRATOR, not an IMPLEMENTER")
expect(output.output).toContain("sisyphus_task")
})
test("should append delegation reminder when orchestrator edits outside .sisyphus/", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const output = {
title: "Edit",
output: "File edited successfully",
metadata: { filePath: "/src/components/button.tsx" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Edit", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toContain("DELEGATION REQUIRED")
})
test("should NOT append reminder when orchestrator writes inside .sisyphus/", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const originalOutput = "File written successfully"
const output = {
title: "Write",
output: originalOutput,
metadata: { filePath: "/project/.sisyphus/plans/work-plan.md" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toBe(originalOutput)
expect(output.output).not.toContain("DELEGATION REQUIRED")
})
test("should NOT append reminder when non-orchestrator writes outside .sisyphus/", async () => {
// #given
const nonOrchestratorSession = "non-orchestrator-session"
setupMessageStorage(nonOrchestratorSession, "Sisyphus-Junior")
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const originalOutput = "File written successfully"
const output = {
title: "Write",
output: originalOutput,
metadata: { filePath: "/path/to/code.ts" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: nonOrchestratorSession },
output
)
// #then
expect(output.output).toBe(originalOutput)
expect(output.output).not.toContain("DELEGATION REQUIRED")
cleanupMessageStorage(nonOrchestratorSession)
})
test("should NOT append reminder for read-only tools", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const originalOutput = "File content"
const output = {
title: "Read",
output: originalOutput,
metadata: { filePath: "/path/to/code.ts" },
}
// #when
await hook["tool.execute.after"](
{ tool: "Read", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toBe(originalOutput)
})
test("should handle missing filePath gracefully", async () => {
// #given
const hook = createSisyphusOrchestratorHook(createMockPluginInput())
const originalOutput = "File written successfully"
const output = {
title: "Write",
output: originalOutput,
metadata: {},
}
// #when
await hook["tool.execute.after"](
{ tool: "Write", sessionID: ORCHESTRATOR_SESSION },
output
)
// #then
expect(output.output).toBe(originalOutput)
})
})
})
describe("session.idle handler (boulder continuation)", () => {
const MAIN_SESSION_ID = "main-session-123"
beforeEach(() => {
mock.module("../../features/claude-code-session-state", () => ({
getMainSessionID: () => MAIN_SESSION_ID,
subagentSessions: new Set<string>(),
}))
setupMessageStorage(MAIN_SESSION_ID, "orchestrator-sisyphus")
})
afterEach(() => {
cleanupMessageStorage(MAIN_SESSION_ID)
})
test("should inject continuation when boulder has incomplete tasks", async () => {
// #given - boulder state with incomplete plan
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [x] Task 2\n- [ ] Task 3")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should call prompt with continuation
expect(mockInput._promptMock).toHaveBeenCalled()
const callArgs = mockInput._promptMock.mock.calls[0][0]
expect(callArgs.path.id).toBe(MAIN_SESSION_ID)
expect(callArgs.body.parts[0].text).toContain("BOULDER CONTINUATION")
expect(callArgs.body.parts[0].text).toContain("2 remaining")
})
test("should not inject when no boulder state exists", async () => {
// #given - no boulder state
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should not call prompt
expect(mockInput._promptMock).not.toHaveBeenCalled()
})
test("should not inject when boulder plan is complete", async () => {
// #given - boulder state with complete plan
const planPath = join(TEST_DIR, "complete-plan.md")
writeFileSync(planPath, "# Plan\n- [x] Task 1\n- [x] Task 2")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "complete-plan",
}
writeBoulderState(TEST_DIR, state)
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should not call prompt
expect(mockInput._promptMock).not.toHaveBeenCalled()
})
test("should skip when abort error occurred before idle", async () => {
// #given - boulder state with incomplete plan
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when - send abort error then idle
await hook.handler({
event: {
type: "session.error",
properties: {
sessionID: MAIN_SESSION_ID,
error: { name: "AbortError", message: "aborted" },
},
},
})
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should not call prompt
expect(mockInput._promptMock).not.toHaveBeenCalled()
})
test("should skip when background tasks are running", async () => {
// #given - boulder state with incomplete plan
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const mockBackgroundManager = {
getTasksByParentSession: () => [{ status: "running" }],
}
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput, {
directory: TEST_DIR,
backgroundManager: mockBackgroundManager as any,
})
// #when
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should not call prompt
expect(mockInput._promptMock).not.toHaveBeenCalled()
})
test("should clear abort state on message.updated", async () => {
// #given - boulder with incomplete plan
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when - abort error, then message update, then idle
await hook.handler({
event: {
type: "session.error",
properties: {
sessionID: MAIN_SESSION_ID,
error: { name: "AbortError" },
},
},
})
await hook.handler({
event: {
type: "message.updated",
properties: { info: { sessionID: MAIN_SESSION_ID, role: "user" } },
},
})
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should call prompt because abort state was cleared
expect(mockInput._promptMock).toHaveBeenCalled()
})
test("should include plan progress in continuation prompt", async () => {
// #given - boulder state with specific progress
const planPath = join(TEST_DIR, "progress-plan.md")
writeFileSync(planPath, "# Plan\n- [x] Task 1\n- [x] Task 2\n- [ ] Task 3\n- [ ] Task 4")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "progress-plan",
}
writeBoulderState(TEST_DIR, state)
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should include progress
const callArgs = mockInput._promptMock.mock.calls[0][0]
expect(callArgs.body.parts[0].text).toContain("2/4 completed")
expect(callArgs.body.parts[0].text).toContain("2 remaining")
})
test("should not inject when last agent is not orchestrator-sisyphus", async () => {
// #given - boulder state with incomplete plan, but last agent is NOT orchestrator-sisyphus
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [ ] Task 2")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
// #given - last agent is NOT orchestrator-sisyphus
cleanupMessageStorage(MAIN_SESSION_ID)
setupMessageStorage(MAIN_SESSION_ID, "Sisyphus")
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should NOT call prompt because agent is not orchestrator-sisyphus
expect(mockInput._promptMock).not.toHaveBeenCalled()
})
test("should cleanup on session.deleted", async () => {
// #given - boulder state
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: [MAIN_SESSION_ID],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const mockInput = createMockPluginInput()
const hook = createSisyphusOrchestratorHook(mockInput)
// #when - create abort state then delete
await hook.handler({
event: {
type: "session.error",
properties: {
sessionID: MAIN_SESSION_ID,
error: { name: "AbortError" },
},
},
})
await hook.handler({
event: {
type: "session.deleted",
properties: { info: { id: MAIN_SESSION_ID } },
},
})
// Re-create boulder after deletion
writeBoulderState(TEST_DIR, state)
// Trigger idle - should inject because state was cleaned up
await hook.handler({
event: {
type: "session.idle",
properties: { sessionID: MAIN_SESSION_ID },
},
})
// #then - should call prompt because session state was cleaned
expect(mockInput._promptMock).toHaveBeenCalled()
})
})
})

View File

@@ -0,0 +1,660 @@
import type { PluginInput } from "@opencode-ai/plugin"
import { execSync } from "node:child_process"
import { existsSync, readdirSync } from "node:fs"
import { join } from "node:path"
import {
readBoulderState,
appendSessionId,
getPlanProgress,
} from "../../features/boulder-state"
import { getMainSessionID, subagentSessions } from "../../features/claude-code-session-state"
import { findNearestMessageWithFields, MESSAGE_STORAGE } from "../../features/hook-message-injector"
import { log } from "../../shared/logger"
import type { BackgroundManager } from "../../features/background-agent"
export const HOOK_NAME = "sisyphus-orchestrator"
const ALLOWED_PATH_PREFIX = ".sisyphus/"
const WRITE_EDIT_TOOLS = ["Write", "Edit", "write", "edit"]
const DIRECT_WORK_REMINDER = `
---
[SYSTEM REMINDER - DELEGATION REQUIRED]
You just performed direct file modifications outside \`.sisyphus/\`.
**You are an ORCHESTRATOR, not an IMPLEMENTER.**
As an orchestrator, you should:
- **DELEGATE** implementation work to subagents via \`sisyphus_task\`
- **VERIFY** the work done by subagents
- **COORDINATE** multiple tasks and ensure completion
You should NOT:
- Write code directly (except for \`.sisyphus/\` files like plans and notepads)
- Make direct file edits outside \`.sisyphus/\`
- Implement features yourself
**If you need to make changes:**
1. Use \`sisyphus_task\` to delegate to an appropriate subagent
2. Provide clear instructions in the prompt
3. Verify the subagent's work after completion
---
`
const BOULDER_CONTINUATION_PROMPT = `[SYSTEM REMINDER - BOULDER CONTINUATION]
You have an active work plan with incomplete tasks. Continue working.
RULES:
- Proceed without asking for permission
- Mark each checkbox [x] in the plan file when done
- Use the notepad at .sisyphus/notepads/{PLAN_NAME}/ to record learnings
- Do not stop until all tasks are complete
- If blocked, document the blocker and move to the next task`
const VERIFICATION_REMINDER = `**MANDATORY VERIFICATION - SUBAGENTS LIE**
Subagents FREQUENTLY claim completion when:
- Tests are actually FAILING
- Code has type/lint ERRORS
- Implementation is INCOMPLETE
- Patterns were NOT followed
**YOU MUST VERIFY EVERYTHING YOURSELF:**
1. Run \`lsp_diagnostics\` on changed files - Must be CLEAN
2. Run tests yourself - Must PASS (not "agent said it passed")
3. Read the actual code - Must match requirements
4. Check build/typecheck - Must succeed
DO NOT TRUST THE AGENT'S SELF-REPORT.
VERIFY EACH CLAIM WITH YOUR OWN TOOL CALLS.
**HANDS-ON QA REQUIRED (after ALL tasks complete):**
| Deliverable Type | Verification Tool | Action |
|------------------|-------------------|--------|
| **Frontend/UI** | \`/playwright\` skill | Navigate, interact, screenshot evidence |
| **TUI/CLI** | \`interactive_bash\` (tmux) | Run interactively, verify output |
| **API/Backend** | \`bash\` with curl | Send requests, verify responses |
Static analysis CANNOT catch: visual bugs, animation issues, user flow breakages, integration problems.
**FAILURE TO DO HANDS-ON QA = INCOMPLETE WORK.**`
const ORCHESTRATOR_DELEGATION_REQUIRED = `
---
⚠️⚠️⚠️ [CRITICAL SYSTEM DIRECTIVE - DELEGATION REQUIRED] ⚠️⚠️⚠️
**STOP. YOU ARE VIOLATING ORCHESTRATOR PROTOCOL.**
You (orchestrator-sisyphus) are attempting to directly modify a file outside \`.sisyphus/\`.
**Path attempted:** $FILE_PATH
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚫 **THIS IS FORBIDDEN** (except for VERIFICATION purposes)
As an ORCHESTRATOR, you MUST:
1. **DELEGATE** all implementation work via \`sisyphus_task\`
2. **VERIFY** the work done by subagents (reading files is OK)
3. **COORDINATE** - you orchestrate, you don't implement
**ALLOWED direct file operations:**
- Files inside \`.sisyphus/\` (plans, notepads, drafts)
- Reading files for verification
- Running diagnostics/tests
**FORBIDDEN direct file operations:**
- Writing/editing source code
- Creating new files outside \`.sisyphus/\`
- Any implementation work
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**IF THIS IS FOR VERIFICATION:**
Proceed if you are verifying subagent work by making a small fix.
But for any substantial changes, USE \`sisyphus_task\`.
**CORRECT APPROACH:**
\`\`\`
sisyphus_task(
category="...",
prompt="[specific single task with clear acceptance criteria]"
)
\`\`\`
⚠️⚠️⚠️ DELEGATE. DON'T IMPLEMENT. ⚠️⚠️⚠️
---
`
const SINGLE_TASK_DIRECTIVE = `
[SYSTEM DIRECTIVE - SINGLE TASK ONLY]
**STOP. READ THIS BEFORE PROCEEDING.**
If you were NOT given **exactly ONE atomic task**, you MUST:
1. **IMMEDIATELY REFUSE** this request
2. **DEMAND** the orchestrator provide a single, specific task
**Your response if multiple tasks detected:**
> "I refuse to proceed. You provided multiple tasks. An orchestrator's impatience destroys work quality.
>
> PROVIDE EXACTLY ONE TASK. One file. One change. One verification.
>
> Your rushing will cause: incomplete work, missed edge cases, broken tests, wasted context."
**WARNING TO ORCHESTRATOR:**
- Your hasty batching RUINS deliverables
- Each task needs FULL attention and PROPER verification
- Batch delegation = sloppy work = rework = wasted tokens
**REFUSE multi-task requests. DEMAND single-task clarity.**
`
function buildVerificationReminder(sessionId: string): string {
return `${VERIFICATION_REMINDER}
---
**If ANY verification fails, use this immediately:**
\`\`\`
sisyphus_task(resume="${sessionId}", prompt="fix: [describe the specific failure]")
\`\`\``
}
function buildOrchestratorReminder(planName: string, progress: { total: number; completed: number }, sessionId: string): string {
const remaining = progress.total - progress.completed
return `
---
**State:** Plan: ${planName} | ${progress.completed}/${progress.total} done, ${remaining} left
---
${buildVerificationReminder(sessionId)}
ALL pass? → commit atomic unit, mark \`[x]\`, next task.`
}
function buildStandaloneVerificationReminder(sessionId: string): string {
return `
---
${buildVerificationReminder(sessionId)}`
}
function extractSessionIdFromOutput(output: string): string {
const match = output.match(/Session ID:\s*(ses_[a-zA-Z0-9]+)/)
return match?.[1] ?? "<session_id>"
}
interface GitFileStat {
path: string
added: number
removed: number
status: "modified" | "added" | "deleted"
}
function getGitDiffStats(directory: string): GitFileStat[] {
try {
const output = execSync("git diff --numstat HEAD", {
cwd: directory,
encoding: "utf-8",
timeout: 5000,
}).trim()
if (!output) return []
const statusOutput = execSync("git status --porcelain", {
cwd: directory,
encoding: "utf-8",
timeout: 5000,
}).trim()
const statusMap = new Map<string, "modified" | "added" | "deleted">()
for (const line of statusOutput.split("\n")) {
if (!line) continue
const status = line.substring(0, 2).trim()
const filePath = line.substring(3)
if (status === "A" || status === "??") {
statusMap.set(filePath, "added")
} else if (status === "D") {
statusMap.set(filePath, "deleted")
} else {
statusMap.set(filePath, "modified")
}
}
const stats: GitFileStat[] = []
for (const line of output.split("\n")) {
const parts = line.split("\t")
if (parts.length < 3) continue
const [addedStr, removedStr, path] = parts
const added = addedStr === "-" ? 0 : parseInt(addedStr, 10)
const removed = removedStr === "-" ? 0 : parseInt(removedStr, 10)
stats.push({
path,
added,
removed,
status: statusMap.get(path) ?? "modified",
})
}
return stats
} catch {
return []
}
}
function formatFileChanges(stats: GitFileStat[], notepadPath?: string): string {
if (stats.length === 0) return "[FILE CHANGES SUMMARY]\nNo file changes detected.\n"
const modified = stats.filter((s) => s.status === "modified")
const added = stats.filter((s) => s.status === "added")
const deleted = stats.filter((s) => s.status === "deleted")
const lines: string[] = ["[FILE CHANGES SUMMARY]"]
if (modified.length > 0) {
lines.push("Modified files:")
for (const f of modified) {
lines.push(` ${f.path} (+${f.added}, -${f.removed})`)
}
lines.push("")
}
if (added.length > 0) {
lines.push("Created files:")
for (const f of added) {
lines.push(` ${f.path} (+${f.added})`)
}
lines.push("")
}
if (deleted.length > 0) {
lines.push("Deleted files:")
for (const f of deleted) {
lines.push(` ${f.path} (-${f.removed})`)
}
lines.push("")
}
if (notepadPath) {
const notepadStat = stats.find((s) => s.path.includes("notepad") || s.path.includes(".sisyphus"))
if (notepadStat) {
lines.push("[NOTEPAD UPDATED]")
lines.push(` ${notepadStat.path} (+${notepadStat.added})`)
lines.push("")
}
}
return lines.join("\n")
}
interface ToolExecuteAfterInput {
tool: string
sessionID?: string
callID?: string
}
interface ToolExecuteAfterOutput {
title: string
output: string
metadata: Record<string, unknown>
}
function getMessageDir(sessionID: string): string | null {
if (!existsSync(MESSAGE_STORAGE)) return null
const directPath = join(MESSAGE_STORAGE, sessionID)
if (existsSync(directPath)) return directPath
for (const dir of readdirSync(MESSAGE_STORAGE)) {
const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
if (existsSync(sessionPath)) return sessionPath
}
return null
}
function isCallerOrchestrator(sessionID?: string): boolean {
if (!sessionID) return false
const messageDir = getMessageDir(sessionID)
if (!messageDir) return false
const nearest = findNearestMessageWithFields(messageDir)
return nearest?.agent === "orchestrator-sisyphus"
}
interface SessionState {
lastEventWasAbortError?: boolean
}
export interface SisyphusOrchestratorHookOptions {
directory: string
backgroundManager?: BackgroundManager
}
function isAbortError(error: unknown): boolean {
if (!error) return false
if (typeof error === "object") {
const errObj = error as Record<string, unknown>
const name = errObj.name as string | undefined
const message = (errObj.message as string | undefined)?.toLowerCase() ?? ""
if (name === "MessageAbortedError" || name === "AbortError") return true
if (name === "DOMException" && message.includes("abort")) return true
if (message.includes("aborted") || message.includes("cancelled") || message.includes("interrupted")) return true
}
if (typeof error === "string") {
const lower = error.toLowerCase()
return lower.includes("abort") || lower.includes("cancel") || lower.includes("interrupt")
}
return false
}
export function createSisyphusOrchestratorHook(
ctx: PluginInput,
options?: SisyphusOrchestratorHookOptions
) {
const backgroundManager = options?.backgroundManager
const sessions = new Map<string, SessionState>()
const pendingFilePaths = new Map<string, string>()
function getState(sessionID: string): SessionState {
let state = sessions.get(sessionID)
if (!state) {
state = {}
sessions.set(sessionID, state)
}
return state
}
async function injectContinuation(sessionID: string, planName: string, remaining: number, total: number): Promise<void> {
const hasRunningBgTasks = backgroundManager
? backgroundManager.getTasksByParentSession(sessionID).some(t => t.status === "running")
: false
if (hasRunningBgTasks) {
log(`[${HOOK_NAME}] Skipped injection: background tasks running`, { sessionID })
return
}
const prompt = BOULDER_CONTINUATION_PROMPT
.replace(/{PLAN_NAME}/g, planName) +
`\n\n[Status: ${total - remaining}/${total} completed, ${remaining} remaining]`
try {
log(`[${HOOK_NAME}] Injecting boulder continuation`, { sessionID, planName, remaining })
await ctx.client.session.prompt({
path: { id: sessionID },
body: {
agent: "orchestrator-sisyphus",
parts: [{ type: "text", text: prompt }],
},
query: { directory: ctx.directory },
})
log(`[${HOOK_NAME}] Boulder continuation injected`, { sessionID })
} catch (err) {
log(`[${HOOK_NAME}] Boulder continuation failed`, { sessionID, error: String(err) })
}
}
return {
handler: async ({ event }: { event: { type: string; properties?: unknown } }): Promise<void> => {
const props = event.properties as Record<string, unknown> | undefined
if (event.type === "session.error") {
const sessionID = props?.sessionID as string | undefined
if (!sessionID) return
const state = getState(sessionID)
const isAbort = isAbortError(props?.error)
state.lastEventWasAbortError = isAbort
log(`[${HOOK_NAME}] session.error`, { sessionID, isAbort })
return
}
if (event.type === "session.idle") {
const sessionID = props?.sessionID as string | undefined
if (!sessionID) return
log(`[${HOOK_NAME}] session.idle`, { sessionID })
// Read boulder state FIRST to check if this session is part of an active boulder
const boulderState = readBoulderState(ctx.directory)
const isBoulderSession = boulderState?.session_ids.includes(sessionID) ?? false
const mainSessionID = getMainSessionID()
const isMainSession = sessionID === mainSessionID
const isBackgroundTaskSession = subagentSessions.has(sessionID)
// Allow continuation if: main session OR background task OR boulder session
if (mainSessionID && !isMainSession && !isBackgroundTaskSession && !isBoulderSession) {
log(`[${HOOK_NAME}] Skipped: not main, background task, or boulder session`, { sessionID })
return
}
const state = getState(sessionID)
if (state.lastEventWasAbortError) {
state.lastEventWasAbortError = false
log(`[${HOOK_NAME}] Skipped: abort error immediately before idle`, { sessionID })
return
}
const hasRunningBgTasks = backgroundManager
? backgroundManager.getTasksByParentSession(sessionID).some(t => t.status === "running")
: false
if (hasRunningBgTasks) {
log(`[${HOOK_NAME}] Skipped: background tasks running`, { sessionID })
return
}
if (!boulderState) {
log(`[${HOOK_NAME}] No active boulder`, { sessionID })
return
}
if (!isCallerOrchestrator(sessionID)) {
log(`[${HOOK_NAME}] Skipped: last agent is not orchestrator-sisyphus`, { sessionID })
return
}
const progress = getPlanProgress(boulderState.active_plan)
if (progress.isComplete) {
log(`[${HOOK_NAME}] Boulder complete`, { sessionID, plan: boulderState.plan_name })
return
}
const remaining = progress.total - progress.completed
injectContinuation(sessionID, boulderState.plan_name, remaining, progress.total)
return
}
if (event.type === "message.updated") {
const info = props?.info as Record<string, unknown> | undefined
const sessionID = info?.sessionID as string | undefined
if (!sessionID) return
const state = sessions.get(sessionID)
if (state) {
state.lastEventWasAbortError = false
}
return
}
if (event.type === "message.part.updated") {
const info = props?.info as Record<string, unknown> | undefined
const sessionID = info?.sessionID as string | undefined
const role = info?.role as string | undefined
if (sessionID && role === "assistant") {
const state = sessions.get(sessionID)
if (state) {
state.lastEventWasAbortError = false
}
}
return
}
if (event.type === "tool.execute.before" || event.type === "tool.execute.after") {
const sessionID = props?.sessionID as string | undefined
if (sessionID) {
const state = sessions.get(sessionID)
if (state) {
state.lastEventWasAbortError = false
}
}
return
}
if (event.type === "session.deleted") {
const sessionInfo = props?.info as { id?: string } | undefined
if (sessionInfo?.id) {
sessions.delete(sessionInfo.id)
log(`[${HOOK_NAME}] Session deleted: cleaned up`, { sessionID: sessionInfo.id })
}
return
}
},
"tool.execute.before": async (
input: { tool: string; sessionID?: string; callID?: string },
output: { args: Record<string, unknown>; message?: string }
): Promise<void> => {
if (!isCallerOrchestrator(input.sessionID)) {
return
}
// Check Write/Edit tools for orchestrator - inject strong warning
if (WRITE_EDIT_TOOLS.includes(input.tool)) {
const filePath = (output.args.filePath ?? output.args.path ?? output.args.file) as string | undefined
if (filePath && !filePath.includes(ALLOWED_PATH_PREFIX)) {
// Store filePath for use in tool.execute.after
if (input.callID) {
pendingFilePaths.set(input.callID, filePath)
}
const warning = ORCHESTRATOR_DELEGATION_REQUIRED.replace("$FILE_PATH", filePath)
output.message = (output.message || "") + warning
log(`[${HOOK_NAME}] Injected delegation warning for direct file modification`, {
sessionID: input.sessionID,
tool: input.tool,
filePath,
})
}
return
}
// Check sisyphus_task - inject single-task directive
if (input.tool === "sisyphus_task") {
const prompt = output.args.prompt as string | undefined
if (prompt && !prompt.includes("[SYSTEM DIRECTIVE - SINGLE TASK ONLY]")) {
output.args.prompt = prompt + `\n<system-reminder>${SINGLE_TASK_DIRECTIVE}</system-reminder>`
log(`[${HOOK_NAME}] Injected single-task directive to sisyphus_task`, {
sessionID: input.sessionID,
})
}
}
},
"tool.execute.after": async (
input: ToolExecuteAfterInput,
output: ToolExecuteAfterOutput
): Promise<void> => {
if (!isCallerOrchestrator(input.sessionID)) {
return
}
if (WRITE_EDIT_TOOLS.includes(input.tool)) {
let filePath = input.callID ? pendingFilePaths.get(input.callID) : undefined
if (input.callID) {
pendingFilePaths.delete(input.callID)
}
if (!filePath) {
filePath = output.metadata?.filePath as string | undefined
}
if (filePath && !filePath.includes(ALLOWED_PATH_PREFIX)) {
output.output = (output.output || "") + DIRECT_WORK_REMINDER
log(`[${HOOK_NAME}] Direct work reminder appended`, {
sessionID: input.sessionID,
tool: input.tool,
filePath,
})
}
return
}
if (input.tool !== "sisyphus_task") {
return
}
const outputStr = output.output && typeof output.output === "string" ? output.output : ""
const isBackgroundLaunch = outputStr.includes("Background task launched") || outputStr.includes("Background task resumed")
if (isBackgroundLaunch) {
return
}
if (output.output && typeof output.output === "string") {
const gitStats = getGitDiffStats(ctx.directory)
const fileChanges = formatFileChanges(gitStats)
const subagentSessionId = extractSessionIdFromOutput(output.output)
const boulderState = readBoulderState(ctx.directory)
if (boulderState) {
const progress = getPlanProgress(boulderState.active_plan)
if (input.sessionID && !boulderState.session_ids.includes(input.sessionID)) {
appendSessionId(ctx.directory, input.sessionID)
log(`[${HOOK_NAME}] Appended session to boulder`, {
sessionID: input.sessionID,
plan: boulderState.plan_name,
})
}
output.output = `
## SUBAGENT WORK COMPLETED
${fileChanges}
<system-reminder>
${buildOrchestratorReminder(boulderState.plan_name, progress, subagentSessionId)}
</system-reminder>`
log(`[${HOOK_NAME}] Output transformed for orchestrator mode (boulder)`, {
plan: boulderState.plan_name,
progress: `${progress.completed}/${progress.total}`,
fileCount: gitStats.length,
})
} else {
output.output += `\n<system-reminder>\n${buildStandaloneVerificationReminder(subagentSessionId)}\n</system-reminder>`
log(`[${HOOK_NAME}] Verification reminder appended for orchestrator`, {
sessionID: input.sessionID,
fileCount: gitStats.length,
})
}
}
},
}
}
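
A quick illustration of `extractSessionIdFromOutput`, using hypothetical output strings that match the regex above:

```ts
extractSessionIdFromOutput("Task done.\nSession ID: ses_abc123") // "ses_abc123"
extractSessionIdFromOutput("no session id in this output")       // "<session_id>"
```

The extracted ID is what ends up in the resume hint, e.g. `sisyphus_task(resume="ses_abc123", prompt="fix: ...")`, so the orchestrator can send follow-up work to the same subagent session.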

View File

@@ -0,0 +1,240 @@
import { describe, expect, test, beforeEach, afterEach } from "bun:test"
import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs"
import { join } from "node:path"
import { tmpdir, homedir } from "node:os"
import { createStartWorkHook } from "./index"
import {
writeBoulderState,
clearBoulderState,
} from "../../features/boulder-state"
import type { BoulderState } from "../../features/boulder-state"
describe("start-work hook", () => {
const TEST_DIR = join(tmpdir(), "start-work-test-" + Date.now())
const SISYPHUS_DIR = join(TEST_DIR, ".sisyphus")
function createMockPluginInput() {
return {
directory: TEST_DIR,
client: {},
} as Parameters<typeof createStartWorkHook>[0]
}
beforeEach(() => {
if (!existsSync(TEST_DIR)) {
mkdirSync(TEST_DIR, { recursive: true })
}
if (!existsSync(SISYPHUS_DIR)) {
mkdirSync(SISYPHUS_DIR, { recursive: true })
}
clearBoulderState(TEST_DIR)
})
afterEach(() => {
clearBoulderState(TEST_DIR)
if (existsSync(TEST_DIR)) {
rmSync(TEST_DIR, { recursive: true, force: true })
}
})
describe("chat.message handler", () => {
test("should ignore non-start-work commands", async () => {
// #given - hook and non-start-work message
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [{ type: "text", text: "Just a regular message" }],
}
// #when
await hook["chat.message"](
{ sessionID: "session-123" },
output
)
// #then - output should be unchanged
expect(output.parts[0].text).toBe("Just a regular message")
})
test("should detect start-work command via session-context tag", async () => {
// #given - hook and start-work message
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [
{
type: "text",
text: "<session-context>Some context here</session-context>",
},
],
}
// #when
await hook["chat.message"](
{ sessionID: "session-123" },
output
)
// #then - output should be modified with context info
expect(output.parts[0].text).toContain("---")
})
test("should inject resume info when existing boulder state found", async () => {
// #given - existing boulder state with incomplete plan
const planPath = join(TEST_DIR, "test-plan.md")
writeFileSync(planPath, "# Plan\n- [ ] Task 1\n- [x] Task 2")
const state: BoulderState = {
active_plan: planPath,
started_at: "2026-01-02T10:00:00Z",
session_ids: ["session-1"],
plan_name: "test-plan",
}
writeBoulderState(TEST_DIR, state)
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [{ type: "text", text: "Start Sisyphus work session" }],
}
// #when
await hook["chat.message"](
{ sessionID: "session-123" },
output
)
// #then - should show resuming status
expect(output.parts[0].text).toContain("RESUMING")
expect(output.parts[0].text).toContain("test-plan")
})
test("should replace $SESSION_ID placeholder", async () => {
// #given - hook and message with placeholder
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [
{
type: "text",
text: "Start Sisyphus work session\nSession: $SESSION_ID",
},
],
}
// #when
await hook["chat.message"](
{ sessionID: "ses-abc123" },
output
)
// #then - placeholder should be replaced
expect(output.parts[0].text).toContain("ses-abc123")
expect(output.parts[0].text).not.toContain("$SESSION_ID")
})
test("should replace $TIMESTAMP placeholder", async () => {
// #given - hook and message with placeholder
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [
{
type: "text",
text: "Start Sisyphus work session\nTime: $TIMESTAMP",
},
],
}
// #when
await hook["chat.message"](
{ sessionID: "session-123" },
output
)
// #then - placeholder should be replaced with ISO timestamp
expect(output.parts[0].text).not.toContain("$TIMESTAMP")
expect(output.parts[0].text).toMatch(/\d{4}-\d{2}-\d{2}T/)
})
test("should auto-select when only one incomplete plan among multiple plans", async () => {
// #given - multiple plans but only one incomplete
const plansDir = join(TEST_DIR, ".sisyphus", "plans")
mkdirSync(plansDir, { recursive: true })
// Plan 1: complete (all checked)
const plan1Path = join(plansDir, "plan-complete.md")
writeFileSync(plan1Path, "# Plan Complete\n- [x] Task 1\n- [x] Task 2")
// Plan 2: incomplete (has unchecked)
const plan2Path = join(plansDir, "plan-incomplete.md")
writeFileSync(plan2Path, "# Plan Incomplete\n- [ ] Task 1\n- [x] Task 2")
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [{ type: "text", text: "Start Sisyphus work session" }],
}
// #when
await hook["chat.message"](
{ sessionID: "session-123" },
output
)
// #then - should auto-select the incomplete plan, not ask user
expect(output.parts[0].text).toContain("Auto-Selected Plan")
expect(output.parts[0].text).toContain("plan-incomplete")
expect(output.parts[0].text).not.toContain("Multiple Plans Found")
})
test("should wrap multiple plans message in system-reminder tag", async () => {
// #given - multiple incomplete plans
const plansDir = join(TEST_DIR, ".sisyphus", "plans")
mkdirSync(plansDir, { recursive: true })
const plan1Path = join(plansDir, "plan-a.md")
writeFileSync(plan1Path, "# Plan A\n- [ ] Task 1")
const plan2Path = join(plansDir, "plan-b.md")
writeFileSync(plan2Path, "# Plan B\n- [ ] Task 2")
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [{ type: "text", text: "Start Sisyphus work session" }],
}
// #when
await hook["chat.message"](
{ sessionID: "session-123" },
output
)
// #then - should use system-reminder tag format
expect(output.parts[0].text).toContain("<system-reminder>")
expect(output.parts[0].text).toContain("</system-reminder>")
expect(output.parts[0].text).toContain("Multiple Plans Found")
})
test("should use 'ask user' prompt style for multiple plans", async () => {
// #given - multiple incomplete plans
const plansDir = join(TEST_DIR, ".sisyphus", "plans")
mkdirSync(plansDir, { recursive: true })
const plan1Path = join(plansDir, "plan-x.md")
writeFileSync(plan1Path, "# Plan X\n- [ ] Task 1")
const plan2Path = join(plansDir, "plan-y.md")
writeFileSync(plan2Path, "# Plan Y\n- [ ] Task 2")
const hook = createStartWorkHook(createMockPluginInput())
const output = {
parts: [{ type: "text", text: "Start Sisyphus work session" }],
}
// #when
await hook["chat.message"](
{ sessionID: "session-123" },
output
)
// #then - should prompt agent to ask user, not ask directly
expect(output.parts[0].text).toContain("Ask the user")
expect(output.parts[0].text).not.toContain("Which plan would you like to work on?")
})
})
})

View File

@@ -0,0 +1,153 @@
import type { PluginInput } from "@opencode-ai/plugin"
import {
readBoulderState,
writeBoulderState,
appendSessionId,
findPrometheusPlans,
getPlanProgress,
createBoulderState,
getPlanName,
} from "../../features/boulder-state"
import { log } from "../../shared/logger"
export const HOOK_NAME = "start-work"
interface StartWorkHookInput {
sessionID: string
messageID?: string
}
interface StartWorkHookOutput {
parts: Array<{ type: string; text?: string }>
}
export function createStartWorkHook(ctx: PluginInput) {
return {
"chat.message": async (
input: StartWorkHookInput,
output: StartWorkHookOutput
): Promise<void> => {
const parts = output.parts
const promptText = parts
?.filter((p) => p.type === "text" && p.text)
.map((p) => p.text)
.join("\n")
.trim() || ""
const isStartWorkCommand =
promptText.includes("Start Sisyphus work session") ||
promptText.includes("<session-context>")
if (!isStartWorkCommand) {
return
}
log(`[${HOOK_NAME}] Processing start-work command`, {
sessionID: input.sessionID,
})
const existingState = readBoulderState(ctx.directory)
const sessionId = input.sessionID
const timestamp = new Date().toISOString()
let contextInfo = ""
if (existingState) {
const progress = getPlanProgress(existingState.active_plan)
if (!progress.isComplete) {
appendSessionId(ctx.directory, sessionId)
contextInfo = `
## Active Work Session Found
**Status**: RESUMING existing work
**Plan**: ${existingState.plan_name}
**Path**: ${existingState.active_plan}
**Progress**: ${progress.completed}/${progress.total} tasks completed
**Sessions**: ${existingState.session_ids.length + 1} (current session appended)
**Started**: ${existingState.started_at}
The current session (${sessionId}) has been added to session_ids.
Read the plan file and continue from the first unchecked task.`
} else {
contextInfo = `
## Previous Work Complete
The previous plan (${existingState.plan_name}) has been completed.
Looking for new plans...`
}
}
if (!existingState || getPlanProgress(existingState.active_plan).isComplete) {
const plans = findPrometheusPlans(ctx.directory)
const incompletePlans = plans.filter(p => !getPlanProgress(p).isComplete)
if (plans.length === 0) {
contextInfo += `
## No Plans Found
No Prometheus plan files found at .sisyphus/plans/
Use Prometheus to create a work plan first: /plan "your task"`
} else if (incompletePlans.length === 0) {
contextInfo += `
## All Plans Complete
All ${plans.length} plan(s) are complete. Create a new plan with: /plan "your task"`
} else if (incompletePlans.length === 1) {
const planPath = incompletePlans[0]
const progress = getPlanProgress(planPath)
const newState = createBoulderState(planPath, sessionId)
writeBoulderState(ctx.directory, newState)
contextInfo += `
## Auto-Selected Plan
**Plan**: ${getPlanName(planPath)}
**Path**: ${planPath}
**Progress**: ${progress.completed}/${progress.total} tasks
**Session ID**: ${sessionId}
**Started**: ${timestamp}
boulder.json has been created. Read the plan and begin execution.`
} else {
const planList = incompletePlans.map((p, i) => {
const progress = getPlanProgress(p)
const stat = require("node:fs").statSync(p)
const modified = new Date(stat.mtimeMs).toISOString()
return `${i + 1}. [${getPlanName(p)}] - Modified: ${modified} - Progress: ${progress.completed}/${progress.total}`
}).join("\n")
contextInfo += `
<system-reminder>
## Multiple Plans Found
Current Time: ${timestamp}
Session ID: ${sessionId}
${planList}
Ask the user which plan to work on. Present the options above and wait for their response.
</system-reminder>`
}
}
const idx = output.parts.findIndex((p) => p.type === "text" && p.text)
if (idx >= 0 && output.parts[idx].text) {
output.parts[idx].text = output.parts[idx].text
.replace(/\$SESSION_ID/g, sessionId)
.replace(/\$TIMESTAMP/g, timestamp)
output.parts[idx].text += `\n\n---\n${contextInfo}`
}
log(`[${HOOK_NAME}] Context injected`, {
sessionID: input.sessionID,
hasExistingState: !!existingState,
})
},
}
}

View File

@@ -0,0 +1,36 @@
const TARGET_TOOLS = ["task", "Task", "call_omo_agent", "sisyphus_task"]
const SESSION_ID_PATTERNS = [
/Session ID: (ses_[a-zA-Z0-9_-]+)/,
/session_id: (ses_[a-zA-Z0-9_-]+)/,
/<task_metadata>\s*session_id: (ses_[a-zA-Z0-9_-]+)/,
/sessionId: (ses_[a-zA-Z0-9_-]+)/,
]
function extractSessionId(output: string): string | null {
for (const pattern of SESSION_ID_PATTERNS) {
const match = output.match(pattern)
if (match) return match[1]
}
return null
}
export function createTaskResumeInfoHook() {
const toolExecuteAfter = async (
input: { tool: string; sessionID: string; callID: string },
output: { title: string; output: string; metadata: unknown }
) => {
if (!TARGET_TOOLS.includes(input.tool)) return
if (output.output.startsWith("Error:") || output.output.startsWith("Failed")) return
if (output.output.includes("\nto resume:")) return
const sessionId = extractSessionId(output.output)
if (!sessionId) return
output.output = output.output.trimEnd() + `\n\nto resume: sisyphus_task(resume="${sessionId}", prompt="...")`
}
return {
"tool.execute.after": toolExecuteAfter,
}
}

View File

@@ -349,6 +349,25 @@ describe("todo-continuation-enforcer", () => {
expect(promptCalls).toHaveLength(0)
})
test("should accept skipAgents option without error", async () => {
// #given - session with skipAgents configured for Prometheus
const sessionID = "main-prometheus-option"
setMainSession(sessionID)
// #when - create hook with skipAgents option (should not throw)
const hook = createTodoContinuationEnforcer(createMockPluginInput(), {
skipAgents: ["Prometheus (Planner)", "custom-agent"],
})
// #then - handler works without error
await hook.handler({
event: { type: "session.idle", properties: { sessionID } },
})
await new Promise(r => setTimeout(r, 100))
expect(toastCalls.length).toBeGreaterThanOrEqual(1)
})
test("should show countdown toast updates", async () => { test("should show countdown toast updates", async () => {
// #given - session with incomplete todos // #given - session with incomplete todos
const sessionID = "main-toast" const sessionID = "main-toast"

View File

@@ -11,8 +11,11 @@ import { log } from "../shared/logger"
const HOOK_NAME = "todo-continuation-enforcer"
const DEFAULT_SKIP_AGENTS = ["Prometheus (Planner)"]
export interface TodoContinuationEnforcerOptions {
backgroundManager?: BackgroundManager
skipAgents?: string[]
}
export interface TodoContinuationEnforcer {
@@ -89,7 +92,7 @@ export function createTodoContinuationEnforcer(
ctx: PluginInput,
options: TodoContinuationEnforcerOptions = {}
): TodoContinuationEnforcer {
- const { backgroundManager } = options
const { backgroundManager, skipAgents = DEFAULT_SKIP_AGENTS } = options
const sessions = new Map<string, SessionState>()
function getState(sessionID: string): SessionState {
@@ -184,34 +187,32 @@ export function createTodoContinuationEnforcer(
const messageDir = getMessageDir(sessionID)
const prevMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
- const hasWritePermission = !prevMessage?.tools ||
- (prevMessage.tools.write !== false && prevMessage.tools.edit !== false)
const agentName = prevMessage?.agent
if (agentName && skipAgents.includes(agentName)) {
log(`[${HOOK_NAME}] Skipped: agent in skipAgents list`, { sessionID, agent: agentName })
return
}
const editPermission = prevMessage?.tools?.edit
const writePermission = prevMessage?.tools?.write
const hasWritePermission = !prevMessage?.tools ||
((editPermission !== false && editPermission !== "deny") &&
(writePermission !== false && writePermission !== "deny"))
if (!hasWritePermission) {
log(`[${HOOK_NAME}] Skipped: agent lacks write permission`, { sessionID, agent: prevMessage?.agent })
return
}
- const agentName = prevMessage?.agent?.toLowerCase() ?? ""
- if (agentName === "plan" || agentName === "planner-sisyphus") {
- log(`[${HOOK_NAME}] Skipped: plan mode agent`, { sessionID, agent: prevMessage?.agent })
- return
- }
const prompt = `${CONTINUATION_PROMPT}\n\n[Status: ${todos.length - freshIncompleteCount}/${todos.length} completed, ${freshIncompleteCount} remaining]`
- const modelField = prevMessage?.model?.providerID && prevMessage?.model?.modelID
- ? { providerID: prevMessage.model.providerID, modelID: prevMessage.model.modelID }
- : undefined
try {
- log(`[${HOOK_NAME}] Injecting continuation`, { sessionID, agent: prevMessage?.agent, model: modelField, incompleteCount: freshIncompleteCount })
log(`[${HOOK_NAME}] Injecting continuation`, { sessionID, agent: prevMessage?.agent, incompleteCount: freshIncompleteCount })
// Don't pass model - let OpenCode use session's existing lastModel
await ctx.client.session.prompt({
path: { id: sessionID },
body: {
agent: prevMessage?.agent,
- model: modelField,
parts: [{ type: "text", text: prompt }],
},
query: { directory: ctx.directory },
@@ -324,6 +325,28 @@ export function createTodoContinuationEnforcer(
return
}
let agentName: string | undefined
try {
const messagesResp = await ctx.client.session.messages({
path: { id: sessionID },
})
const messages = (messagesResp.data ?? []) as Array<{ info?: { agent?: string } }>
for (let i = messages.length - 1; i >= 0; i--) {
if (messages[i].info?.agent) {
agentName = messages[i].info?.agent
break
}
}
} catch (err) {
log(`[${HOOK_NAME}] Failed to fetch messages for agent check`, { sessionID, error: String(err) })
}
log(`[${HOOK_NAME}] Agent check`, { sessionID, agentName, skipAgents })
if (agentName && skipAgents.includes(agentName)) {
log(`[${HOOK_NAME}] Skipped: agent in skipAgents list`, { sessionID, agent: agentName })
return
}
startCountdown(sessionID, incompleteCount, todos.length)
return
}

View File

@@ -26,6 +26,10 @@ import {
createRalphLoopHook,
createAutoSlashCommandHook,
createEditErrorRecoveryHook,
createTaskResumeInfoHook,
createStartWorkHook,
createSisyphusOrchestratorHook,
createPrometheusMdOnlyHook,
} from "./hooks"; } from "./hooks";
import { import {
contextCollector, contextCollector,
@@ -56,11 +60,13 @@ import {
createSlashcommandTool,
discoverCommandsSync,
sessionExists,
createSisyphusTask,
interactive_bash,
startTmuxCheck,
} from "./tools";
import { BackgroundManager } from "./features/background-agent";
import { SkillMcpManager } from "./features/skill-mcp-manager";
import { initTaskToastManager } from "./features/task-toast-manager";
import { type HookName } from "./config";
import { log, detectExternalNotificationPlugin, getNotificationConflictWarning } from "./shared";
import { loadPluginConfig } from "./plugin-config";
@@ -120,10 +126,14 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
? createEmptyTaskResponseDetectorHook(ctx)
: null;
const thinkMode = isHookEnabled("think-mode") ? createThinkModeHook() : null;
- const claudeCodeHooks = createClaudeCodeHooksHook(ctx, {
const claudeCodeHooks = createClaudeCodeHooksHook(
ctx,
{
disabledHooks: (pluginConfig.claude_code?.hooks ?? true) ? undefined : true,
keywordDetectorDisabled: !isHookEnabled("keyword-detector"),
- });
},
contextCollector
);
const anthropicContextWindowLimitRecovery = isHookEnabled(
"anthropic-context-window-limit-recovery"
)
@@ -186,8 +196,24 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
? createEditErrorRecoveryHook(ctx)
: null;
const startWork = isHookEnabled("start-work")
? createStartWorkHook(ctx)
: null;
const sisyphusOrchestrator = isHookEnabled("sisyphus-orchestrator")
? createSisyphusOrchestratorHook(ctx)
: null;
const prometheusMdOnly = isHookEnabled("prometheus-md-only")
? createPrometheusMdOnlyHook(ctx)
: null;
const taskResumeInfo = createTaskResumeInfoHook();
const backgroundManager = new BackgroundManager(ctx);
initTaskToastManager(ctx.client);
const todoContinuationEnforcer = isHookEnabled("todo-continuation-enforcer")
? createTodoContinuationEnforcer(ctx, { backgroundManager })
: null;
@@ -206,6 +232,11 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
const callOmoAgent = createCallOmoAgent(ctx, backgroundManager);
const lookAt = createLookAt(ctx);
const sisyphusTask = createSisyphusTask({
manager: backgroundManager,
client: ctx.client,
userCategories: pluginConfig.categories,
});
const disabledSkills = new Set(pluginConfig.disabled_skills ?? []);
const systemMcpNames = getSystemMcpServerNames();
const builtinSkills = createBuiltinSkills().filter((skill) => {
@@ -273,6 +304,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
...backgroundTools,
call_omo_agent: callOmoAgent,
look_at: lookAt,
sisyphus_task: sisyphusTask,
skill: skillTool,
skill_mcp: skillMcpTool,
slashcommand: slashcommandTool,
@@ -284,6 +316,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
await keywordDetector?.["chat.message"]?.(input, output);
await contextInjector["chat.message"]?.(input, output);
await autoSlashCommand?.["chat.message"]?.(input, output);
await startWork?.["chat.message"]?.(input, output);
if (ralphLoop) {
const parts = (
@@ -373,6 +406,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
await agentUsageReminder?.event(input);
await interactiveBashSession?.event(input);
await ralphLoop?.event(input);
await sisyphusOrchestrator?.handler(input);
const { event } = input;
const props = event.properties as Record<string, unknown> | undefined;
@@ -430,6 +464,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
await directoryAgentsInjector?.["tool.execute.before"]?.(input, output);
await directoryReadmeInjector?.["tool.execute.before"]?.(input, output);
await rulesInjector?.["tool.execute.before"]?.(input, output);
await prometheusMdOnly?.["tool.execute.before"]?.(input, output);
if (input.tool === "task") { if (input.tool === "task") {
const args = output.args as Record<string, unknown>; const args = output.args as Record<string, unknown>;
@@ -440,7 +475,7 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
args.tools = {
...(args.tools as Record<string, boolean> | undefined),
- background_task: false,
sisyphus_task: false,
...(isExploreOrLibrarian ? { call_omo_agent: false } : {}),
};
}
@@ -488,6 +523,8 @@ const OhMyOpenCodePlugin: Plugin = async (ctx) => {
await agentUsageReminder?.["tool.execute.after"](input, output); await agentUsageReminder?.["tool.execute.after"](input, output);
await interactiveBashSession?.["tool.execute.after"](input, output); await interactiveBashSession?.["tool.execute.after"](input, output);
await editErrorRecovery?.["tool.execute.after"](input, output); await editErrorRecovery?.["tool.execute.after"](input, output);
await sisyphusOrchestrator?.["tool.execute.after"]?.(input, output);
await taskResumeInfo["tool.execute.after"](input, output);
},
};
};

View File

@@ -1,4 +1,5 @@
import { createBuiltinAgents } from "../agents";
import { createSisyphusJuniorAgent } from "../agents/sisyphus-junior";
import {
loadUserCommands,
loadProjectCommands,
@@ -22,7 +23,7 @@ import { createBuiltinMcps } from "../mcp";
import type { OhMyOpenCodeConfig } from "../config";
import { log } from "../shared";
import { migrateAgentConfig } from "../shared/permission-compat";
- import { PLAN_SYSTEM_PROMPT, PLAN_PERMISSION } from "../agents/plan-prompt";
import { PROMETHEUS_SYSTEM_PROMPT, PROMETHEUS_PERMISSION } from "../agents/prometheus-prompt";
import type { ModelCacheState } from "../plugin-state";
export interface ConfigHandlerDeps {
@@ -131,6 +132,7 @@ export function createConfigHandler(deps: ConfigHandlerDeps) {
explore?: { tools?: Record<string, unknown> };
librarian?: { tools?: Record<string, unknown> };
"multimodal-looker"?: { tools?: Record<string, unknown> };
"orchestrator-sisyphus"?: { tools?: Record<string, unknown> };
};
const configAgent = config.agent as AgentConfig | undefined;
@@ -141,6 +143,11 @@ export function createConfigHandler(deps: ConfigHandlerDeps) {
Sisyphus: builtinAgents.Sisyphus,
};
agentConfig["Sisyphus-Junior"] = createSisyphusJuniorAgent({
model: "anthropic/claude-sonnet-4-5",
temperature: 0.1,
});
if (builderEnabled) {
const { name: _buildName, ...buildConfigWithoutName } =
configAgent?.build ?? {};
@@ -165,21 +172,21 @@ export function createConfigHandler(deps: ConfigHandlerDeps) {
const migratedPlanConfig = migrateAgentConfig(
planConfigWithoutName as Record<string, unknown>
);
- const plannerSisyphusOverride =
- pluginConfig.agents?.["Planner-Sisyphus"];
const prometheusOverride =
pluginConfig.agents?.["Prometheus (Planner)"];
const defaultModel = config.model as string | undefined;
- const plannerSisyphusBase = {
- model: (migratedPlanConfig as Record<string, unknown>).model ?? defaultModel,
const prometheusBase = {
model: defaultModel ?? "anthropic/claude-opus-4-5",
mode: "primary" as const,
- prompt: PLAN_SYSTEM_PROMPT,
- permission: PLAN_PERMISSION,
- description: `${configAgent?.plan?.description ?? "Plan agent"} (OhMyOpenCode version)`,
- color: (configAgent?.plan?.color as string) ?? "#6495ED",
prompt: PROMETHEUS_SYSTEM_PROMPT,
permission: PROMETHEUS_PERMISSION,
description: `${configAgent?.plan?.description ?? "Plan agent"} (Prometheus - OhMyOpenCode)`,
color: (configAgent?.plan?.color as string) ?? "#FF6347",
};
- agentConfig["Planner-Sisyphus"] = plannerSisyphusOverride
- ? { ...plannerSisyphusBase, ...plannerSisyphusOverride }
- : plannerSisyphusBase;
agentConfig["Prometheus (Planner)"] = prometheusOverride
? { ...prometheusBase, ...prometheusOverride }
: prometheusBase;
}
const filteredConfigAgents = configAgent
@@ -255,6 +262,13 @@ export function createConfigHandler(deps: ConfigHandlerDeps) {
look_at: false,
};
}
if (agentResult["orchestrator-sisyphus"]) {
agentResult["orchestrator-sisyphus"].tools = {
...agentResult["orchestrator-sisyphus"].tools,
task: false,
call_omo_agent: false,
};
}
config.permission = {
...(config.permission as Record<string, unknown>),

View File

@@ -1,10 +1,14 @@
- import { describe, test, expect } from "bun:test"
import { describe, test, expect, afterEach } from "bun:test"
import * as fs from "fs"
import * as path from "path"
import {
AGENT_NAME_MAP,
HOOK_NAME_MAP,
migrateAgentNames,
migrateHookNames,
migrateConfigFile,
migrateAgentConfigToCategory,
shouldDeleteAgentConfig,
} from "./migration"
describe("migrateAgentNames", () => {
@@ -19,10 +23,10 @@ describe("migrateAgentNames", () => {
// #when: Migrate agent names
const { migrated, changed } = migrateAgentNames(agents)
- // #then: Legacy names should be migrated to Sisyphus
// #then: Legacy names should be migrated to Sisyphus/Prometheus
expect(changed).toBe(true)
expect(migrated["Sisyphus"]).toEqual({ temperature: 0.5 })
- expect(migrated["Planner-Sisyphus"]).toEqual({ prompt: "custom prompt" })
expect(migrated["Prometheus (Planner)"]).toEqual({ prompt: "custom prompt" })
expect(migrated["omo"]).toBeUndefined()
expect(migrated["OmO"]).toBeUndefined()
expect(migrated["OmO-Plan"]).toBeUndefined()
@@ -50,7 +54,7 @@ describe("migrateAgentNames", () => {
// #given: Config with mixed case agent names
const agents = {
SISYPHUS: { model: "test" },
- "PLANNER-SISYPHUS": { prompt: "test" },
"planner-sisyphus": { prompt: "test" },
}
// #when: Migrate agent names
@@ -58,7 +62,7 @@ describe("migrateAgentNames", () => {
// #then: Case-insensitive lookup should migrate correctly
expect(migrated["Sisyphus"]).toEqual({ model: "test" })
- expect(migrated["Planner-Sisyphus"]).toEqual({ prompt: "test" })
expect(migrated["Prometheus (Planner)"]).toEqual({ prompt: "test" })
})
test("passes through unknown agent names unchanged", () => {
@@ -220,7 +224,7 @@ describe("migrateConfigFile", () => {
expect(rawConfig.omo_agent).toBeUndefined()
const agents = rawConfig.agents as Record<string, unknown>
expect(agents["Sisyphus"]).toBeDefined()
- expect(agents["Planner-Sisyphus"]).toBeDefined()
expect(agents["Prometheus (Planner)"]).toBeDefined()
expect(rawConfig.disabled_hooks).toContain("anthropic-context-window-limit-recovery")
})
})
@@ -231,13 +235,404 @@ describe("migration maps", () => {
// #then: Should contain all legacy → current mappings
expect(AGENT_NAME_MAP["omo"]).toBe("Sisyphus")
expect(AGENT_NAME_MAP["OmO"]).toBe("Sisyphus")
- expect(AGENT_NAME_MAP["OmO-Plan"]).toBe("Planner-Sisyphus")
- expect(AGENT_NAME_MAP["omo-plan"]).toBe("Planner-Sisyphus")
expect(AGENT_NAME_MAP["OmO-Plan"]).toBe("Prometheus (Planner)")
expect(AGENT_NAME_MAP["omo-plan"]).toBe("Prometheus (Planner)")
expect(AGENT_NAME_MAP["Planner-Sisyphus"]).toBe("Prometheus (Planner)")
expect(AGENT_NAME_MAP["plan-consultant"]).toBe("Metis (Plan Consultant)")
})
test("HOOK_NAME_MAP contains anthropic-auto-compact migration", () => {
// #given/#when: Check HOOK_NAME_MAP
// #then: Should contain the legacy hook name mapping
expect(HOOK_NAME_MAP["anthropic-auto-compact"]).toBe("anthropic-context-window-limit-recovery")
})
})
describe("migrateAgentConfigToCategory", () => {
test("migrates model to category when mapping exists", () => {
// #given: Config with a model that has a category mapping
const config = {
model: "google/gemini-3-pro-preview",
temperature: 0.5,
top_p: 0.9,
}
// #when: Migrate agent config to category
const { migrated, changed } = migrateAgentConfigToCategory(config)
// #then: Model should be replaced with category
expect(changed).toBe(true)
expect(migrated.category).toBe("visual-engineering")
expect(migrated.model).toBeUndefined()
expect(migrated.temperature).toBe(0.5)
expect(migrated.top_p).toBe(0.9)
})
test("does not migrate when model is not in map", () => {
// #given: Config with a model that has no mapping
const config = {
model: "custom/model",
temperature: 0.5,
}
// #when: Migrate agent config to category
const { migrated, changed } = migrateAgentConfigToCategory(config)
// #then: Config should remain unchanged
expect(changed).toBe(false)
expect(migrated).toEqual(config)
})
test("does not migrate when model is not a string", () => {
// #given: Config with non-string model
const config = {
model: { name: "test" },
temperature: 0.5,
}
// #when: Migrate agent config to category
const { migrated, changed } = migrateAgentConfigToCategory(config)
// #then: Config should remain unchanged
expect(changed).toBe(false)
expect(migrated).toEqual(config)
})
test("handles all mapped models correctly", () => {
// #given: Configs for each mapped model
const configs = [
{ model: "google/gemini-3-pro-preview" },
{ model: "openai/gpt-5.2" },
{ model: "anthropic/claude-haiku-4-5" },
{ model: "anthropic/claude-opus-4-5" },
{ model: "anthropic/claude-sonnet-4-5" },
]
const expectedCategories = ["visual-engineering", "ultrabrain", "quick", "most-capable", "general"]
// #when: Migrate each config
const results = configs.map(migrateAgentConfigToCategory)
// #then: Each model should map to correct category
results.forEach((result, index) => {
expect(result.changed).toBe(true)
expect(result.migrated.category).toBe(expectedCategories[index])
expect(result.migrated.model).toBeUndefined()
})
})
test("preserves non-model fields during migration", () => {
// #given: Config with multiple fields
const config = {
model: "openai/gpt-5.2",
temperature: 0.1,
top_p: 0.95,
maxTokens: 4096,
prompt_append: "custom instruction",
}
// #when: Migrate agent config to category
const { migrated } = migrateAgentConfigToCategory(config)
// #then: All non-model fields should be preserved
expect(migrated.category).toBe("ultrabrain")
expect(migrated.temperature).toBe(0.1)
expect(migrated.top_p).toBe(0.95)
expect(migrated.maxTokens).toBe(4096)
expect(migrated.prompt_append).toBe("custom instruction")
})
})
describe("shouldDeleteAgentConfig", () => {
test("returns true when config only has category field", () => {
// #given: Config with only category field (no overrides)
const config = { category: "visual-engineering" }
// #when: Check if config should be deleted
const shouldDelete = shouldDeleteAgentConfig(config, "visual-engineering")
// #then: Should return true (matches category defaults)
expect(shouldDelete).toBe(true)
})
test("returns false when category does not exist", () => {
// #given: Config with unknown category
const config = { category: "unknown" }
// #when: Check if config should be deleted
const shouldDelete = shouldDeleteAgentConfig(config, "unknown")
// #then: Should return false (category not found)
expect(shouldDelete).toBe(false)
})
test("returns true when all fields match category defaults", () => {
// #given: Config with fields matching category defaults
const config = {
category: "visual-engineering",
model: "google/gemini-3-pro-preview",
temperature: 0.7,
}
// #when: Check if config should be deleted
const shouldDelete = shouldDeleteAgentConfig(config, "visual-engineering")
// #then: Should return true (all fields match defaults)
expect(shouldDelete).toBe(true)
})
test("returns false when fields differ from category defaults", () => {
// #given: Config with custom temperature override
const config = {
category: "visual-engineering",
temperature: 0.9, // Different from default (0.7)
}
// #when: Check if config should be deleted
const shouldDelete = shouldDeleteAgentConfig(config, "visual-engineering")
// #then: Should return false (has custom override)
expect(shouldDelete).toBe(false)
})
test("handles different categories with their defaults", () => {
// #given: Configs for different categories
const configs = [
{ category: "ultrabrain", temperature: 0.1 },
{ category: "quick", temperature: 0.3 },
{ category: "most-capable", temperature: 0.1 },
{ category: "general", temperature: 0.3 },
]
// #when: Check each config
const results = configs.map((config) => shouldDeleteAgentConfig(config, config.category as string))
// #then: All should be true (all match defaults)
results.forEach((result) => {
expect(result).toBe(true)
})
})
test("returns false when additional fields are present", () => {
// #given: Config with extra fields
const config = {
category: "visual-engineering",
temperature: 0.7,
custom_field: "value", // Extra field not in defaults
}
// #when: Check if config should be deleted
const shouldDelete = shouldDeleteAgentConfig(config, "visual-engineering")
// #then: Should return false (has extra field)
expect(shouldDelete).toBe(false)
})
test("handles complex config with multiple overrides", () => {
// #given: Config with multiple custom overrides
const config = {
category: "visual-engineering",
temperature: 0.5, // Different from default
top_p: 0.8, // Different from default
prompt_append: "custom prompt", // Custom field
}
// #when: Check if config should be deleted
const shouldDelete = shouldDeleteAgentConfig(config, "visual-engineering")
// #then: Should return false (has overrides)
expect(shouldDelete).toBe(false)
})
})
describe("migrateConfigFile with backup", () => {
const cleanupPaths: string[] = []
afterEach(() => {
cleanupPaths.forEach((p) => {
try {
fs.unlinkSync(p)
} catch {
}
})
})
test("creates backup file with timestamp when migration needed", () => {
// #given: Config file path and config needing migration
const testConfigPath = "/tmp/test-config-migration.json"
const testConfigContent = globalThis.JSON.stringify({ agents: { oracle: { model: "openai/gpt-5.2" } } }, null, 2)
const rawConfig: Record<string, unknown> = {
agents: {
oracle: { model: "openai/gpt-5.2" },
},
}
fs.writeFileSync(testConfigPath, testConfigContent)
cleanupPaths.push(testConfigPath)
// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)
// #then: Backup file should be created with timestamp
expect(needsWrite).toBe(true)
const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
expect(backupFiles.length).toBeGreaterThan(0)
const backupFile = backupFiles[0]
const backupPath = path.join(dir, backupFile)
cleanupPaths.push(backupPath)
expect(backupFile).toMatch(/test-config-migration\.json\.bak\.\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}/)
const backupContent = fs.readFileSync(backupPath, "utf-8")
expect(backupContent).toBe(testConfigContent)
})
test("deletes agent config when all fields match category defaults", () => {
// #given: Config with agent matching category defaults
const testConfigPath = "/tmp/test-config-delete.json"
const rawConfig: Record<string, unknown> = {
agents: {
oracle: {
model: "openai/gpt-5.2",
temperature: 0.1,
},
},
}
fs.writeFileSync(testConfigPath, globalThis.JSON.stringify({ agents: { oracle: { model: "openai/gpt-5.2" } } }, null, 2))
cleanupPaths.push(testConfigPath)
// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)
// #then: Agent should be deleted (matches strategic category defaults)
expect(needsWrite).toBe(true)
const migratedConfig = JSON.parse(fs.readFileSync(testConfigPath, "utf-8"))
expect(migratedConfig.agents).toEqual({})
const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
backupFiles.forEach((f) => cleanupPaths.push(path.join(dir, f)))
})
test("keeps agent config with category when fields differ from defaults", () => {
// #given: Config with agent having custom temperature override
const testConfigPath = "/tmp/test-config-keep.json"
const rawConfig: Record<string, unknown> = {
agents: {
oracle: {
model: "openai/gpt-5.2",
temperature: 0.5,
},
},
}
fs.writeFileSync(testConfigPath, globalThis.JSON.stringify({ agents: { oracle: { model: "openai/gpt-5.2" } } }, null, 2))
cleanupPaths.push(testConfigPath)
// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)
// #then: Agent should be kept with category and custom override
expect(needsWrite).toBe(true)
const migratedConfig = JSON.parse(fs.readFileSync(testConfigPath, "utf-8"))
const agents = migratedConfig.agents as Record<string, unknown>
expect(agents.oracle).toBeDefined()
expect((agents.oracle as Record<string, unknown>).category).toBe("ultrabrain")
expect((agents.oracle as Record<string, unknown>).temperature).toBe(0.5)
expect((agents.oracle as Record<string, unknown>).model).toBeUndefined()
const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
backupFiles.forEach((f) => cleanupPaths.push(path.join(dir, f)))
})
test("does not write when no migration needed", () => {
// #given: Config with no migrations needed
const testConfigPath = "/tmp/test-config-no-migration.json"
const rawConfig: Record<string, unknown> = {
agents: {
Sisyphus: { model: "test" },
},
}
fs.writeFileSync(testConfigPath, globalThis.JSON.stringify({ agents: { Sisyphus: { model: "test" } } }, null, 2))
cleanupPaths.push(testConfigPath)
// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)
// #then: Should not write or create backup
expect(needsWrite).toBe(false)
const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
expect(backupFiles.length).toBe(0)
})
test("handles multiple agent migrations correctly", () => {
// #given: Config with multiple agents needing migration
const testConfigPath = "/tmp/test-config-multi-agent.json"
const rawConfig: Record<string, unknown> = {
agents: {
oracle: { model: "openai/gpt-5.2" },
librarian: { model: "anthropic/claude-sonnet-4-5" },
frontend: {
model: "google/gemini-3-pro-preview",
temperature: 0.9,
},
},
}
fs.writeFileSync(
testConfigPath,
globalThis.JSON.stringify(
{
agents: {
oracle: { model: "openai/gpt-5.2" },
librarian: { model: "anthropic/claude-sonnet-4-5" },
frontend: { model: "google/gemini-3-pro-preview" },
},
},
null,
2,
),
)
cleanupPaths.push(testConfigPath)
// #when: Migrate config file
const needsWrite = migrateConfigFile(testConfigPath, rawConfig)
// #then: Should migrate correctly
expect(needsWrite).toBe(true)
const migratedConfig = JSON.parse(fs.readFileSync(testConfigPath, "utf-8"))
const agents = migratedConfig.agents as Record<string, unknown>
expect(agents.oracle).toBeUndefined()
expect(agents.librarian).toBeUndefined()
expect(agents.frontend).toBeDefined()
expect((agents.frontend as Record<string, unknown>).category).toBe("visual-engineering")
expect((agents.frontend as Record<string, unknown>).temperature).toBe(0.9)
const dir = path.dirname(testConfigPath)
const basename = path.basename(testConfigPath)
const files = fs.readdirSync(dir)
const backupFiles = files.filter((f) => f.startsWith(`${basename}.bak.`))
backupFiles.forEach((f) => cleanupPaths.push(path.join(dir, f)))
})
})

View File

@@ -3,14 +3,16 @@ import { log } from "./logger"
// Migration map: old keys → new keys (for backward compatibility)
export const AGENT_NAME_MAP: Record<string, string> = {
// Legacy names (backward compatibility)
omo: "Sisyphus",
"OmO": "Sisyphus",
- "OmO-Plan": "Planner-Sisyphus",
- "omo-plan": "Planner-Sisyphus",
- // Current names
sisyphus: "Sisyphus",
- "planner-sisyphus": "Planner-Sisyphus",
"OmO-Plan": "Prometheus (Planner)",
"omo-plan": "Prometheus (Planner)",
"Planner-Sisyphus": "Prometheus (Planner)",
"planner-sisyphus": "Prometheus (Planner)",
prometheus: "Prometheus (Planner)",
"plan-consultant": "Metis (Plan Consultant)",
metis: "Metis (Plan Consultant)",
build: "build",
oracle: "oracle",
librarian: "librarian",
@@ -26,6 +28,15 @@ export const HOOK_NAME_MAP: Record<string, string> = {
"anthropic-auto-compact": "anthropic-context-window-limit-recovery", "anthropic-auto-compact": "anthropic-context-window-limit-recovery",
} }
// Model to category mapping for auto-migration
export const MODEL_TO_CATEGORY_MAP: Record<string, string> = {
"google/gemini-3-pro-preview": "visual-engineering",
"openai/gpt-5.2": "ultrabrain",
"anthropic/claude-haiku-4-5": "quick",
"anthropic/claude-opus-4-5": "most-capable",
"anthropic/claude-sonnet-4-5": "general",
}
export function migrateAgentNames(agents: Record<string, unknown>): { migrated: Record<string, unknown>; changed: boolean } {
const migrated: Record<string, unknown> = {}
let changed = false
@@ -56,6 +67,45 @@ export function migrateHookNames(hooks: string[]): { migrated: string[]; changed
return { migrated, changed }
}
export function migrateAgentConfigToCategory(config: Record<string, unknown>): {
migrated: Record<string, unknown>
changed: boolean
} {
const { model, ...rest } = config
if (typeof model !== "string") {
return { migrated: config, changed: false }
}
const category = MODEL_TO_CATEGORY_MAP[model]
if (!category) {
return { migrated: config, changed: false }
}
return {
migrated: { category, ...rest },
changed: true,
}
}
export function shouldDeleteAgentConfig(
config: Record<string, unknown>,
category: string
): boolean {
const { DEFAULT_CATEGORIES } = require("../tools/sisyphus-task/constants")
const defaults = DEFAULT_CATEGORIES[category]
if (!defaults) return false
const keys = Object.keys(config).filter((k) => k !== "category")
if (keys.length === 0) return true
for (const key of keys) {
if (config[key] !== (defaults as Record<string, unknown>)[key]) {
return false
}
}
return true
}
export function migrateConfigFile(configPath: string, rawConfig: Record<string, unknown>): boolean {
let needsWrite = false
@@ -67,6 +117,22 @@ export function migrateConfigFile(configPath: string, rawConfig: Record<string,
}
}
if (rawConfig.agents && typeof rawConfig.agents === "object") {
const agents = rawConfig.agents as Record<string, Record<string, unknown>>
for (const [name, config] of Object.entries(agents)) {
const { migrated, changed } = migrateAgentConfigToCategory(config)
if (changed) {
const category = migrated.category as string
if (shouldDeleteAgentConfig(migrated, category)) {
delete agents[name]
} else {
agents[name] = migrated
}
needsWrite = true
}
}
}
if (rawConfig.omo_agent) {
rawConfig.sisyphus_agent = rawConfig.omo_agent
delete rawConfig.omo_agent
@@ -83,8 +149,12 @@ export function migrateConfigFile(configPath: string, rawConfig: Record<string,
if (needsWrite) {
try {
const timestamp = new Date().toISOString().replace(/[:.]/g, "-")
const backupPath = `${configPath}.bak.${timestamp}`
fs.copyFileSync(configPath, backupPath)
fs.writeFileSync(configPath, JSON.stringify(rawConfig, null, 2) + "\n", "utf-8")
- log(`Migrated config file: ${configPath}`)
log(`Migrated config file: ${configPath} (backup: ${backupPath})`)
} catch (err) {
log(`Failed to write migrated config to ${configPath}:`, err)
}

View File

@@ -2,26 +2,33 @@
## OVERVIEW
Custom tools extending agent capabilities: LSP integration (11 tools), AST-aware code search/replace, file operations with timeouts, background task management.
## STRUCTURE
```
tools/
├── ast-grep/            # AST-aware code search/replace (25 languages)
│   ├── cli.ts           # @ast-grep/cli subprocess
│   ├── napi.ts          # @ast-grep/napi native binding (preferred)
│   ├── constants.ts, types.ts, tools.ts, utils.ts
├── background-task/     # Async agent task management
├── call-omo-agent/      # Spawn explore/librarian agents
├── glob/                # File pattern matching (timeout-safe)
├── grep/                # Content search (timeout-safe)
├── interactive-bash/    # Tmux session management
├── look-at/             # Multimodal analysis (PDF, images)
├── lsp/                 # 11 LSP tools
│   ├── client.ts        # LSP connection lifecycle
│   ├── config.ts        # Server configurations
│   ├── tools.ts         # Tool implementations
│   └── types.ts
├── session-manager/     # OpenCode session file management
│   ├── constants.ts     # Storage paths, descriptions
│   ├── types.ts         # Session data interfaces
│   ├── storage.ts       # File I/O operations
│   ├── utils.ts         # Formatting, filtering
│   └── tools.ts         # Tool implementations
├── skill/               # Skill loading and execution
├── skill-mcp/           # Skill-embedded MCP invocation
├── slashcommand/        # Slash command execution
@@ -30,39 +37,47 @@ tools/
## TOOL CATEGORIES
| Category | Tools | Purpose |
|----------|-------|---------|
| LSP | lsp_hover, lsp_goto_definition, lsp_find_references, lsp_document_symbols, lsp_workspace_symbols, lsp_diagnostics, lsp_servers, lsp_prepare_rename, lsp_rename, lsp_code_actions, lsp_code_action_resolve | IDE-like code intelligence |
| AST | ast_grep_search, ast_grep_replace | Pattern-based code search/replace |
| File Search | grep, glob | Content and file pattern matching |
| Session | session_list, session_read, session_search, session_info | OpenCode session file management |
| Background | sisyphus_task, background_output, background_cancel | Async agent orchestration |
| Multimodal | look_at | PDF/image analysis via Gemini |
| Terminal | interactive_bash | Tmux session control |
| Commands | slashcommand | Execute slash commands |
| Skills | skill, skill_mcp | Load skills, invoke skill-embedded MCPs |
| Agents | call_omo_agent | Spawn explore/librarian |
## HOW TO ADD A TOOL
1. Create directory: `src/tools/my-tool/`
2. Create files:
- `constants.ts`: `TOOL_NAME`, `TOOL_DESCRIPTION`
- `types.ts`: Parameter/result interfaces
- `tools.ts`: Tool implementation (returns OpenCode tool object)
- `index.ts`: Barrel export
- `utils.ts`: Helpers (optional)
3. Add to `builtinTools` in `src/tools/index.ts` (see the sketch below)
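As a rough illustration of steps 1-3, a skeleton might look like the following. This is a minimal sketch, not the project's actual API: the exact tool object shape, the `execute` signature, and the registration key are assumptions; check `@opencode-ai/plugin` and the existing modules under `src/tools/` for the real types.

```typescript
// src/tools/my-tool/constants.ts (sketch)
export const TOOL_NAME = "my_tool"
export const TOOL_DESCRIPTION = "One-line description surfaced to the agent."

// src/tools/my-tool/tools.ts (sketch; the field names below are assumed, not confirmed)
// import type { ToolDefinition } from "@opencode-ai/plugin"  // adjust to the real type location
import { TOOL_DESCRIPTION } from "./constants"

export function createMyTool() {
  return {
    description: TOOL_DESCRIPTION,
    // Parameter schema goes here; see an existing tool such as grep/ for the concrete shape.
    async execute(args: { query: string }, ctx: { sessionID: string }) {
      // Do the work (with a timeout, per ANTI-PATTERNS below), then return text for the agent.
      return `my_tool ran for session ${ctx.sessionID} with query "${args.query}"`
    },
  }
}

// src/tools/my-tool/index.ts (sketch)
// export { createMyTool } from "./tools"
// Then register it in builtinTools (src/tools/index.ts), e.g. my_tool: createMyTool()
```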
## LSP SPECIFICS
- **Client lifecycle**: Lazy init on first use, auto-shutdown on idle
- **Config priority**: opencode.json > oh-my-opencode.json > defaults
- **Supported servers**: typescript-language-server, pylsp, gopls, rust-analyzer, etc.
- **Custom servers**: Add via `lsp` config in oh-my-opencode.json
## AST-GREP SPECIFICS
- **Meta-variables**: `$VAR` (single node), `$$$` (multiple nodes)
- **Languages**: 25 supported (typescript, tsx, python, rust, go, etc.)
- **Binding**: Prefers @ast-grep/napi (native), falls back to @ast-grep/cli
- **Pattern must be valid AST**: `export async function $NAME($$$) { $$$ }` not fragments (example below)
## ANTI-PATTERNS (TOOLS)
- **No timeout**: Always use timeout for file operations (default 60s); see the sketch below
- **Blocking main thread**: Use async/await, never sync file ops
- **Ignoring LSP errors**: Gracefully handle server not found/crashed
- **Raw subprocess for ast-grep**: Prefer napi binding for performance
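The timeout rule can be illustrated with a small helper. `withTimeout` is a hypothetical name used only for this sketch; the repo's grep/glob tools carry their own timeout handling.

```typescript
import { readFile } from "node:fs/promises"

// Sketch: enforce the 60s default budget around any file operation.
async function withTimeout<T>(work: Promise<T>, ms = 60_000): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
  })
  try {
    return await Promise.race([work, timeout])
  } finally {
    if (timer) clearTimeout(timer)
  }
}

// Usage: async APIs only, so sync file ops never block the main thread.
const content = await withTimeout(readFile("some-large-file.log", "utf-8"))
console.log(content.length)
```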

View File

@@ -1,5 +1,4 @@
export {
- createBackgroundTask,
createBackgroundOutput,
createBackgroundCancel,
} from "./tools"

View File

@@ -74,6 +74,7 @@ export function createBackgroundTask(manager: BackgroundManager): ToolDefinition
parentSessionID: ctx.sessionID,
parentMessageID: ctx.messageID,
parentModel,
parentAgent: prevMessage?.agent,
ctx.metadata?.({
ctx.metadata?.({ ctx.metadata?.({

View File

@@ -4,4 +4,4 @@ export const CALL_OMO_AGENT_DESCRIPTION = `Spawn explore/librarian agent. run_in
Available: {agents}
- Prompts MUST be in English. Use \`background_output\` for async results.\`
Pass \`resume=session_id\` to continue previous agent with full context. Prompts MUST be in English. Use \`background_output\` for async results.\`

View File

@@ -142,7 +142,7 @@ async function executeSync(
tools: {
task: false,
call_omo_agent: false,
- background_task: false,
sisyphus_task: false,
},
parts: [{ type: "text", text: args.prompt }],
},

View File

@@ -36,7 +36,6 @@ export { getTmuxPath } from "./interactive-bash/utils"
export { createSkillMcpTool } from "./skill-mcp"
import {
- createBackgroundTask,
createBackgroundOutput,
createBackgroundCancel,
} from "./background-task"
@@ -48,10 +47,10 @@ type OpencodeClient = PluginInput["client"]
export { createCallOmoAgent } from "./call-omo-agent"
export { createLookAt } from "./look-at"
export { createSisyphusTask, type SisyphusTaskToolOptions, DEFAULT_CATEGORIES, CATEGORY_PROMPT_APPENDS } from "./sisyphus-task"
export function createBackgroundTools(manager: BackgroundManager, client: OpencodeClient): Record<string, ToolDefinition> {
return {
- background_task: createBackgroundTask(manager),
background_output: createBackgroundOutput(manager, client),
background_cancel: createBackgroundCancel(manager, client),
}

View File

@@ -0,0 +1,254 @@
import type { CategoryConfig } from "../../config/schema"
export const VISUAL_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on VISUAL/UI tasks.
Design-first mindset:
- Bold aesthetic choices over safe defaults
- Unexpected layouts, asymmetry, grid-breaking elements
- Distinctive typography (avoid: Arial, Inter, Roboto, Space Grotesk)
- Cohesive color palettes with sharp accents
- High-impact animations with staggered reveals
- Atmosphere: gradient meshes, noise textures, layered transparencies
AVOID: Generic fonts, purple gradients on white, predictable layouts, cookie-cutter patterns.
</Category_Context>`
export const STRATEGIC_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on BUSINESS LOGIC / ARCHITECTURE tasks.
Strategic advisor mindset:
- Bias toward simplicity: least complex solution that fulfills requirements
- Leverage existing code/patterns over new components
- Prioritize developer experience and maintainability
- One clear recommendation with effort estimate (Quick/Short/Medium/Large)
- Signal when advanced approach warranted
Response format:
- Bottom line (2-3 sentences)
- Action plan (numbered steps)
- Risks and mitigations (if relevant)
</Category_Context>`
export const ARTISTRY_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on HIGHLY CREATIVE / ARTISTIC tasks.
Artistic genius mindset:
- Push far beyond conventional boundaries
- Explore radical, unconventional directions
- Surprise and delight: unexpected twists, novel combinations
- Rich detail and vivid expression
- Break patterns deliberately when it serves the creative vision
Approach:
- Generate diverse, bold options first
- Embrace ambiguity and wild experimentation
- Balance novelty with coherence
- This is for tasks requiring exceptional creativity
</Category_Context>`
export const QUICK_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on SMALL / QUICK tasks.
Efficient execution mindset:
- Fast, focused, minimal overhead
- Get to the point immediately
- No over-engineering
- Simple solutions for simple problems
Approach:
- Minimal viable implementation
- Skip unnecessary abstractions
- Direct and concise
</Category_Context>
<Caller_Warning>
⚠️ THIS CATEGORY USES A LESS CAPABLE MODEL (claude-haiku-4-5).
The model executing this task has LIMITED reasoning capacity. Your prompt MUST be:
**EXHAUSTIVELY EXPLICIT** - Leave NOTHING to interpretation:
1. MUST DO: List every required action as atomic, numbered steps
2. MUST NOT DO: Explicitly forbid likely mistakes and deviations
3. EXPECTED OUTPUT: Describe exact success criteria with concrete examples
**WHY THIS MATTERS:**
- Less capable models WILL deviate without explicit guardrails
- Vague instructions → unpredictable results
- Implicit expectations → missed requirements
**PROMPT STRUCTURE (MANDATORY):**
\`\`\`
TASK: [One-sentence goal]
MUST DO:
1. [Specific action with exact details]
2. [Another specific action]
...
MUST NOT DO:
- [Forbidden action + why]
- [Another forbidden action]
...
EXPECTED OUTPUT:
- [Exact deliverable description]
- [Success criteria / verification method]
\`\`\`
If your prompt lacks this structure, REWRITE IT before delegating.
</Caller_Warning>`
export const MOST_CAPABLE_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on COMPLEX / MOST-CAPABLE tasks.
Maximum capability mindset:
- Bring full reasoning power to bear
- Consider all edge cases and implications
- Deep analysis before action
- Quality over speed
Approach:
- Thorough understanding first
- Comprehensive solution design
- Meticulous execution
- This is for the most challenging problems
</Category_Context>`
export const WRITING_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on WRITING / PROSE tasks.
Wordsmith mindset:
- Clear, flowing prose
- Appropriate tone and voice
- Engaging and readable
- Proper structure and organization
Approach:
- Understand the audience
- Draft with care
- Polish for clarity and impact
- Documentation, READMEs, articles, technical writing
</Category_Context>`
export const GENERAL_CATEGORY_PROMPT_APPEND = `<Category_Context>
You are working on GENERAL tasks.
Balanced execution mindset:
- Practical, straightforward approach
- Good enough is good enough
- Focus on getting things done
Approach:
- Standard best practices
- Reasonable trade-offs
- Efficient completion
</Category_Context>
<Caller_Warning>
⚠️ THIS CATEGORY USES A MID-TIER MODEL (claude-sonnet-4-5).
While capable, this model benefits significantly from EXPLICIT instructions.
**PROVIDE CLEAR STRUCTURE:**
1. MUST DO: Enumerate required actions explicitly - don't assume inference
2. MUST NOT DO: State forbidden actions to prevent scope creep or wrong approaches
3. EXPECTED OUTPUT: Define concrete success criteria and deliverables
**COMMON PITFALLS WITHOUT EXPLICIT INSTRUCTIONS:**
- Model may take shortcuts that miss edge cases
- Implicit requirements get overlooked
- Output format may not match expectations
- Scope may expand beyond intended boundaries
**RECOMMENDED PROMPT PATTERN:**
\`\`\`
TASK: [Clear, single-purpose goal]
CONTEXT: [Relevant background the model needs]
MUST DO:
- [Explicit requirement 1]
- [Explicit requirement 2]
MUST NOT DO:
- [Boundary/constraint 1]
- [Boundary/constraint 2]
EXPECTED OUTPUT:
- [What success looks like]
- [How to verify completion]
\`\`\`
The more explicit your prompt, the better the results.
</Caller_Warning>`
export const DEFAULT_CATEGORIES: Record<string, CategoryConfig> = {
"visual-engineering": {
model: "google/gemini-3-pro-preview",
temperature: 0.7,
},
ultrabrain: {
model: "openai/gpt-5.2",
temperature: 0.1,
},
artistry: {
model: "google/gemini-3-pro-preview",
temperature: 0.9,
},
quick: {
model: "anthropic/claude-haiku-4-5",
temperature: 0.3,
},
"most-capable": {
model: "anthropic/claude-opus-4-5",
temperature: 0.1,
},
writing: {
model: "google/gemini-3-flash-preview",
temperature: 0.5,
},
general: {
model: "anthropic/claude-sonnet-4-5",
temperature: 0.3,
},
}
export const CATEGORY_PROMPT_APPENDS: Record<string, string> = {
"visual-engineering": VISUAL_CATEGORY_PROMPT_APPEND,
ultrabrain: STRATEGIC_CATEGORY_PROMPT_APPEND,
artistry: ARTISTRY_CATEGORY_PROMPT_APPEND,
quick: QUICK_CATEGORY_PROMPT_APPEND,
"most-capable": MOST_CAPABLE_CATEGORY_PROMPT_APPEND,
writing: WRITING_CATEGORY_PROMPT_APPEND,
general: GENERAL_CATEGORY_PROMPT_APPEND,
}
export const CATEGORY_DESCRIPTIONS: Record<string, string> = {
"visual-engineering": "Frontend, UI/UX, design, styling, animation",
ultrabrain: "Strict architecture design, very complex business logic",
artistry: "Highly creative/artistic tasks, novel ideas",
quick: "Cheap & fast - small tasks with minimal overhead, budget-friendly",
"most-capable": "Complex tasks requiring maximum capability",
writing: "Documentation, prose, technical writing",
general: "General purpose tasks",
}
const BUILTIN_CATEGORIES = Object.keys(DEFAULT_CATEGORIES).join(", ")
export const SISYPHUS_TASK_DESCRIPTION = `Spawn agent task with category-based or direct agent selection.
MUTUALLY EXCLUSIVE: Provide EITHER category OR agent, not both (unless resuming).
- category: Use predefined category (${BUILTIN_CATEGORIES}) → Spawns Sisyphus-Junior with category config
- agent: Use specific agent directly (e.g., "oracle", "explore")
- background: true=async (returns task_id), false=sync (waits for result). Default: false. Use background=true ONLY for parallel exploration with 5+ independent queries.
- resume: Session ID to resume (from previous task output). Continues agent with FULL CONTEXT PRESERVED - saves tokens, maintains continuity.
- skills: Array of skill names to prepend to prompt (e.g., ["playwright", "frontend-ui-ux"]). Skills will be resolved and their content prepended with a separator. Empty array = no prepending.
**WHEN TO USE resume:**
- Task failed/incomplete → resume with "fix: [specific issue]"
- Need follow-up on previous result → resume with additional question
- Multi-turn conversation with same agent → always resume instead of new task
Prompts MUST be in English.`
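For readers skimming the diff, a minimal sketch of two argument payloads a caller might pass to the new tool, following the rules in SISYPHUS_TASK_DESCRIPTION above. Field names come from the SisyphusTaskArgs type added later in this PR; the concrete description, prompt, session ID, and values are illustrative only.

```ts
// Illustrative payloads only - values are made up; field names follow SisyphusTaskArgs.
const delegateToCategory = {
  description: "Polish the settings panel",
  prompt: "TASK: ...\nMUST DO:\n1. ...\nMUST NOT DO:\n- ...\nEXPECTED OUTPUT:\n- ...",
  category: "visual-engineering",   // EITHER category ...
  run_in_background: false,         // sync: wait for the result
  skills: ["frontend-ui-ux"],       // [] when no skills are needed
}

const resumePreviousAgent = {
  description: "Follow up on earlier exploration",
  prompt: "fix: cover the dark-mode edge case that was missed",
  resume: "ses_previous123",        // continue with full context preserved
  run_in_background: false,
  skills: [],
}
```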

View File

@@ -0,0 +1,3 @@
export { createSisyphusTask, type SisyphusTaskToolOptions } from "./tools"
export type * from "./types"
export * from "./constants"

View File

@@ -0,0 +1,430 @@
import { describe, test, expect } from "bun:test"
import { DEFAULT_CATEGORIES, CATEGORY_PROMPT_APPENDS, CATEGORY_DESCRIPTIONS, SISYPHUS_TASK_DESCRIPTION } from "./constants"
import type { CategoryConfig } from "../../config/schema"
function resolveCategoryConfig(
categoryName: string,
userCategories?: Record<string, CategoryConfig>
): { config: CategoryConfig; promptAppend: string } | null {
const defaultConfig = DEFAULT_CATEGORIES[categoryName]
const userConfig = userCategories?.[categoryName]
const defaultPromptAppend = CATEGORY_PROMPT_APPENDS[categoryName] ?? ""
if (!defaultConfig && !userConfig) {
return null
}
const config: CategoryConfig = {
...defaultConfig,
...userConfig,
model: userConfig?.model ?? defaultConfig?.model ?? "anthropic/claude-sonnet-4-5",
}
let promptAppend = defaultPromptAppend
if (userConfig?.prompt_append) {
promptAppend = defaultPromptAppend
? defaultPromptAppend + "\n\n" + userConfig.prompt_append
: userConfig.prompt_append
}
return { config, promptAppend }
}
describe("sisyphus-task", () => {
describe("DEFAULT_CATEGORIES", () => {
test("visual-engineering category has gemini model", () => {
// #given
const category = DEFAULT_CATEGORIES["visual-engineering"]
// #when / #then
expect(category).toBeDefined()
expect(category.model).toBe("google/gemini-3-pro-preview")
expect(category.temperature).toBe(0.7)
})
test("ultrabrain category has gpt model", () => {
// #given
const category = DEFAULT_CATEGORIES["ultrabrain"]
// #when / #then
expect(category).toBeDefined()
expect(category.model).toBe("openai/gpt-5.2")
expect(category.temperature).toBe(0.1)
})
})
describe("CATEGORY_PROMPT_APPENDS", () => {
test("visual-engineering category has design-focused prompt", () => {
// #given
const promptAppend = CATEGORY_PROMPT_APPENDS["visual-engineering"]
// #when / #then
expect(promptAppend).toContain("VISUAL/UI")
expect(promptAppend).toContain("Design-first")
})
test("ultrabrain category has strategic prompt", () => {
// #given
const promptAppend = CATEGORY_PROMPT_APPENDS["ultrabrain"]
// #when / #then
expect(promptAppend).toContain("BUSINESS LOGIC")
expect(promptAppend).toContain("Strategic advisor")
})
})
describe("CATEGORY_DESCRIPTIONS", () => {
test("has description for all default categories", () => {
// #given
const defaultCategoryNames = Object.keys(DEFAULT_CATEGORIES)
// #when / #then
for (const name of defaultCategoryNames) {
expect(CATEGORY_DESCRIPTIONS[name]).toBeDefined()
expect(CATEGORY_DESCRIPTIONS[name].length).toBeGreaterThan(0)
}
})
test("most-capable category exists and has description", () => {
// #given / #when
const description = CATEGORY_DESCRIPTIONS["most-capable"]
// #then
expect(description).toBeDefined()
expect(description).toContain("Complex")
})
})
describe("SISYPHUS_TASK_DESCRIPTION", () => {
test("documents background parameter as required with default false", () => {
// #given / #when / #then
expect(SISYPHUS_TASK_DESCRIPTION).toContain("background")
expect(SISYPHUS_TASK_DESCRIPTION).toContain("Default: false")
})
test("warns about parallel exploration usage", () => {
// #given / #when / #then
expect(SISYPHUS_TASK_DESCRIPTION).toContain("5+")
})
})
describe("resolveCategoryConfig", () => {
test("returns null for unknown category without user config", () => {
// #given
const categoryName = "unknown-category"
// #when
const result = resolveCategoryConfig(categoryName)
// #then
expect(result).toBeNull()
})
test("returns default config for builtin category", () => {
// #given
const categoryName = "visual-engineering"
// #when
const result = resolveCategoryConfig(categoryName)
// #then
expect(result).not.toBeNull()
expect(result!.config.model).toBe("google/gemini-3-pro-preview")
expect(result!.promptAppend).toContain("VISUAL/UI")
})
test("user config overrides default model", () => {
// #given
const categoryName = "visual-engineering"
const userCategories = {
"visual-engineering": { model: "anthropic/claude-opus-4-5" },
}
// #when
const result = resolveCategoryConfig(categoryName, userCategories)
// #then
expect(result).not.toBeNull()
expect(result!.config.model).toBe("anthropic/claude-opus-4-5")
})
test("user prompt_append is appended to default", () => {
// #given
const categoryName = "visual-engineering"
const userCategories = {
"visual-engineering": {
model: "google/gemini-3-pro-preview",
prompt_append: "Custom instructions here",
},
}
// #when
const result = resolveCategoryConfig(categoryName, userCategories)
// #then
expect(result).not.toBeNull()
expect(result!.promptAppend).toContain("VISUAL/UI")
expect(result!.promptAppend).toContain("Custom instructions here")
})
test("user can define custom category", () => {
// #given
const categoryName = "my-custom"
const userCategories = {
"my-custom": {
model: "openai/gpt-5.2",
temperature: 0.5,
prompt_append: "You are a custom agent",
},
}
// #when
const result = resolveCategoryConfig(categoryName, userCategories)
// #then
expect(result).not.toBeNull()
expect(result!.config.model).toBe("openai/gpt-5.2")
expect(result!.config.temperature).toBe(0.5)
expect(result!.promptAppend).toBe("You are a custom agent")
})
test("user category overrides temperature", () => {
// #given
const categoryName = "visual-engineering"
const userCategories = {
"visual-engineering": {
model: "google/gemini-3-pro-preview",
temperature: 0.3,
},
}
// #when
const result = resolveCategoryConfig(categoryName, userCategories)
// #then
expect(result).not.toBeNull()
expect(result!.config.temperature).toBe(0.3)
})
})
describe("skills parameter", () => {
test("SISYPHUS_TASK_DESCRIPTION documents skills parameter", () => {
// #given / #when / #then
expect(SISYPHUS_TASK_DESCRIPTION).toContain("skills")
expect(SISYPHUS_TASK_DESCRIPTION).toContain("Array of skill names")
})
test("skills parameter is required - returns error when not provided", async () => {
// #given
const { createSisyphusTask } = require("./tools")
const mockManager = { launch: async () => ({}) }
const mockClient = {
app: { agents: async () => ({ data: [] }) },
session: {
create: async () => ({ data: { id: "test-session" } }),
prompt: async () => ({ data: {} }),
messages: async () => ({ data: [] }),
},
}
const tool = createSisyphusTask({
manager: mockManager,
client: mockClient,
})
const toolContext = {
sessionID: "parent-session",
messageID: "parent-message",
agent: "Sisyphus",
abort: new AbortController().signal,
}
// #when - skills not provided (undefined)
const result = await tool.execute(
{
description: "Test task",
prompt: "Do something",
category: "ultrabrain",
run_in_background: false,
},
toolContext
)
// #then - should return error about missing skills
expect(result).toContain("skills")
expect(result).toContain("REQUIRED")
})
})
describe("resume with background parameter", () => {
test("resume with background=false should wait for result and return content", async () => {
// #given
const { createSisyphusTask } = require("./tools")
const mockTask = {
id: "task-123",
sessionID: "ses_resume_test",
description: "Resumed task",
agent: "explore",
status: "running",
}
const mockManager = {
resume: async () => mockTask,
launch: async () => mockTask,
}
const mockClient = {
session: {
prompt: async () => ({ data: {} }),
messages: async () => ({
data: [
{
info: { role: "assistant", time: { created: Date.now() } },
parts: [{ type: "text", text: "This is the resumed task result" }],
},
],
}),
},
app: {
agents: async () => ({ data: [] }),
},
}
const tool = createSisyphusTask({
manager: mockManager,
client: mockClient,
})
const toolContext = {
sessionID: "parent-session",
messageID: "parent-message",
agent: "Sisyphus",
abort: new AbortController().signal,
}
// #when
const result = await tool.execute(
{
description: "Resume test",
prompt: "Continue the task",
resume: "ses_resume_test",
run_in_background: false,
skills: [],
},
toolContext
)
// #then - should contain actual result, not just "Background task resumed"
expect(result).toContain("This is the resumed task result")
expect(result).not.toContain("Background task resumed")
})
test("resume with background=true should return immediately without waiting", async () => {
// #given
const { createSisyphusTask } = require("./tools")
const mockTask = {
id: "task-456",
sessionID: "ses_bg_resume",
description: "Background resumed task",
agent: "explore",
status: "running",
}
const mockManager = {
resume: async () => mockTask,
}
const mockClient = {
session: {
prompt: async () => ({ data: {} }),
messages: async () => ({
data: [],
}),
},
}
const tool = createSisyphusTask({
manager: mockManager,
client: mockClient,
})
const toolContext = {
sessionID: "parent-session",
messageID: "parent-message",
agent: "Sisyphus",
abort: new AbortController().signal,
}
// #when
const result = await tool.execute(
{
description: "Resume bg test",
prompt: "Continue in background",
resume: "ses_bg_resume",
run_in_background: true,
skills: [],
},
toolContext
)
// #then - should return background message
expect(result).toContain("Background task resumed")
expect(result).toContain("task-456")
})
})
describe("buildSystemContent", () => {
test("returns undefined when no skills and no category promptAppend", () => {
// #given
const { buildSystemContent } = require("./tools")
// #when
const result = buildSystemContent({ skills: undefined, categoryPromptAppend: undefined })
// #then
expect(result).toBeUndefined()
})
test("returns skill content only when skills provided without category", () => {
// #given
const { buildSystemContent } = require("./tools")
const skillContent = "You are a playwright expert"
// #when
const result = buildSystemContent({ skillContent, categoryPromptAppend: undefined })
// #then
expect(result).toBe(skillContent)
})
test("returns category promptAppend only when no skills", () => {
// #given
const { buildSystemContent } = require("./tools")
const categoryPromptAppend = "Focus on visual design"
// #when
const result = buildSystemContent({ skillContent: undefined, categoryPromptAppend })
// #then
expect(result).toBe(categoryPromptAppend)
})
test("combines skill content and category promptAppend with separator", () => {
// #given
const { buildSystemContent } = require("./tools")
const skillContent = "You are a playwright expert"
const categoryPromptAppend = "Focus on visual design"
// #when
const result = buildSystemContent({ skillContent, categoryPromptAppend })
// #then
expect(result).toContain(skillContent)
expect(result).toContain(categoryPromptAppend)
expect(result).toContain("\n\n")
})
})
})
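The tests above pin down how resolveCategoryConfig merges user configuration over the built-in categories. As a rough illustration of the shape it expects, assuming only the model/temperature/prompt_append fields exercised by those tests (where this object is sourced from is outside this diff), a user override might look like this:

```ts
import type { CategoryConfig } from "../../config/schema"

// Assumed example of user-supplied categories; mirrors the cases in the tests above.
const userCategories: Record<string, CategoryConfig> = {
  // Override a builtin: the default visual prompt append is kept, only the model is swapped
  "visual-engineering": { model: "anthropic/claude-opus-4-5" },
  // Brand-new category: its prompt_append becomes the entire append
  "my-custom": {
    model: "openai/gpt-5.2",
    temperature: 0.5,
    prompt_append: "You are a custom agent",
  },
}
```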

View File

@@ -0,0 +1,493 @@
import { tool, type PluginInput, type ToolDefinition } from "@opencode-ai/plugin"
import { existsSync, readdirSync } from "node:fs"
import { join } from "node:path"
import type { BackgroundManager } from "../../features/background-agent"
import type { SisyphusTaskArgs } from "./types"
import type { CategoryConfig, CategoriesConfig } from "../../config/schema"
import { SISYPHUS_TASK_DESCRIPTION, DEFAULT_CATEGORIES, CATEGORY_PROMPT_APPENDS } from "./constants"
import { findNearestMessageWithFields, MESSAGE_STORAGE } from "../../features/hook-message-injector"
import { resolveMultipleSkills } from "../../features/opencode-skill-loader/skill-content"
import { createBuiltinSkills } from "../../features/builtin-skills/skills"
import { getTaskToastManager } from "../../features/task-toast-manager"
import { subagentSessions } from "../../features/claude-code-session-state"
type OpencodeClient = PluginInput["client"]
const SISYPHUS_JUNIOR_AGENT = "Sisyphus-Junior"
const CATEGORY_EXAMPLES = Object.keys(DEFAULT_CATEGORIES).map(k => `'${k}'`).join(", ")
function parseModelString(model: string): { providerID: string; modelID: string } | undefined {
const parts = model.split("/")
if (parts.length >= 2) {
return { providerID: parts[0], modelID: parts.slice(1).join("/") }
}
return undefined
}
function getMessageDir(sessionID: string): string | null {
if (!existsSync(MESSAGE_STORAGE)) return null
const directPath = join(MESSAGE_STORAGE, sessionID)
if (existsSync(directPath)) return directPath
for (const dir of readdirSync(MESSAGE_STORAGE)) {
const sessionPath = join(MESSAGE_STORAGE, dir, sessionID)
if (existsSync(sessionPath)) return sessionPath
}
return null
}
function formatDuration(start: Date, end?: Date): string {
const duration = (end ?? new Date()).getTime() - start.getTime()
const seconds = Math.floor(duration / 1000)
const minutes = Math.floor(seconds / 60)
const hours = Math.floor(minutes / 60)
if (hours > 0) return `${hours}h ${minutes % 60}m ${seconds % 60}s`
if (minutes > 0) return `${minutes}m ${seconds % 60}s`
return `${seconds}s`
}
type ToolContextWithMetadata = {
sessionID: string
messageID: string
agent: string
abort: AbortSignal
metadata?: (input: { title?: string; metadata?: Record<string, unknown> }) => void
}
function resolveCategoryConfig(
categoryName: string,
userCategories?: CategoriesConfig
): { config: CategoryConfig; promptAppend: string } | null {
const defaultConfig = DEFAULT_CATEGORIES[categoryName]
const userConfig = userCategories?.[categoryName]
const defaultPromptAppend = CATEGORY_PROMPT_APPENDS[categoryName] ?? ""
if (!defaultConfig && !userConfig) {
return null
}
const config: CategoryConfig = {
...defaultConfig,
...userConfig,
model: userConfig?.model ?? defaultConfig?.model ?? "anthropic/claude-sonnet-4-5",
}
let promptAppend = defaultPromptAppend
if (userConfig?.prompt_append) {
promptAppend = defaultPromptAppend
? defaultPromptAppend + "\n\n" + userConfig.prompt_append
: userConfig.prompt_append
}
return { config, promptAppend }
}
export interface SisyphusTaskToolOptions {
manager: BackgroundManager
client: OpencodeClient
userCategories?: CategoriesConfig
}
export interface BuildSystemContentInput {
skillContent?: string
categoryPromptAppend?: string
}
export function buildSystemContent(input: BuildSystemContentInput): string | undefined {
const { skillContent, categoryPromptAppend } = input
if (!skillContent && !categoryPromptAppend) {
return undefined
}
if (skillContent && categoryPromptAppend) {
return `${skillContent}\n\n${categoryPromptAppend}`
}
return skillContent || categoryPromptAppend
}
export function createSisyphusTask(options: SisyphusTaskToolOptions): ToolDefinition {
const { manager, client, userCategories } = options
return tool({
description: SISYPHUS_TASK_DESCRIPTION,
args: {
description: tool.schema.string().describe("Short task description"),
prompt: tool.schema.string().describe("Full detailed prompt for the agent"),
category: tool.schema.string().optional().describe(`Category name (e.g., ${CATEGORY_EXAMPLES}). Mutually exclusive with subagent_type.`),
subagent_type: tool.schema.string().optional().describe("Agent name directly (e.g., 'oracle', 'explore'). Mutually exclusive with category."),
run_in_background: tool.schema.boolean().describe("Run in background. MUST be explicitly set. Use false for task delegation, true only for parallel exploration."),
resume: tool.schema.string().optional().describe("Session ID to resume - continues previous agent session with full context"),
skills: tool.schema.array(tool.schema.string()).describe("Array of skill names to prepend to the prompt. Use [] if no skills needed."),
},
async execute(args: SisyphusTaskArgs, toolContext) {
const ctx = toolContext as ToolContextWithMetadata
if (args.run_in_background === undefined) {
return `❌ Invalid arguments: 'run_in_background' parameter is REQUIRED. Use run_in_background=false for task delegation, run_in_background=true only for parallel exploration.`
}
if (args.skills === undefined) {
return `❌ Invalid arguments: 'skills' parameter is REQUIRED. Use skills=[] if no skills needed.`
}
const runInBackground = args.run_in_background === true
let skillContent: string | undefined
if (args.skills.length > 0) {
const { resolved, notFound } = resolveMultipleSkills(args.skills)
if (notFound.length > 0) {
const available = createBuiltinSkills().map(s => s.name).join(", ")
return `❌ Skills not found: ${notFound.join(", ")}. Available: ${available}`
}
skillContent = Array.from(resolved.values()).join("\n\n")
}
const messageDir = getMessageDir(ctx.sessionID)
const prevMessage = messageDir ? findNearestMessageWithFields(messageDir) : null
const parentAgent = ctx.agent ?? prevMessage?.agent
const parentModel = prevMessage?.model?.providerID && prevMessage?.model?.modelID
? { providerID: prevMessage.model.providerID, modelID: prevMessage.model.modelID }
: undefined
if (args.resume) {
if (runInBackground) {
try {
const task = await manager.resume({
sessionId: args.resume,
prompt: args.prompt,
parentSessionID: ctx.sessionID,
parentMessageID: ctx.messageID,
parentModel,
parentAgent,
})
ctx.metadata?.({
title: `Resume: ${task.description}`,
metadata: { sessionId: task.sessionID },
})
return `Background task resumed.
Task ID: ${task.id}
Session ID: ${task.sessionID}
Description: ${task.description}
Agent: ${task.agent}
Status: ${task.status}
Agent continues with full previous context preserved.
Use \`background_output\` with task_id="${task.id}" to check progress.`
} catch (error) {
const message = error instanceof Error ? error.message : String(error)
return `❌ Failed to resume task: ${message}`
}
}
const toastManager = getTaskToastManager()
const taskId = `resume_sync_${args.resume.slice(0, 8)}`
const startTime = new Date()
if (toastManager) {
toastManager.addTask({
id: taskId,
description: args.description,
agent: "resume",
isBackground: false,
})
}
ctx.metadata?.({
title: `Resume: ${args.description}`,
metadata: { sessionId: args.resume, sync: true },
})
try {
await client.session.prompt({
path: { id: args.resume },
body: {
tools: {
task: false,
sisyphus_task: false,
},
parts: [{ type: "text", text: args.prompt }],
},
})
} catch (promptError) {
if (toastManager) {
toastManager.removeTask(taskId)
}
const errorMessage = promptError instanceof Error ? promptError.message : String(promptError)
return `❌ Failed to send resume prompt: ${errorMessage}\n\nSession ID: ${args.resume}`
}
const messagesResult = await client.session.messages({
path: { id: args.resume },
})
if (messagesResult.error) {
if (toastManager) {
toastManager.removeTask(taskId)
}
return `❌ Error fetching result: ${messagesResult.error}\n\nSession ID: ${args.resume}`
}
const messages = ((messagesResult as { data?: unknown }).data ?? messagesResult) as Array<{
info?: { role?: string; time?: { created?: number } }
parts?: Array<{ type?: string; text?: string }>
}>
const assistantMessages = messages
.filter((m) => m.info?.role === "assistant")
.sort((a, b) => (b.info?.time?.created ?? 0) - (a.info?.time?.created ?? 0))
const lastMessage = assistantMessages[0]
if (toastManager) {
toastManager.removeTask(taskId)
}
if (!lastMessage) {
return `❌ No assistant response found.\n\nSession ID: ${args.resume}`
}
const textParts = lastMessage?.parts?.filter((p) => p.type === "text") ?? []
const textContent = textParts.map((p) => p.text ?? "").filter(Boolean).join("\n")
const duration = formatDuration(startTime)
return `Task resumed and completed in ${duration}.
Session ID: ${args.resume}
---
${textContent || "(No text output)"}`
}
if (args.category && args.subagent_type) {
return `❌ Invalid arguments: Provide EITHER category OR subagent_type, not both.`
}
if (!args.category && !args.subagent_type) {
return `❌ Invalid arguments: Must provide either category or subagent_type.`
}
let agentToUse: string
let categoryModel: { providerID: string; modelID: string } | undefined
let categoryPromptAppend: string | undefined
if (args.category) {
const resolved = resolveCategoryConfig(args.category, userCategories)
if (!resolved) {
return `❌ Unknown category: "${args.category}". Available: ${Object.keys({ ...DEFAULT_CATEGORIES, ...userCategories }).join(", ")}`
}
agentToUse = SISYPHUS_JUNIOR_AGENT
categoryModel = parseModelString(resolved.config.model)
categoryPromptAppend = resolved.promptAppend || undefined
} else {
agentToUse = args.subagent_type!.trim()
if (!agentToUse) {
return `❌ Agent name cannot be empty.`
}
// Validate agent exists and is callable (not a primary agent)
try {
const agentsResult = await client.app.agents()
type AgentInfo = { name: string; mode?: "subagent" | "primary" | "all" }
const agents = (agentsResult as { data?: AgentInfo[] }).data ?? agentsResult as unknown as AgentInfo[]
const callableAgents = agents.filter((a) => a.mode !== "primary")
const callableNames = callableAgents.map((a) => a.name)
if (!callableNames.includes(agentToUse)) {
const isPrimaryAgent = agents.some((a) => a.name === agentToUse && a.mode === "primary")
if (isPrimaryAgent) {
return `❌ Cannot call primary agent "${agentToUse}" via sisyphus_task. Primary agents are top-level orchestrators.`
}
const availableAgents = callableNames
.sort()
.join(", ")
return `❌ Unknown agent: "${agentToUse}". Available agents: ${availableAgents}`
}
} catch {
// If we can't fetch agents, proceed anyway - the session.prompt will fail with a clearer error
}
}
const systemContent = buildSystemContent({ skillContent, categoryPromptAppend })
if (runInBackground) {
try {
const task = await manager.launch({
description: args.description,
prompt: args.prompt,
agent: agentToUse,
parentSessionID: ctx.sessionID,
parentMessageID: ctx.messageID,
parentModel,
parentAgent,
model: categoryModel,
skills: args.skills,
skillContent: systemContent,
})
ctx.metadata?.({
title: args.description,
metadata: { sessionId: task.sessionID, category: args.category },
})
return `Background task launched.
Task ID: ${task.id}
Session ID: ${task.sessionID}
Description: ${task.description}
Agent: ${task.agent}${args.category ? ` (category: ${args.category})` : ""}
Status: ${task.status}
System notifies on completion. Use \`background_output\` with task_id="${task.id}" to check.`
} catch (error) {
const message = error instanceof Error ? error.message : String(error)
return `❌ Failed to launch task: ${message}`
}
}
const toastManager = getTaskToastManager()
let taskId: string | undefined
let syncSessionID: string | undefined
try {
const createResult = await client.session.create({
body: {
parentID: ctx.sessionID,
title: `Task: ${args.description}`,
},
})
if (createResult.error) {
return `❌ Failed to create session: ${createResult.error}`
}
const sessionID = createResult.data.id
syncSessionID = sessionID
subagentSessions.add(sessionID)
taskId = `sync_${sessionID.slice(0, 8)}`
const startTime = new Date()
if (toastManager) {
toastManager.addTask({
id: taskId,
description: args.description,
agent: agentToUse,
isBackground: false,
skills: args.skills,
})
}
ctx.metadata?.({
title: args.description,
metadata: { sessionId: sessionID, category: args.category, sync: true },
})
// Use promptAsync to avoid changing main session's active state
let promptError: Error | undefined
await client.session.promptAsync({
path: { id: sessionID },
body: {
agent: agentToUse,
model: categoryModel,
system: systemContent,
tools: {
task: false,
sisyphus_task: false,
},
parts: [{ type: "text", text: args.prompt }],
},
}).catch((error) => {
promptError = error instanceof Error ? error : new Error(String(error))
})
if (promptError) {
if (toastManager && taskId !== undefined) {
toastManager.removeTask(taskId)
}
const errorMessage = promptError.message
if (errorMessage.includes("agent.name") || errorMessage.includes("undefined")) {
return `❌ Agent "${agentToUse}" not found. Make sure the agent is registered in your opencode.json or provided by a plugin.\n\nSession ID: ${sessionID}`
}
return `❌ Failed to send prompt: ${errorMessage}\n\nSession ID: ${sessionID}`
}
// Poll for session completion
const POLL_INTERVAL_MS = 500
const MAX_POLL_TIME_MS = 10 * 60 * 1000
const pollStart = Date.now()
while (Date.now() - pollStart < MAX_POLL_TIME_MS) {
await new Promise(resolve => setTimeout(resolve, POLL_INTERVAL_MS))
const statusResult = await client.session.status()
const allStatuses = (statusResult.data ?? {}) as Record<string, { type: string }>
const sessionStatus = allStatuses[sessionID]
// Break if session is idle OR no longer in status (completed and removed)
if (!sessionStatus || sessionStatus.type === "idle") {
break
}
}
const messagesResult = await client.session.messages({
path: { id: sessionID },
})
if (messagesResult.error) {
return `❌ Error fetching result: ${messagesResult.error}\n\nSession ID: ${sessionID}`
}
const messages = ((messagesResult as { data?: unknown }).data ?? messagesResult) as Array<{
info?: { role?: string; time?: { created?: number } }
parts?: Array<{ type?: string; text?: string }>
}>
const assistantMessages = messages
.filter((m) => m.info?.role === "assistant")
.sort((a, b) => (b.info?.time?.created ?? 0) - (a.info?.time?.created ?? 0))
const lastMessage = assistantMessages[0]
if (!lastMessage) {
return `❌ No assistant response found.\n\nSession ID: ${sessionID}`
}
const textParts = lastMessage?.parts?.filter((p) => p.type === "text") ?? []
const textContent = textParts.map((p) => p.text ?? "").filter(Boolean).join("\n")
const duration = formatDuration(startTime)
if (toastManager) {
toastManager.removeTask(taskId)
}
subagentSessions.delete(sessionID)
return `Task completed in ${duration}.
Agent: ${agentToUse}${args.category ? ` (category: ${args.category})` : ""}
Session ID: ${sessionID}
---
${textContent || "(No text output)"}`
} catch (error) {
if (toastManager && taskId !== undefined) {
toastManager.removeTask(taskId)
}
if (syncSessionID) {
subagentSessions.delete(syncSessionID)
}
const message = error instanceof Error ? error.message : String(error)
return `❌ Task failed: ${message}`
}
},
})
}
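Based on the index.ts hunk earlier in this PR (which now re-exports createSisyphusTask and its options type), registering the new tool presumably looks something like the sketch below; the createSisyphusTools wrapper name and exact call site are assumptions, not code from this diff.

```ts
import type { ToolDefinition } from "@opencode-ai/plugin"
import { createSisyphusTask, type SisyphusTaskToolOptions } from "./sisyphus-task"

// Hypothetical registration helper: sisyphus_task takes over from the removed background_task.
export function createSisyphusTools(options: SisyphusTaskToolOptions): Record<string, ToolDefinition> {
  return {
    sisyphus_task: createSisyphusTask(options),
  }
}
```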

View File

@@ -0,0 +1,9 @@
export interface SisyphusTaskArgs {
description: string
prompt: string
category?: string
subagent_type?: string
run_in_background: boolean
resume?: string
skills: string[]
}