Commit Graph

8 Commits

Author SHA1 Message Date
Sisyphus
7cb3f23c2b feat: make preemptive compaction enabled by default (#372) 2025-12-31 12:55:39 +09:00
Sisyphus
058e6adf96 revert(truncation-compaction): rollback to experimental opt-in config (#348) 2025-12-30 20:42:06 +09:00
YeonGyu-Kim
83c1b8d5a4 Preserve agent context in preemptive compaction's continue message
The 'Continue' message sent after compaction now includes the original
agent parameter from the stored message. Previously, the Continue message
was sent without the agent parameter, causing OpenCode to use the default
'build' agent instead of preserving the original agent context
(e.g., Sisyphus).

Implementation:
- Get messageDir using getMessageDir(sessionID)
- Retrieve storedMessage using findNearestMessageWithFields
- Pass agent: storedMessage?.agent to promptAsync body

🤖 Generated with assistance of [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
2025-12-23 15:17:51 +09:00
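A minimal sketch of the agent-preservation step described in 83c1b8d5a4. The names getMessageDir, findNearestMessageWithFields, and promptAsync come from the commit message, but their signatures and the surrounding types are assumptions for illustration, not the plugin's actual API.

```typescript
// Sketch only: the helper shapes below are assumed, not oh-my-opencode's real API.
interface StoredMessage {
  agent?: string;      // agent that produced the original message, e.g. "Sisyphus"
  providerID?: string;
  modelID?: string;
}

// Helpers named in the commit message; signatures are illustrative.
declare function getMessageDir(sessionID: string): string;
declare function findNearestMessageWithFields(
  dir: string,
  fields: (keyof StoredMessage)[],
): Promise<StoredMessage | undefined>;
declare function promptAsync(body: {
  sessionID: string;
  text: string;
  agent?: string;
}): Promise<void>;

async function sendContinueAfterCompaction(sessionID: string): Promise<void> {
  const messageDir = getMessageDir(sessionID);
  const storedMessage = await findNearestMessageWithFields(messageDir, ["agent"]);

  // Pass the original agent so OpenCode does not fall back to the default "build" agent.
  await promptAsync({ sessionID, text: "Continue", agent: storedMessage?.agent });
}
```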
YeonGyu-Kim
0a90f5781a Add fallback to use stored message model info when session.idle event lacks providerID/modelID
Adds a getMessageDir() helper function and fallback logic in the session.idle
event handler to retrieve stored model information (providerID/modelID) when
the API response lacks these fields. This mirrors the approach used in the
todo-continuation-enforcer hook and ensures preemptive compaction can proceed
even when model info is missing from the initial response.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
2025-12-23 02:33:31 +09:00
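A sketch of the fallback described in 0a90f5781a: when the session.idle event payload lacks providerID/modelID, read them from the nearest stored message instead. Event and helper shapes are assumptions.

```typescript
// Sketch only: event and helper shapes are assumed for illustration.
interface ModelInfo {
  providerID?: string;
  modelID?: string;
}

declare function getMessageDir(sessionID: string): string;
declare function findNearestMessageWithFields(
  dir: string,
  fields: (keyof ModelInfo)[],
): Promise<ModelInfo | undefined>;

// Resolve model info for a session.idle event, falling back to stored messages
// when the event payload does not carry providerID/modelID.
async function resolveModelInfo(
  sessionID: string,
  event: ModelInfo,
): Promise<ModelInfo | undefined> {
  if (event.providerID && event.modelID) return event;

  const stored = await findNearestMessageWithFields(getMessageDir(sessionID), [
    "providerID",
    "modelID",
  ]);
  return stored; // may still be undefined, in which case compaction is skipped
}
```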
YeonGyu-Kim
fea9477302 feat(preemptive-compaction): auto-continue after compaction (#166)
Send 'Continue' prompt automatically after preemptive compaction
completes successfully, matching anthropic-auto-compact behavior.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
2025-12-22 11:16:13 +09:00
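The auto-continue flow in fea9477302, sketched under assumed names: summarizeSession and promptAsync are stand-ins for whatever the plugin actually calls; the control flow is the point.

```typescript
// Sketch only: function names are placeholders, not the plugin's real API.
declare function summarizeSession(sessionID: string): Promise<boolean>;
declare function promptAsync(body: { sessionID: string; text: string }): Promise<void>;

async function compactAndContinue(sessionID: string): Promise<void> {
  const ok = await summarizeSession(sessionID);
  if (!ok) return; // only continue when compaction actually succeeded

  // Mirror anthropic-auto-compact behavior: immediately resume the interrupted work.
  await promptAsync({ sessionID, text: "Continue" });
}
```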
YeonGyu-Kim
a9459c04bf Improve preemptive compaction with Claude model filtering and configurable context limits
- Limit preemptive compaction to Claude models only (opus, sonnet, haiku pattern)
- Add support for detecting `anthropic-beta: context-1m-*` header to use 1M context limit for Sonnet models
- Add `getModelLimit` callback to read model limits from OpenCode config (`provider.*.models.*.limit.context`)
- Remove hardcoded MODEL_CONTEXT_LIMITS and replace with pattern-based model detection
- Cache model context limits from config at startup for performance

This enables flexible per-model context limit configuration without hardcoding limits in the plugin.

Generated with assistance of OhMyOpenCode
2025-12-21 17:03:30 +09:00
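A sketch of the model filtering and limit resolution described in a9459c04bf. The pattern, the `anthropic-beta: context-1m-*` check, and the getModelLimit callback (backed by `provider.*.models.*.limit.context`) come from the commit message; the numeric defaults and the function shapes are assumptions.

```typescript
// Sketch only: option shapes and fallback values are assumed.
const CLAUDE_PATTERN = /opus|sonnet|haiku/i;
const DEFAULT_CLAUDE_LIMIT = 200_000;   // assumed fallback; not stated in the commit
const SONNET_1M_LIMIT = 1_000_000;

// Reads provider.*.models.*.limit.context from the OpenCode config.
type GetModelLimit = (providerID: string, modelID: string) => number | undefined;

function isClaudeModel(modelID: string): boolean {
  return CLAUDE_PATTERN.test(modelID);
}

function resolveContextLimit(
  providerID: string,
  modelID: string,
  headers: Record<string, string>,
  getModelLimit: GetModelLimit,
): number | undefined {
  if (!isClaudeModel(modelID)) return undefined; // compaction is limited to Claude models

  // An explicit per-model limit from the OpenCode config wins.
  const configured = getModelLimit(providerID, modelID);
  if (configured) return configured;

  // `anthropic-beta: context-1m-*` bumps Sonnet models to a 1M-token window.
  const beta = headers["anthropic-beta"] ?? "";
  if (/sonnet/i.test(modelID) && beta.includes("context-1m")) return SONNET_1M_LIMIT;

  return DEFAULT_CLAUDE_LIMIT;
}
```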
YeonGyu-Kim
a3ff28b250 feat(preemptive-compaction): add onBeforeSummarize callback and context injection
- Added BeforeSummarizeCallback type to allow injecting context before session summarization
- Added onBeforeSummarize option to PreemptiveCompactionOptions
- Created compaction-context-injector module that injects summarization instructions with sections:
  - User Requests (As-Is)
  - Final Goal
  - Work Completed
  - Remaining Tasks
  - MUST NOT Do (Critical Constraints)
- Wired up callback invocation in preemptive-compaction before calling summarize API
- Exported new hook from src/hooks/index.ts

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
2025-12-20 15:39:54 +09:00
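A sketch of the onBeforeSummarize wiring from a3ff28b250. BeforeSummarizeCallback, PreemptiveCompactionOptions, and the section headings are taken from the commit message; whether the callback returns the injected text or writes it into the session itself is not specified, so this sketch assumes it returns instructions.

```typescript
// Sketch only: field shapes and exact prompt wording are assumptions.
type BeforeSummarizeCallback = (input: { sessionID: string }) => Promise<string | undefined>;

interface PreemptiveCompactionOptions {
  onBeforeSummarize?: BeforeSummarizeCallback;
}

// Mirrors the compaction-context-injector: summarization instructions organized
// into the sections listed in the commit message.
function buildSummarizeInstructions(): string {
  return [
    "## User Requests (As-Is)",
    "## Final Goal",
    "## Work Completed",
    "## Remaining Tasks",
    "## MUST NOT Do (Critical Constraints)",
  ].join("\n\n");
}

declare function summarizeSession(sessionID: string, instructions?: string): Promise<void>;

async function summarizeWithContext(
  sessionID: string,
  options: PreemptiveCompactionOptions,
): Promise<void> {
  // Invoke the callback before calling the summarize API, as wired up in the commit.
  const injected = await options.onBeforeSummarize?.({ sessionID });
  await summarizeSession(sessionID, injected ?? buildSummarizeInstructions());
}
```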
YeonGyu-Kim
3c039cba49 feat(preemptive-compaction): implement automatic session compaction at token threshold
Monitor token usage after assistant responses and automatically trigger
session compaction when usage exceeds the configured threshold (default 80%).
Toast notifications provide user feedback on compaction status.

Controlled via the experimental.preemptive_compaction config option.

🤖 Generated with assistance of OhMyOpenCode (https://github.com/code-yeongyu/oh-my-opencode)
2025-12-20 13:31:30 +09:00
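A sketch of the threshold check introduced in 3c039cba49. The 80% default, the toast feedback, and the experimental.preemptive_compaction gate come from the commit message; the token-usage and toast shapes are assumptions.

```typescript
// Sketch only: usage, config, and toast shapes are assumed.
interface TokenUsage {
  used: number;
  contextLimit: number;
}

interface CompactionConfig {
  experimental?: { preemptive_compaction?: boolean };
  threshold?: number; // fraction of the context window, default 0.8
}

declare function summarizeSession(sessionID: string): Promise<void>;
declare function showToast(message: string): void;

async function maybeCompact(
  sessionID: string,
  usage: TokenUsage,
  config: CompactionConfig,
): Promise<void> {
  if (!config.experimental?.preemptive_compaction) return;

  const threshold = config.threshold ?? 0.8;
  if (usage.used / usage.contextLimit < threshold) return;

  showToast("Context nearing limit, compacting session...");
  await summarizeSession(sessionID);
  showToast("Session compacted");
}
```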