
feat(crew): add opt-in context_strategy='summarized' to reduce inter-task token usage#4980

Open
jasonmatthewsuhari wants to merge 1 commit into crewAIInc:main from jasonmatthewsuhari:feat/context-strategy-summarized

Conversation

@jasonmatthewsuhari

Summary

Fixes #4661.

In multi-task crews, every task's prompt grows unboundedly because all prior task outputs are concatenated verbatim as context. This causes ContextLengthExceeded errors and degrades LLM quality in longer crews.

This PR adds an opt-in context_strategy field that compresses prior outputs to 2-3 sentence summaries before injecting them as context.

Changes

  • TaskOutput — adds context_summary: str | None field to store the condensed version of each output
  • formatter.py — adds aggregate_summarized_outputs_from_task_outputs(), which uses context_summary when available and falls back to the raw output otherwise
  • Crew — adds context_strategy: Literal["full", "summarized"] field (default "full", zero breaking changes); updates _get_context() to route to the correct aggregator; adds _generate_context_summary() called from _process_task_result() after each task completes
  • Task — adds context_strategy: Literal["full", "summarized"] | None for per-task override of the crew-level setting

Usage

```python
# Crew-level (all tasks use summaries as context)
crew = Crew(
    agents=[...],
    tasks=[...],
    context_strategy="summarized",
)

# Per-task override (this task gets full context even if the crew is summarized)
important_task = Task(
    description="...",
    expected_output="...",
    agent=agent,
    context_strategy="full",
)
```

Behaviour

  1. After each task completes, _process_task_result calls _generate_context_summary if context_strategy="summarized" is active
  2. A single LLM call (using the task's own agent LLM) condenses the raw output to 2-3 sentences stored in TaskOutput.context_summary
  3. On any LLM error the summary is skipped silently; aggregate_summarized_outputs_from_task_outputs falls back to the raw output for any task with no summary
  4. context_strategy="full" (default) is completely unchanged

Test plan

  • aggregate_summarized_outputs_from_task_outputs uses context_summary when present, falls back to raw
  • TaskOutput.context_summary defaults to None
  • Crew.context_strategy defaults to "full"
  • Task.context_strategy defaults to None (inherit from crew)
  • _generate_context_summary sets context_summary on success
  • _generate_context_summary ignores empty/whitespace LLM responses
  • _generate_context_summary swallows LLM errors without raising
  • _generate_context_summary is a no-op when task.agent is None
  • _process_task_result calls summary generation only when strategy is "summarized"
  • Per-task context_strategy overrides crew-level setting
  • _get_context routes to correct aggregator based on effective strategy

…task token usage

In multi-task crews, each task's LLM prompt grows unboundedly because all
prior task outputs are concatenated verbatim as context. This causes
ContextLengthExceeded errors and degrades quality in longer crews.

Add context_strategy field to Crew and Task:
- Crew(context_strategy="summarized") — crew-level opt-in (default "full")
- Task(context_strategy="summarized") — per-task override

When "summarized", a small LLM call after each task completion condenses
the raw output to 2-3 sentences stored in TaskOutput.context_summary.
Subsequent tasks receive these summaries instead of full raw outputs.
Falls back to raw output silently on any LLM error. Default behavior is
unchanged (context_strategy="full").

Closes crewAIInc#4661


Development

Successfully merging this pull request may close these issues.

[FEATURE] Add opt-in context_strategy="summarized" to reduce inter-task context pollution
