feat(crew): add opt-in context_strategy='summarized' to reduce inter-task token usage #4980
Open
jasonmatthewsuhari wants to merge 1 commit into crewAIInc:main from
Conversation
…task token usage

In multi-task crews, each task's LLM prompt grows unboundedly because all prior task outputs are concatenated verbatim as context. This causes ContextLengthExceeded errors and degrades quality in longer crews.

Add context_strategy field to Crew and Task:

- Crew(context_strategy="summarized") — crew-level opt-in (default "full")
- Task(context_strategy="summarized") — per-task override

When "summarized", a small LLM call after each task completion condenses the raw output to 2-3 sentences stored in TaskOutput.context_summary. Subsequent tasks receive these summaries instead of full raw outputs. Falls back to raw output silently on any LLM error.

Default behavior is unchanged (context_strategy="full").

Closes crewAIInc#4661
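The summarize-then-fall-back behaviour described in the commit message can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: `summarize_output` and the `llm_call` parameter are hypothetical names, and the prompt wording is invented.

```python
# Sketch of the "summarized" strategy's fallback behaviour: a small LLM
# call condenses a task's raw output, and any error or empty response
# silently yields None so the caller falls back to the raw output.
# `summarize_output` and `llm_call` are illustrative names, not crewAI API.
from typing import Callable, Optional

def summarize_output(raw: str, llm_call: Callable[[str], str]) -> Optional[str]:
    """Return a 2-3 sentence summary of `raw`, or None if the call fails."""
    prompt = f"Condense the following task output into 2-3 sentences:\n\n{raw}"
    try:
        summary = llm_call(prompt)
    except Exception:
        return None  # swallow LLM errors: caller uses the raw output instead
    if not summary or not summary.strip():
        return None  # ignore empty/whitespace-only responses
    return summary.strip()
```

A caller would store the non-None result on the task output and leave the raw text untouched, so downstream consumers can always fall back to it.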
Summary
Fixes #4661.
In multi-task crews, every task's prompt grows unboundedly because all prior task outputs are concatenated verbatim as context. This causes `ContextLengthExceeded` errors and degrades LLM quality in longer crews.

This PR adds an opt-in `context_strategy` field that compresses prior outputs to 2-3 sentence summaries before injecting them as context.

Changes
- `TaskOutput` — adds `context_summary: str | None` field to store the condensed version of each output
- `formatter.py` — adds `aggregate_summarized_outputs_from_task_outputs()` that uses `context_summary` when available, falls back to `raw`
- `Crew` — adds `context_strategy: Literal["full", "summarized"]` field (default `"full"`, zero breaking changes); updates `_get_context()` to route to the correct aggregator; adds `_generate_context_summary()`, called from `_process_task_result()` after each task completes
- `Task` — adds `context_strategy: Literal["full", "summarized"] | None` for per-task override of the crew-level setting

Usage
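The usage example appears to have been stripped during extraction; a minimal sketch of the opt-in API described above, assuming the field names this PR introduces (not runnable without this branch):

```python
from crewai import Crew, Task  # requires this PR's branch

crew = Crew(
    agents=[...],
    tasks=[...],
    context_strategy="summarized",  # crew-level opt-in; default is "full"
)

# Per-task override: this task still receives full raw outputs as context.
task = Task(
    description="...",
    expected_output="...",
    context_strategy="full",
)
```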
Behaviour
- `_process_task_result` calls `_generate_context_summary` if `context_strategy="summarized"` is active; the result is stored in `TaskOutput.context_summary`
- `aggregate_summarized_outputs_from_task_outputs` falls back to `raw` for outputs with no summary
- `context_strategy="full"` (default) is completely unchanged

Test plan

- `aggregate_summarized_outputs_from_task_outputs` uses `context_summary` when present, falls back to `raw`
- `TaskOutput.context_summary` defaults to `None`
- `Crew.context_strategy` defaults to `"full"`
- `Task.context_strategy` defaults to `None` (inherit from crew)
- `_generate_context_summary` sets `context_summary` on success
- `_generate_context_summary` ignores empty/whitespace LLM responses
- `_generate_context_summary` swallows LLM errors without raising
- `_generate_context_summary` is a no-op when `task.agent` is `None`
- `_process_task_result` calls summary generation only when strategy is `"summarized"`
- Task-level `context_strategy` overrides crew-level setting
- `_get_context` routes to correct aggregator based on effective strategy
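The aggregation fallback covered by the first test above can be sketched as a standalone snippet. `SimpleTaskOutput` and `aggregate` are stand-in names for illustration, not crewAI's actual classes; the joining separator is an assumption.

```python
# Sketch of the aggregator's per-output fallback: prefer the condensed
# context_summary when present, otherwise use the full raw output.
# Mirrors the described behaviour of
# aggregate_summarized_outputs_from_task_outputs; names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimpleTaskOutput:
    raw: str
    context_summary: Optional[str] = None

def aggregate(outputs: list[SimpleTaskOutput]) -> str:
    # Each prior task contributes its summary if one was generated,
    # falling back to its raw output otherwise.
    return "\n\n".join(o.context_summary or o.raw for o in outputs)
```

Because the fallback is per-output, a crew where only some tasks produced summaries still gets complete context: summarized where possible, raw everywhere else.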