feat(mcp-chat): allow debugging LLM calls #7902
Merged
Lucifergene merged 9 commits into backstage:main on Mar 29, 2026
Conversation
Missing Changesets: The following package(s) are changed by this PR but do not have a changeset. See CONTRIBUTING.md for more information about how to add changesets.
Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>
Add payload truncation (4KB limit) to prevent log explosion, fix the logger type from `any` to `LoggerService` with optional chaining for graceful degradation, and extend debug logging to the Gemini and Ollama providers, which bypass the base `makeRequest()` by using their SDKs directly.

- Add `truncateForLogging` helper to the base `LLMProvider` class
- Apply truncation to all debug log sites across providers
- Change logger type to optional `LoggerService` in `types.ts` and `base-provider.ts`
- Add debug logging to the Gemini provider (request/response/error/truncation)
- Add debug logging to the Ollama provider (request/response/error)
- Include error details in OpenAI Responses provider error logs
- Add security warning to all three mirrored READMEs
- Standardize test mocks to use `mockServices.logger.mock()`
- Add `base-provider.test.ts` with truncation and logging tests
- Regenerate API report to reflect type changes

Signed-off-by: Avik Kundu <47265560+Lucifergene@users.noreply.github.com>
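A minimal sketch of what such a truncation helper might look like. The name `truncateForLogging` and the 4KB cap come from the commit message above; the exact signature and the truncation-marker text are assumptions, not the plugin's actual implementation:

```typescript
// Cap logged payloads at 4KB, per the limit described in the commit above.
const MAX_LOG_PAYLOAD_BYTES = 4 * 1024;

// Hypothetical helper: serializes non-string payloads, passes short text
// through unchanged, and truncates long text with a marker noting how
// many characters were dropped.
function truncateForLogging(
  payload: unknown,
  maxBytes: number = MAX_LOG_PAYLOAD_BYTES,
): string {
  const text = typeof payload === 'string' ? payload : JSON.stringify(payload);
  if (text.length <= maxBytes) {
    return text;
  }
  return `${text.slice(0, maxBytes)}... [truncated ${
    text.length - maxBytes
  } chars]`;
}
```

Keeping the helper on the base provider class means every provider's debug log sites share the same bound, rather than each one re-implementing its own limit.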
Force-pushed from 2e11622 to d8aa03d.
Lucifergene approved these changes on Mar 29, 2026.
Lucifergene (Contributor) left a comment:
Hey @michael-todorovic,
Solid feature -- debug logging for LLM calls is a real gap in observability for this plugin, and this addresses it well. The architecture of injecting the logger through the factory pattern and gating on debug level via Backstage's standard backend.logger.overrides config is clean.
During review I noticed a few things I went ahead and fixed in a follow-up commit (d8aa03de1):
- Security: Raw payloads were logged without bounds -- added a `truncateForLogging` helper that caps at 4KB to prevent log explosion and reduce sensitive data exposure. Added a security warning to the READMEs as well.
- Type safety: Changed `logger: any` to `logger?: LoggerService` with optional chaining, so logging degrades gracefully when no logger is injected.
- Gemini & Ollama coverage: These providers use their SDKs directly and bypass `makeRequest()`, so the debug logging wasn't reaching them. Added equivalent request/response/error logging in their `sendMessage()` methods.
- Test coverage: Standardized all test mocks to `mockServices.logger.mock()`, and added `base-provider.test.ts` with unit tests for the truncation logic and logging integration (429 tests passing).
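The optional-logger pattern described in the type-safety bullet can be sketched as follows. `ExampleProvider` and the minimal `LoggerLike` interface are hypothetical stand-ins; in the plugin the real type is Backstage's `LoggerService` from `@backstage/backend-plugin-api`:

```typescript
// Minimal stand-in for the debug surface of Backstage's LoggerService.
interface LoggerLike {
  debug(message: string, meta?: Record<string, unknown>): void;
}

// Hypothetical provider showing the `logger?:` pattern with optional
// chaining: when no logger is injected, the debug calls become no-ops
// instead of throwing on an undefined logger.
class ExampleProvider {
  constructor(private readonly logger?: LoggerLike) {}

  sendMessage(body: string): void {
    this.logger?.debug('LLM request', { provider: 'example', body });
    // ...the actual provider SDK call would go here...
  }
}
```

This is why the factory-injected logger can stay optional: providers constructed without one (e.g. in older call sites or minimal tests) keep working, and debug output appears only when a logger is wired in.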
All changes verified -- TypeScript compiles clean, full test suite passes, API report regenerated, and I tested the Gemini + GitHub MCP tool call flow end-to-end.
evanlankveld pushed a commit to evanlankveld/community-plugins that referenced this pull request on Apr 28, 2026:
* feat(mcp-chat): allow debugging LLM calls
* fix: changeset wording
* fix: missing api extractor run
* fix: simplify
* fix: tsc warns
* fix: tsc
* fix: update report
* fix: log llm errors
* feat: add safety rails and comprehensive debug logging for LLM providers (see the squashed commit message above for details)

---------

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>
Signed-off-by: Avik Kundu <47265560+Lucifergene@users.noreply.github.com>
Co-authored-by: Avik Kundu <47265560+Lucifergene@users.noreply.github.com>
Signed-off-by: Emiel van Lankveld <evanlankveld@bol.com>
Hey, I just made a Pull Request!
This PR adds support for debugging LLM calls without requiring external tooling such as LangSmith and friends. We simply need to enable debug logging for the mcp-chat plugin as described in the README. The screenshot below shows the bulk payloads exchanged with the LLM (here, Gemini through our internal OpenAI-compatible gateway); between hooks, we have the provider name. I hope you'll find this useful :) this helped me in #7881.
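For reference, gating this on Backstage's standard logger-override config (which the review above names as `backend.logger.overrides`) might look roughly like the sketch below. The matcher key and the plugin id are assumptions for illustration; the plugin README is the authoritative source for the exact snippet:

```yaml
# app-config.yaml (sketch; matcher key and plugin id are assumptions)
backend:
  logger:
    overrides:
      - matchers:
          plugin: mcp-chat
        level: debug
```

With an override like this, only the mcp-chat logger is raised to debug level, so the rest of the backend keeps its normal log volume.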
✔️ Checklist
Signed-off-by line in the message. (more info)