
feat(mcp-chat): allow debugging LLM calls #7902

Merged

Lucifergene merged 9 commits into backstage:main from michael-todorovic:feat/mcp-llm-debug on Mar 29, 2026

Conversation

@michael-todorovic (Contributor) commented Mar 3, 2026

Hey, I just made a Pull Request!

This PR adds support for debugging LLM calls without requiring external tooling such as LangSmith and friends. You simply need to enable debug logging for the mcp-chat plugin as described in the README. The screenshot below shows the raw payloads exchanged with the LLM (here, Gemini through our internal OpenAI-compatible gateway); the provider name appears in square brackets.

[Screenshot: debug log output showing the request/response payloads sent to the LLM provider]

I hope you'll find this useful :) It helped me investigate #7881.
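For reference, gating this on the plugin's debug level would presumably be done via Backstage's `backend.logger.overrides` config (the path mentioned in the review below). The matcher shape in this sketch is an assumption; treat the plugin README as authoritative:

```yaml
# app-config.yaml (sketch; the matcher keys are an assumption, see the plugin README)
backend:
  logger:
    overrides:
      - matchers:
          plugin: mcp-chat
        level: debug
```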

✔️ Checklist

  • A changeset describing the change and affected packages. (more info)
  • Added or updated documentation
  • Screenshots attached (for UI changes)
  • All your commits have a Signed-off-by line in the message. (more info)

backstage-goalie (bot) commented Mar 3, 2026

Missing Changesets

The following package(s) are changed by this PR but do not have a changeset:

  • @backstage-community/plugin-mcp-chat

See CONTRIBUTING.md for more information about how to add changesets.

Changed Packages

| Package Name | Package Path | Changeset Bump | Current Version |
| --- | --- | --- | --- |
| @backstage-community/plugin-mcp-chat-backend | workspaces/mcp-chat/plugins/mcp-chat-backend | minor | v0.7.0 |
| @backstage-community/plugin-mcp-chat | workspaces/mcp-chat/plugins/mcp-chat | none | v0.5.0 |

michael-todorovic and others added 9 commits March 29, 2026 21:15
Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>
Add payload truncation (4KB limit) to prevent log explosion, fix logger
type from `any` to `LoggerService` with optional chaining for graceful
degradation, and extend debug logging to Gemini and Ollama providers
which bypass the base makeRequest() by using their SDKs directly.

- Add truncateForLogging helper to base LLMProvider class
- Apply truncation to all debug log sites across providers
- Change logger type to optional LoggerService in types.ts and base-provider.ts
- Add debug logging to Gemini provider (request/response/error/truncation)
- Add debug logging to Ollama provider (request/response/error)
- Include error details in OpenAI Responses provider error logs
- Add security warning to all three mirrored READMEs
- Standardize test mocks to use mockServices.logger.mock()
- Add base-provider.test.ts with truncation and logging tests
- Regenerate API report to reflect type changes

Signed-off-by: Avik Kundu <47265560+Lucifergene@users.noreply.github.com>
@Lucifergene (Contributor) left a comment

Hey @michael-todorovic,

Solid feature -- debug logging for LLM calls is a real gap in observability for this plugin, and this addresses it well. The architecture of injecting the logger through the factory pattern and gating on debug level via Backstage's standard backend.logger.overrides config is clean.

During review I noticed a few things I went ahead and fixed in a follow-up commit (d8aa03de1):

  • Security: Raw payloads were logged without bounds -- added a truncateForLogging helper that caps at 4KB to prevent log explosion and reduce sensitive data exposure. Added a security warning to the READMEs as well.
  • Type safety: Changed logger: any to logger?: LoggerService with optional chaining, so logging degrades gracefully when no logger is injected.
  • Gemini & Ollama coverage: These providers use their SDKs directly and bypass makeRequest(), so the debug logging wasn't reaching them. Added equivalent request/response/error logging in their sendMessage() methods.
  • Test coverage: Standardized all test mocks to mockServices.logger.mock(), and added base-provider.test.ts with unit tests for the truncation logic and logging integration (429 tests passing).

All changes verified -- TypeScript compiles clean, full test suite passes, API report regenerated, and I tested the Gemini + GitHub MCP tool call flow end-to-end.
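The truncation and optional-logger pattern described above might look roughly like the sketch below. Names are illustrative only; `LoggerService` here is a minimal stand-in for Backstage's interface, and the real helper lives in the plugin's base provider class:

```typescript
// Minimal stand-in for Backstage's LoggerService interface (illustration only)
interface LoggerService {
  debug(message: string): void;
}

// 4KB cap on logged payloads, as described in the follow-up commit
const MAX_LOG_PAYLOAD_BYTES = 4 * 1024;

function truncateForLogging(payload: unknown): string {
  const text = typeof payload === 'string' ? payload : JSON.stringify(payload);
  if (text.length <= MAX_LOG_PAYLOAD_BYTES) {
    return text;
  }
  const omitted = text.length - MAX_LOG_PAYLOAD_BYTES;
  return `${text.slice(0, MAX_LOG_PAYLOAD_BYTES)}... [truncated ${omitted} chars]`;
}

class ExampleProvider {
  // Logger is optional: providers constructed without one simply skip debug output
  constructor(private readonly logger?: LoggerService) {}

  sendMessage(body: object): void {
    // Optional chaining makes this a no-op when no logger was injected
    this.logger?.debug(`LLM request: ${truncateForLogging(body)}`);
  }
}
```

The same truncation call would wrap every debug log site, including the Gemini and Ollama `sendMessage()` paths that bypass `makeRequest()`.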

@Lucifergene Lucifergene merged commit a81325a into backstage:main Mar 29, 2026
12 checks passed
@Lucifergene Lucifergene linked an issue Mar 31, 2026 that may be closed by this pull request
evanlankveld pushed a commit to evanlankveld/community-plugins that referenced this pull request Apr 28, 2026
* feat(mcp-chat): allow debugging LLM calls

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* fix: changeset wording

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* fix: missing api extractor run

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* fix: simplify

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* fix: tsc warns

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* fix: tsc

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* fix: update report

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* fix: log llm errors

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>

* feat: add safety rails and comprehensive debug logging for LLM providers

Add payload truncation (4KB limit) to prevent log explosion, fix logger
type from `any` to `LoggerService` with optional chaining for graceful
degradation, and extend debug logging to Gemini and Ollama providers
which bypass the base makeRequest() by using their SDKs directly.

- Add truncateForLogging helper to base LLMProvider class
- Apply truncation to all debug log sites across providers
- Change logger type to optional LoggerService in types.ts and base-provider.ts
- Add debug logging to Gemini provider (request/response/error/truncation)
- Add debug logging to Ollama provider (request/response/error)
- Include error details in OpenAI Responses provider error logs
- Add security warning to all three mirrored READMEs
- Standardize test mocks to use mockServices.logger.mock()
- Add base-provider.test.ts with truncation and logging tests
- Regenerate API report to reflect type changes

Signed-off-by: Avik Kundu <47265560+Lucifergene@users.noreply.github.com>

---------

Signed-off-by: Michael Todorovic <michael.todorovic@outlook.com>
Signed-off-by: Avik Kundu <47265560+Lucifergene@users.noreply.github.com>
Co-authored-by: Avik Kundu <47265560+Lucifergene@users.noreply.github.com>
Signed-off-by: Emiel van Lankveld <evanlankveld@bol.com>


Successfully merging this pull request may close these issues.

🐛 mcp-chat: Output incomplete when finish_reason!=stop
