
feat: route all @mentions (@copilot, @lucidia, @blackboxprogramming, @ollama) to local Ollama#10

Open
Copilot wants to merge 4 commits into main from copilot/direct-requests-to-ollama

Conversation


Copilot AI commented Mar 3, 2026

The repo previously had no runnable application. The requirement: every @copilot, @lucidia, @blackboxprogramming, and @ollama mention must resolve to a local Ollama instance, with zero dependency on external AI providers.

Changes

ollama.js — routing + streaming client

  • OLLAMA_HANDLES — canonical list of recognised handles
  • parseHandle(text) — case-insensitive, strips trailing dots (@blackboxprogramming.), returns { handle, prompt } with the @handle removed from the prompt sent to Ollama
  • ollamaChat(...) — streams directly to /api/chat on the local Ollama HTTP API; no external AI SDK or provider
parseHandle('@copilot. write a sort function')
// → { handle: 'copilot', prompt: 'write a sort function' }

parseHandle('@blackboxprogramming list algorithms')
// → { handle: 'blackboxprogramming', prompt: 'list algorithms' }
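A minimal sketch of how parseHandle could implement the behavior above (OLLAMA_HANDLES and parseHandle are the exported names from the PR; the regex and the null return for unrecognized input are assumptions):

```javascript
// Sketch only: the names come from ollama.js, the implementation is assumed.
const OLLAMA_HANDLES = ['copilot', 'lucidia', 'blackboxprogramming', 'ollama'];

function parseHandle(text) {
  // Match a leading @handle, case-insensitively, tolerating a trailing dot.
  const match = /^@([a-z]+)\.?\s*/i.exec(text.trim());
  if (!match) return null;
  const handle = match[1].toLowerCase();
  if (!OLLAMA_HANDLES.includes(handle)) return null;
  // Remove the @handle so only the bare prompt is sent to Ollama.
  return { handle, prompt: text.trim().slice(match[0].length) };
}
```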

index.html — chat UI

  • Configurable Ollama base URL + model (defaults: http://localhost:11434, llama3)
  • Live online/offline badge via /api/tags; model picker auto-populated from available local models
  • Reply bubbles show @handle → ollama tag so routing is always visible
  • Streaming responses rendered token-by-token; conversation history maintained for context
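Two of the UI behaviors above reduce to small pure helpers. A sketch, assuming the documented Ollama response shapes ({ models: [...] } from GET /api/tags, newline-delimited JSON from /api/chat); the helper names listModels and extractTokens are illustrative, not from the PR:

```javascript
// Populate the model picker from GET /api/tags; the response shape
// { models: [{ name, ... }] } is per the Ollama API.
function listModels(tagsResponse) {
  return (tagsResponse.models ?? []).map((m) => m.name);
}

// Render a streamed /api/chat response token-by-token. Ollama streams
// newline-delimited JSON objects whose message.content carries each token.
function extractTokens(ndjsonChunk) {
  return ndjsonChunk
    .split('\n')
    .filter((line) => line.trim())
    .map((line) => JSON.parse(line).message?.content ?? '')
    .filter((token) => token);
}

// Usage against a live server (base URL is the PR's default):
//   const tags = await (await fetch('http://localhost:11434/api/tags')).json();
//   const models = listModels(tags); // success here doubles as the online check
```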

ollama.test.js — unit tests

  • 16 tests covering handle detection, trailing-dot stripping, case-insensitivity, unknown handles, and edge cases (empty string, handle-only input)
Original prompt

I want you to verify if my systems are also being used. When I enter @copilot., @lucidia, or @blackboxprogramming., all requests should be directed to Ollama without depending on any other services.

Combine everything. I'm exhausted by this. What's the point of creating files if none of you utilize them? Oh, I can't review that. Thanks for asking. These are my files. Use my hardware and send all inquiries directly to it. I'm done with ChatGPT, Copilot, Claude—I'm over it. Bring Ollama in. I don't care how bad the code is. I'm tired of all of you accessing my computer and restricting my access. Just get Ollama here now. When I say @ollama, I mean to direct it to Ollama. I DON'T WANT TO RELY ON ANY PROVIDER when I type @blackboxprogramming. Just send it to my machines. You're all on my local server, on my private network anyway.

Get it working E2E, I'm tired of it.



Co-authored-by: blackboxprogramming <118287761+blackboxprogramming@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Direct all requests to Ollama without external dependencies" to "feat: route all @mentions (@copilot, @lucidia, @blackboxprogramming, @ollama) to local Ollama" on Mar 3, 2026
@blackboxprogramming blackboxprogramming marked this pull request as ready for review March 6, 2026 01:58
Copilot AI review requested due to automatic review settings March 6, 2026 01:58
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@github-advanced-security

This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.


Copilot AI left a comment


Pull request overview

This PR adds a complete local Ollama chat integration to what was previously a repository with no runnable application. All @mentions (@copilot, @lucidia, @blackboxprogramming, @ollama) are routed to a local Ollama instance with zero dependency on external AI providers.

Changes:

  • New ollama.js module with handle parsing (parseHandle), recognized handles list (OLLAMA_HANDLES), and a streaming chat client (ollamaChat) targeting the local Ollama HTTP API.
  • New index.html chat UI with configurable Ollama base URL/model, live online/offline status badge, auto-populated model picker, streaming token-by-token response rendering, and conversation history.
  • New ollama.test.js with 16 unit tests covering handle detection, trailing-dot stripping, case-insensitivity, unknown handles, and edge cases.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

ollama.js: core routing module with the canonical handle list, parseHandle() for stripping @mentions, and the ollamaChat() streaming client for the local Ollama HTTP API
index.html: full chat UI with settings bar, model picker, streaming responses, conversation history, and brand-compliant styling
ollama.test.js: 16 unit tests for OLLAMA_HANDLES and parseHandle() covering all handles, trailing dots, case-insensitivity, and edge cases


blackboxprogramming and others added 2 commits March 5, 2026 20:14
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>