
# conkernelclient

## Background

Jupyter’s `KernelClient` is designed around a simple request-reply pattern: you send one message on the shell channel, wait for its reply, then send the next. This works fine for a single-threaded notebook, but it falls apart when you need concurrent execution: running multiple cells in parallel, say, or letting an LLM tool loop fire off code while a long-running computation is still in flight. The underlying ZMQ socket isn’t safe to share across tasks, and there’s no built-in mechanism to route replies back to the correct caller when multiple requests are outstanding.

conkernelclient solves this with `ConKernelClient`, a drop-in replacement for `AsyncKernelClient` that makes concurrent `execute()` calls safe. It patches `Session.send` to synchronise with the ZMQ I/O thread (preventing a race where two sends interleave), and spins up a dedicated reader task on the shell channel that demultiplexes incoming replies by message ID. Each `execute(..., reply=True)` call gets its own `asyncio.Queue`, so multiple coroutines can await their replies independently without interfering with each other.
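The routing idea can be sketched in plain asyncio. This is a toy illustration, not conkernelclient’s actual implementation: every name below is made up, and a simple queue stands in for the ZMQ shell socket.

```python
import asyncio, itertools

class DemuxClient:
    """Toy sketch of reply demultiplexing: one reader task drains the
    shared channel and delivers each reply to the queue registered
    under its parent message ID, so any number of callers can await
    their own replies concurrently."""
    def __init__(self):
        self.channel = asyncio.Queue()   # stands in for the ZMQ shell socket
        self.pending = {}                # msg_id -> per-request asyncio.Queue
        self.ids = itertools.count()
        self.reader = asyncio.create_task(self._read_loop())

    async def _read_loop(self):
        while True:
            msg = await self.channel.get()
            q = self.pending.pop(msg['parent_msg_id'], None)
            if q is not None:
                q.put_nowait(msg)        # route to the caller that sent it

    async def request(self, code):
        msg_id = f'msg-{next(self.ids)}'
        self.pending[msg_id] = q = asyncio.Queue(maxsize=1)
        # "send" the request; here a fake kernel echoes a reply straight back
        self.channel.put_nowait({'parent_msg_id': msg_id, 'code': code, 'status': 'ok'})
        return await q.get()

async def main():
    c = DemuxClient()
    a, b = await asyncio.gather(c.request('x=2'), c.request('y=3'))
    c.reader.cancel()
    return a, b

a, b = asyncio.run(main())
print(a['code'], b['code'])  # each caller received its own reply
```

Because the reader is the only task touching the channel, callers never race on the socket; they each block on a private queue instead.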

## Installation

Install from PyPI:

```sh
$ pip install conkernelclient
```

## How to use

```python
from conkernelclient import *
```

The main entry point is `ConKernelManager`, a drop-in replacement for `AsyncKernelManager` that creates `ConKernelClient` instances. Start a kernel and connect a client in the usual way:

```python
import asyncio
from jupyter_client.session import Session

km = ConKernelManager(session=Session(key=b'x'))
await km.start_kernel()
kc = await km.client().start_channels()
await kc.is_alive()
```

```
True
```

Once connected, `execute()` works like the standard client. Pass `reply=True` to await the shell reply, or `reply=False` (the default) to fire and forget, collecting results later via `get_pubs`:

```python
r = await kc.execute('2+1', timeout=1, reply=True)
r['content']['status']
```

```
'ok'
```

The key feature is safe concurrent execution. Multiple `execute(..., reply=True)` calls can be outstanding simultaneously; each gets its own `asyncio.Queue`, and a background reader task routes replies by message ID:

```python
from fastcore.test import test_eq

a = kc.execute('x=2', reply=True)
b = kc.execute('y=3', reply=True)
r = await asyncio.wait_for(asyncio.gather(a, b), timeout=2)
test_eq(len(r), 2)
r[0]['parent_header']['msg_id']
```

```
'dab23f68-96c28dd9c776844176afdff1_66028_2'
```

Both replies arrive independently, each routed to the correct caller. Without `ConKernelClient`, the second `execute` would either block waiting for the first to finish, or the replies would get crossed.
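To see how replies get crossed without routing, here is a toy pure-asyncio simulation (no real kernel, all names invented) of the naive read-the-next-message approach when replies arrive out of order:

```python
import asyncio

async def fake_kernel(inbox, shell):
    """Stand-in for a kernel: replies land on the shared channel when
    each computation finishes, which need not match send order."""
    async def handle(msg_id, secs):
        await asyncio.sleep(secs)
        shell.put_nowait({'parent_msg_id': msg_id, 'status': 'ok'})
    while True:
        msg_id, secs = await inbox.get()
        asyncio.create_task(handle(msg_id, secs))

async def naive_request(inbox, shell, msg_id, secs):
    # Classic request-reply: send, then read the *next* message on the
    # shared channel and assume it is ours. No routing by msg_id.
    inbox.put_nowait((msg_id, secs))
    return await shell.get()

async def main():
    inbox, shell = asyncio.Queue(), asyncio.Queue()
    kernel = asyncio.create_task(fake_kernel(inbox, shell))
    # 'slow' finishes after 'fast', so the slow caller reads the wrong reply
    slow, fast = await asyncio.gather(
        naive_request(inbox, shell, 'slow', 0.02),
        naive_request(inbox, shell, 'fast', 0.01))
    kernel.cancel()
    return slow, fast

slow, fast = asyncio.run(main())
print(slow['parent_msg_id'], fast['parent_msg_id'])  # crossed: fast slow
```

The caller that sent `'slow'` wakes up holding the reply to `'fast'`, and vice versa; demultiplexing by parent message ID is exactly what prevents this.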

As usual, we clean up when we’re done:

```python
if await km.is_alive():
    kc.stop_channels()
    await km.shutdown_kernel()
```