Codecov Report ❌ Patch coverage is

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master    #1248      +/-   ##
==========================================
+ Coverage   82.78%   83.79%    +1.00%
==========================================
  Files          32       28        -4
  Lines        3079    10237     +7158
==========================================
+ Hits         2549     8578     +6029
- Misses        530     1659     +1129
```

☔ View full report in Codecov by Sentry.
Force-pushed 2206cea to e6b20cc
Add decoder limits for HPACK string length, header list size, and allowed table-size updates. Cap accumulated HTTP/2 header-block bytes on client and server paths, deriving server-side decode limits from max_header_bytes and adding targeted regressions for HPACK and both H2 directions.
Require the initial peer SETTINGS frame, propagate advertised header table and header list limits into encoder behavior, and reject unsupported server push explicitly. Teach the high-level client to maintain multiple reusable H2 connections per origin, respect MAX_CONCURRENT_STREAMS, and mark streams above GOAWAY last_stream_id with a targeted error instead of failing the whole connection generically. On the server side, process duplicate peer settings in order and apply outbound encoder constraints from client SETTINGS.
Allow legal request trailers on the server, preserve response trailers on the client, and reject trailer pseudo-headers on both sides. Keep trailer state per stream so response trailers are only published after body EOF, while request trailers become visible to handlers once the request body is fully consumed.
Fail illegal stream-id and connection-only frame placements in the HTTP/2 framer on both read and write paths instead of leaving them to higher-level state machines. Add explicit frame-level regressions for malformed DATA, HEADERS, PRIORITY, RST_STREAM, SETTINGS, PUSH_PROMISE, PING, GOAWAY, and CONTINUATION combinations.
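The placement rules described above can be sketched roughly as follows. The frame type codes come from RFC 9113, but the constant and function names are illustrative, not the PR's actual framer internals:

```julia
# Hedged sketch, not HTTP.jl's real framer code: RFC 9113 frame type codes.
const CONNECTION_ONLY = (0x04, 0x06, 0x07)        # SETTINGS, PING, GOAWAY
const STREAM_ONLY     = (0x00, 0x01, 0x02, 0x03)  # DATA, HEADERS, PRIORITY, RST_STREAM

# Returns nothing when the placement is legal; throws on a protocol violation.
function check_frame_placement(frame_type::UInt8, stream_id::UInt32)
    if frame_type in CONNECTION_ONLY && stream_id != 0
        error("PROTOCOL_ERROR: connection-only frame on stream $stream_id")
    elseif frame_type in STREAM_ONLY && stream_id == 0
        error("PROTOCOL_ERROR: stream-level frame on stream 0")
    end
    return nothing
end
```

Running this check on both read and write paths is what lets the framer reject malformed frames before they ever reach the higher-level state machines.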
Force-pushed e6b20cc to 45f44bc
Add URIs.jl parity for the high-level client API by accepting URI inputs across request/open helpers and constructing _URLParts directly from URI components. Also replace raw Threads.Atomic request-write flags with a private _RequestWriteState helper using @atomic fields so the transport keeps the same coordination behavior without the banned Atomic wrapper type.
Implement multi-level verbose client logging with compact lifecycle output at level 1 plus detailed dumps at levels 2 and 3.
- capture exact HTTP/1 request and response wire bytes via teeing and _ConnReader hooks
- emit best-effort HTTP/2 request and response message dumps
- suppress compressed and other non-text bodies in verbose output
- cover summaries, H1 wire capture, compressed-body suppression, and H2 dumps with regression tests
Introduce a typed request-scoped timeout config shared by request,
HTTP.open, and websocket handshakes.
Keep readtimeout as a deprecated alias that seeds the new timeout
settings while preserving the existing request/header timeout
behavior until deeper transport enforcement lands.
Verification:
- julia --project=. -e 'using Test; using HTTP; using Reseau; _http_windows_ci() = false; include("test/http_client_tests.jl")'
- julia --project=. test/http_websocket_client_tests.jl
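The typed, request-scoped timeout config introduced above might look roughly like the following hypothetical sketch; the field names and defaults here are invented for illustration, only the connect/read/overall timeout concepts come from this PR:

```julia
# Hypothetical sketch of a request-scoped timeout config; field names and
# defaults are illustrative, not the PR's actual type.
Base.@kwdef struct RequestTimeouts
    connect_timeout::Float64 = 60.0    # DNS + TCP + proxy CONNECT + TLS setup
    read_idle_timeout::Float64 = 0.0   # inactivity bound on body reads (0 disables)
    request_timeout::Float64 = 0.0     # overall wall-clock bound (0 disables)
end

# A deprecated readtimeout keyword could seed the new fields like this:
seed_from_readtimeout(rt) = RequestTimeouts(read_idle_timeout = Float64(rt))
```

Keeping the deprecated keyword as a seed for the new struct is what preserves existing request/header timeout behavior until deeper transport enforcement lands.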
Apply connect timeouts per request across HTTP/1, HTTP/2, and
websocket handshakes, including explicit-client calls.
Bound response header waits in the H1 transport, H2 stream header
wait path, and websocket handshake path.
Verification:
- julia --project=. -e 'using Test; using HTTP; using Reseau; _http_windows_ci() = false; include("test/http_client_tests.jl")'
- julia --project=. test/http_client_transport_tests.jl
- julia --project=. test/http_websocket_client_tests.jl
- julia --project=. test/http2_client_tests.jl
Make readtimeout a true deprecated alias for read inactivity and
separate it cleanly from overall request_timeout semantics.
Refresh H1 socket deadlines on body reads and request writes, and
bound H2 stream waits plus flow-control stalls with read/write
idle deadlines.
Verification:
- julia --project=. -e 'using Test; using HTTP; using Reseau; _http_windows_ci() = false; include("test/http_client_tests.jl")'
- julia --project=. test/http_client_transport_tests.jl
- julia --project=. test/http_websocket_client_tests.jl
- julia --project=. test/http2_client_tests.jl
Update the high-level request docstring so connect_timeout matches the implemented behavior across DNS, TCP, proxy CONNECT, TLS, and HTTP/2 setup.
Verification:
- julia --project=. test/runtests.jl
Refresh the client, server, protocol, and migration docs so they match the new timeout model and deprecation story. Call out the richer 2.0 timeout surface compared with HTTP.jl 1.x and the old master-era readtimeout/connect_timeout workflow.
Verification:
- julia --project=docs docs/make.jl
[codex] Follow-up on the TLS trim path: TLS is still included in the trim frontier. HTTP now builds internal TLS configs through a full positional constructor probe, so it no longer fails on the keyword constructor.
[codex] CI follow-up: I added a compatibility fallback so the current registered Reseau keeps using the keyword constructor and ordinary TLS/WebSocket tests do not fail before Reseau #102 is merged/released. When the positional constructor is available, the fallback uses it instead.
Read ordinary server request bodies before invoking request handlers so handlers receive a ready in-memory body. Keep incremental body reads in stream handlers, and simplify the affected docs examples to use JSON's direct struct support and req.body.
Remove the optional frontier trim gate and keep the trim verifier strict for all included workloads. Add high-level H1 TLS request trim coverage, include TLS/gzip/deflate in the precompile workload, and broaden WebSocket plus HTTP/2 server branch coverage.
Local verification:
- JULIA_NUM_THREADS=2 julia --project -e 'using Pkg; Pkg.test()'
…rapping, status field, parse_multipart_form(req), default UA, doc clarifications

Closes a cluster of friction points surfaced by a multi-persona (Python / JS / Rust / Go / curl) developer-experience review of the 2.0 branch:
- Add `register!(handler, router, [method,] path)` shims so the documented `do`-block syntax actually works.
- Accept `listenany=true` on `WebSockets.listen!`/`serve!` to bind to an ephemeral port (matches `HTTP.serve!`).
- Wrap `Reseau.IOPoll.DeadlineExceededError`, `HostResolvers.DialTimeoutError`, and `TLS.TLSHandshakeTimeoutError` as `HTTP.TimeoutError` at the public `request` / `HTTP.open` / `WebSockets.open` boundaries, matching the migration guide promise.
- Add `status::Int16` field to `StatusError` so callers can write `e.status` instead of `e.response.status`.
- Add `parse_multipart_form(req)` server-side overload + test.
- Send a default `User-Agent: HTTP.jl/<version>` when the caller does not override it.
- Document the `String(response.body)` body-aliasing gotcha and recommend `String(copy(response.body))` for round-trippable reads.
- Document the JSON.jl pattern (`body = JSON.json(payload)` plus an explicit `Content-Type: application/json` header) in the client guide, README, and `HTTP.request` docstring.
- Document that `HTTP.open(f, ...)` returns the final `HTTP.Response`, not the value returned by `f`.
- Note that `HTTP.download` still resolves (to `Base.download`) but bypasses HTTP.jl machinery; updated migration guide accordingly.
- Add `using HTTP` (and `using JSON` / `using Downloads` where relevant) to runnable examples in the docs and README.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
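As one example from the list above, the default User-Agent behavior can be sketched like this; the helper name and concrete version string are assumptions, only the `User-Agent: HTTP.jl/<version>` format comes from the commit:

```julia
# Hedged sketch of the default User-Agent behavior; the real helper name and
# version lookup in the PR may differ.
const _DEFAULT_UA = "HTTP.jl/2.0.0"  # illustrative version string

# Append a default User-Agent only when the caller did not set one
# (header names compare case-insensitively).
function with_default_user_agent(headers::Vector{Pair{String,String}})
    any(p -> lowercase(first(p)) == "user-agent", headers) && return headers
    return [headers; "User-Agent" => _DEFAULT_UA]
end
```

A caller-supplied `user-agent` header, in any casing, suppresses the default entirely.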
Cumulative effect on /json c=64 H/1.1: 16,300 req/s (the v2 starting
point) → 88,000 req/s (~2.93× v1.x baseline). H/1.1 hits parity-or-better
(1.57×–2.93×) on every cell vs HTTP.jl 1.x; H/2 is comfortably above 1×
for small/medium responses, ~0.86× v1 H/1.1 for /large at sustained
concurrency (within typical production-server H/2-vs-H/1.1 overhead).
Server-side fixes that materially moved throughput, ranked by impact:
1. Batch the entire HTTP/1.1 response head into a single socket write
(status line + headers + blank CRLF). Reseau's TCP.Conn.write does
not buffer internally, so the previous `print(io, x, y, z, …)`
pattern issued ~20 syscalls per response head. This was the
dominant per-request cost. (+5x on /json c=64.)
2. Skip per-request deadline kernel calls when serve! is called with no
timeouts (the default). Two useless syscalls per request → 0.
3. Zero-copy String body wrapping in `_compat_body_arg` — the body is
referenced via Base.codeunits(s) instead of a fresh
Vector{UInt8}(codeunits(s)) memcpy on every Response construction.
String immutability makes this safe; the body code paths only read
via copyto!/unsafe_write. (+200% on /large H/1.1, +26% on /large H/2.)
4. HTTP/2 batched DATA-frame emission with stamped frame headers — all
DATA frames the current peer flow-control window allows go in one
contiguous buffer with one socket write per batch.
5. HTTP/2 HEADERS + first DATA batch in one socket write under one
write_lock acquisition (HPACK encoder mutation and wire emission
stay on the same lock for ordering).
6. _readline_crlf fast path — `unsafe_string(ptr, len)` directly from
the connection buffer when the line fits in the current fill,
avoiding the per-line Vector{UInt8} alloc + append! chain.
7. HPACK static-table O(1) lookup — pre-built `Dict` for static-table
exact (name, value) and name-only lookups in `_find_exact_index` /
`_find_name_index`. The static table has 61 entries scanned twice
per encoded header field; this drops to a single hash lookup.
(+5–6% on /json H/2 c=64.)
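Item 7's dictionary lookup can be illustrated with a small excerpt of the RFC 7541 static table (indices 1–3 of the 61 entries); the constant and function names here are illustrative, not the PR's `_find_exact_index`/`_find_name_index` internals:

```julia
# Excerpt of the RFC 7541 Appendix A static table (entries 1–3 of 61).
const STATIC_EXCERPT = [(":authority", ""), (":method", "GET"), (":method", "POST")]

# Pre-built hash maps replace the two linear scans per encoded header field.
const EXACT_INDEX = Dict((n, v) => i for (i, (n, v)) in enumerate(STATIC_EXCERPT))
const NAME_INDEX = Dict{String,Int}()
for (i, (n, _)) in enumerate(STATIC_EXCERPT)
    get!(NAME_INDEX, n, i)  # keep the first index, matching linear-scan behavior
end

find_exact(name, value) = get(EXACT_INDEX, (name, value), 0)  # 0 means "not found"
find_name(name)         = get(NAME_INDEX, name, 0)
```

Building the maps once at load time turns each encoded header field's two O(61) scans into two O(1) hash lookups.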
Other quality fixes folded in:
- `_H2_SERVER_MAX_DATA_FRAME_SIZE` bumped from 16 KiB → 64 KiB; we
still respect peer's SETTINGS_MAX_FRAME_SIZE at framing time.
- HTTP.Headers Dict-indexable: `Base.getindex`, `Base.get`, and
`Base.haskey` now work case-insensitively, matching the `r.headers["..."]`
/ `get(r.headers, ..., default)` / `haskey(r.headers, ...)` idioms
developers from Python/JS/Go expect.
- HPACK encoder pre-sizes its output Vector via `sizehint!` based on
name+value lengths, avoiding the doubling-grow chain.
- `_write_data_frames_h2_server!` and `_write_h2_response_body_h2_server!`
no longer attempt to drain additional flow-control reservations
before writing the first batch's bytes — the previous "build more
then write" inner loop deadlocked against tests/peers that gate
further DATA on a WINDOW_UPDATE in response to the first emitted
bytes.
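The case-insensitive lookup idiom above can be sketched as follows over the usual vector-of-pairs header representation; `header` here is an illustrative helper, not the PR's actual `Base.getindex`/`Base.get` methods:

```julia
# Hedged sketch: case-insensitive header lookup over name => value pairs
# (HTTP field names compare case-insensitively per RFC 9110).
function header(h::Vector{Pair{String,String}}, name::AbstractString, default=nothing)
    i = findfirst(p -> lowercase(first(p)) == lowercase(name), h)
    return i === nothing ? default : last(h[i])
end

headers = ["Content-Type" => "text/plain", "X-Request-Id" => "42"]
header(headers, "content-type")   # matches despite the casing difference
header(headers, "accept", "*/*")  # falls back to the default
```

Wiring the same comparison into `getindex`, `get`, and `haskey` is what makes the `r.headers["..."]` idioms work as developers coming from Python/JS/Go expect.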
Bench suite (`bench/`) is included for reproduction:
- `bench/server.jl` — matched serve! handler used against both v1.x
and v2 projects; three endpoints (/tiny, /json, /large).
- `bench/all.sh`, `bench/all_repeated.sh` — drive h2load matrix
(3 endpoints × 2 protocols × 3 concurrencies, optional 3-trial
median).
- `bench/parse.jl`, `bench/parse_avg.jl` — h2load output → CSV +
Markdown summary.
- `bench/REPORT.md`, `bench/PARITY_REPORT.md` — write-ups of the
optimization rounds and which fixes mattered.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Overview
This is the HTTP.jl 2.0 overhaul PR. The branch moves HTTP.jl onto the new Reseau-backed transport/TLS/resolver substrate while keeping HTTP.jl responsible for the user-facing HTTP stack: request/response types, client and server APIs, HTTP/1.1 and HTTP/2 protocol handling, WebSockets, SSE, forms, cookies, redirects, retries, proxies, docs, and compatibility guidance.
The sibling Reseau work has now landed and Reseau is publicly registered, so this PR no longer depends on a local/sibling branch or a `[sources]` override. HTTP.jl now declares a normal registered compat lower bound of `Reseau = "1.1.1"`, which is the 1.x release line containing the TLS constructor/API shape HTTP.jl 2.0 uses internally.

Goals

- Keep the familiar `HTTP.request`, `HTTP.get`, `HTTP.open`, `HTTP.serve!`, and `HTTP.listen!` entry points, while exposing clearer `Client`, `Transport`, `Server`, `Stream`, `Request`, `Response`, and `Headers` building blocks.

Major Changes
- Client: `retry_if`, redirect policy, tracing, verbose logging, and granular timeout handling.
- Server: `serve!`/`listen!`, request handlers, stream handlers, graceful close, active-request draining, keep-alive handling, `servefile`, `fileserver`, `servecontent`, SSE helpers, body buffering for ordinary handlers, and incremental stream IO for lower-level handlers.
- WebSockets: `HTTP.WebSockets`, with client/server handshakes, frame codec coverage, subprotocol negotiation, origin checks, proxy support, close/error behavior, and TLS coverage.
- Performance: `bench/` plus the latest server-side performance pass: response-head batching, reduced deadline syscall churn, HPACK/static-table lookup improvements, and HTTP/2 DATA batching/writev paths.

Breaking and Migration Notes
- Registered `Reseau = "1.1.1"` compat; there is no branch-local Reseau source override.
- The `HTTP.download` helper is gone. Users should prefer `Downloads.download` from Julia's stdlib for download-to-file workflows. The migration guide also shows an `HTTP.request(...; response_stream=io)` pattern for users who want to stay on HTTP.jl APIs.
- WebSockets live under `HTTP.WebSockets`; `HTTP.open` is the HTTP request streaming API, not the WebSocket entry point.
- Callers streaming via `response_stream` own that IO object, and ordinary responses expose collected body bytes.

Validation
Recent local validation while driving this branch:
- git diff --check
- julia --project=. test/http2_server_tests.jl
- julia --project=. test/http_integration_tests.jl
- julia --project=. test/http_client_tests.jl
- julia --project=. test/http_server_http1_tests.jl
- JULIA_NUM_THREADS=2 julia --project=. test/runtests.jl
- docs build in a temporary environment from `docs/Project.toml`, then `julia --project=<tmp-docs-env> docs/make.jl`

The trim-compile tests keep the verifier strict. Some task-backed trimmed executables are compiled but not executed because they currently hang in the Julia runtime; that is tracked as an upstream Julia follow-up rather than hidden behind verifier-budget tolerance.
Current Status
This PR is intended to be the release-candidate shape for HTTP.jl 2.0. The latest server performance pass exposed a few correctness/stability edges while CI was running: an HTTP/2 response flow-control deadlock, two Windows-sensitive raw socket timing tests, and an HTTP/2 batched-write text-body path that needed to handle `codeunits(String)` as an `AbstractVector{UInt8}`. The branch now includes targeted fixes and regression coverage for those cases, and CI is being driven on the updated head.