Background
While working on the Torrust Tracker Demo we discovered two gaps related to IPv4 vs IPv6 traffic visibility in the tracker's Prometheus metrics. Both gaps were surfaced when rebuilding the Grafana dashboards for the new multi-protocol dual-stack deployment (torrust-tracker-demo#6).
All tracker services in the demo bind to [::] (the IPv6 wildcard), which on Linux with the default kernel setting (net.ipv6.bindv6only = 0) causes a single dual-stack socket to accept both IPv4 and IPv6 clients. IPv4 clients are transparently handled by the kernel via IPv4-mapped IPv6 addresses (::ffff:<ipv4>), defined in RFC 4291 §2.5.5.2.
The binding decision was made intentionally and is documented in detail in ADR-001 in the demo repo, including the experimental observation that binding 0.0.0.0:<port> and [::]:port simultaneously fails with EADDRINUSE on a system with net.ipv6.bindv6only = 0 (the Linux default). Whether this is purely an OS configuration issue or also requires tracker changes is one of the open questions this issue aims to answer.
The core problem: there is currently no way to distinguish IPv4 clients from native IPv6 clients in Prometheus metrics, because:
- The existing `server_binding_address_ip_family` label is always `inet6` (it describes the server socket, not the connecting client).
- The existing `server_binding_address_ip_type` label is also a server-side label and is always `plain` (because the server binds to `::`, a plain IPv6 wildcard, not to an IPv4-mapped address). It does not reflect the client's address type.
This issue tracks two related tasks to fix that.
Task 1 — Verify separate IPv4/IPv6 socket bindings
Goal: verify whether the tracker can be configured to bind two instances of the same service on the same port — one to `0.0.0.0:<port>` (IPv4-only) and one to `[::]:<port>` (IPv6-only) — and confirm that the metrics then correctly separate IPv4 and IPv6 traffic.
What this requires
On Linux, net.ipv6.bindv6only defaults to 0, meaning a [::] socket claims port ownership for both address families. Attempting to also bind 0.0.0.0:<same-port> then fails immediately:
```
ERROR UDP TRACKER: panic! (error when building socket)
addr=[::]:6969 err=Address already in use (os error 98)
```
This was observed on a development machine running Ubuntu 25 with net.ipv6.bindv6only = 0 (the Linux default). It is not yet known whether this is purely an OS configuration issue or whether the tracker also needs changes (e.g. setting IPV6_V6ONLY per socket). For separate bindings to work, at minimum one of these must hold:
- The system has `net.ipv6.bindv6only = 1`, or
- The tracker sets the `IPV6_V6ONLY` socket option on IPv6 sockets before binding.
The tracker does not currently set IPV6_V6ONLY. The system-wide change (sysctl -w net.ipv6.bindv6only=1) would affect all applications on the host and is not suitable for the demo server.
What to verify
- When the tracker opens an IPv6 socket, does it (or should it) set `IPV6_V6ONLY = 1` on that socket before calling `bind()`?
- With `IPV6_V6ONLY = 1`, confirm that two UDP tracker instances can coexist on the same port (e.g. both on 6969, one on `0.0.0.0` and one on `[::]`).
- Confirm that `server_binding_address_ip_family` is then `inet` for the IPv4 socket and `inet6` for the IPv6 socket, allowing Grafana panels to be split by address family.
- Confirm that UDP connection IDs issued by the IPv4 socket are NOT visible to the IPv6 socket and vice versa (the separation must be complete to avoid connect/announce cross-socket mismatches, which would cause connection ID errors for dual-stack clients).
Testing note
Changing net.ipv6.bindv6only system-wide on a development machine affects all running services. Testing in a dedicated VM or container is recommended.
Task 2 — Add client address labels to request metrics
Goal: add per-request labels that identify the connecting client's IP address family and type (mirroring the existing server binding labels) so that Grafana dashboards can filter and separate IPv4 and IPv6 traffic without requiring separate socket bindings.
Why this is needed
Issue #1375 introduced server_binding_address_ip_type for the server socket's address type, but this label is not useful for client traffic analysis:
- It is always `plain` in a dual-stack setup (the server binds to `::`, a plain IPv6 wildcard).
- It never reflects the client's address family.
An example metric from the tracker demo illustrates the current state:
```json
{
  "type": "counter",
  "name": "udp_tracker_server_requests_aborted_total",
  "unit": "count",
  "description": "Total number of UDP requests aborted",
  "samples": [
    {
      "value": 1,
      "recorded_at": "2026-03-10T11:45:54.116141470+00:00",
      "labels": [
        { "name": "server_binding_address_ip_family", "value": "inet6" },
        { "name": "server_binding_address_ip_type", "value": "plain" },
        { "name": "server_binding_ip", "value": "::" },
        { "name": "server_binding_port", "value": "6969" },
        { "name": "server_binding_protocol", "value": "udp" }
      ]
    }
  ]
}
```
All of the existing labels are server-side. There is no label for the client's address type. IPv4 clients (::ffff:a.b.c.d) and native IPv6 clients are counted identically under the same inet6 series.
Proposed labels
Add client-side counterparts to the existing server binding labels, following the same naming pattern. The full set of server labels and their client equivalents would be:
| Existing server label | Proposed client label | Notes |
| --- | --- | --- |
| `server_binding_address_ip_family` | `client_address_ip_family` | `inet` or `inet6` |
| `server_binding_address_ip_type` | `client_address_ip_type` | `plain` or `v4_mapped_v6` |
| `server_binding_ip` | (not proposed) | Raw IP — unbounded cardinality |
| `server_binding_port` | (not proposed) | Raw port — unbounded cardinality |
Only client_address_ip_family and client_address_ip_type are proposed. Raw client IP and port must never be used as label values because they are unbounded and would cause a Prometheus cardinality explosion (one time series per unique client address).
The value sets should reuse the existing IpType enum (plain, v4_mapped_v6) and the existing ip-family values (inet, inet6), keeping the label model consistent with the server side.
Cardinality note
Prometheus creates one time series per unique combination of label values. The proposed labels each have a small bounded value set (inet/inet6 and plain/v4_mapped_v6), adding only a constant factor to existing series. Never use raw client IPs or ports as label values — they are unbounded and would cause a cardinality explosion.
Which metrics to instrument
At minimum, all per-request counters that already carry server binding labels should also carry the new client label:
- `udp_tracker_server_requests_accepted_total`
- `udp_tracker_server_requests_received_total`
- `udp_tracker_server_requests_aborted_total`
- `udp_tracker_server_requests_banned_total`
- `udp_tracker_server_responses_sent_total`
- `udp_tracker_server_errors_total`
- `http_tracker_core_requests_received_total`
- (any other per-request counters added in future)
Global/aggregate counters without a request context (swarm_coordination_registry_*, tracker_core_persistent_*) do not need this label.
Where to look in the code
The server-side IP type is already modelled in IpType. The client address is available at the point where each request is dispatched, so the same approach can be applied to derive client_address_ip_type from the client's socket address. The label would be attached when the metric counter is incremented.
Related
- ADR-001 (demo repo) — documents the dual-stack binding decision, including the EADDRINUSE failure and a detailed rationale
- #1375 — introduced server_binding_address_ip_type; the client label was not included
cc @da2ce7