Bug: language_server_linux_x64 memory leak causes system OOM on Remote SSH

Environment
Language server binary: language_server_linux_x64 (Go binary, located in .windsurf-server/bin/.../extensions/windsurf/bin/)
Description
When using Windsurf via Remote SSH, the language_server_linux_x64 process has an unbounded memory leak that grows from ~3.5 GB at startup to 60+ GB within hours, consuming all available RAM and swap, ultimately freezing the entire remote system.
This has been consistently reproducible across multiple days (Feb 11–15, 2026), happening every time the IDE is used for a few hours.
Reproduction
Connect Windsurf to a remote Linux machine via Remote SSH
Open a workspace and use Cascade normally
After several hours of usage, language_server_linux_x64 will have consumed all available RAM
The remote machine becomes completely unresponsive (hard freeze, requires power cycle)
Diagnostic Evidence
A custom system monitor service logging every 60 seconds captured the memory growth leading up to the crash:
TIME       language_server RSS   System Memory Used    Load Average
─────────  ────────────────────  ────────────────────  ────────────
21:11 UTC  33 GB (51% of RAM)    38 GB                 1,038
21:12 UTC  42 GB (66% of RAM)    47 GB                 877
21:13 UTC  58 GB (89% of RAM)    62 GB                 830
21:14 UTC  59 GB (91% of RAM)    62 GB                 838
21:15 UTC  60 GB (92% of RAM)    62 GB                 781
21:17 UTC  60 GB (93% of RAM)    62 GB (swap: 3.3 GB)  702
... SYSTEM FREEZE — required instance stop/start to recover
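The monitor itself is not included in this report; a minimal sketch of the sampling logic, in Python, reading the same /proc fields the table was built from (the PID lookup and log format are illustrative assumptions):

```python
from pathlib import Path

def parse_rss_kib(status_text: str) -> int:
    """Extract VmRSS (resident set size, in KiB) from /proc/<pid>/status text."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # kernel reports the value in kB
    return 0

def load_average(loadavg_text: str) -> float:
    """Extract the 1-minute load average from /proc/loadavg text."""
    return float(loadavg_text.split()[0])

def sample(pid: int) -> tuple[float, float]:
    """Return (RSS in GiB, 1-minute load) for one monitoring tick."""
    status = Path(f"/proc/{pid}/status").read_text()
    loadavg = Path("/proc/loadavg").read_text()
    return parse_rss_kib(status) / (1024 ** 2), load_average(loadavg)

# Main loop (sketch): one log line per minute, as in the table above.
# import time
# while True:
#     rss_gib, load1 = sample(target_pid)
#     print(f"{time.strftime('%H:%M UTC', time.gmtime())}  "
#           f"{rss_gib:.0f} GiB  load {load1:,.0f}")
#     time.sleep(60)
```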
Key observations:
Only language_server_linux_x64 grows — all other Windsurf node processes remain under 2 GB
Memory growth is not linear — it accelerates dramatically (33 GB → 60 GB in 6 minutes)
CPU idle drops to 0.0% as the system thrashes
Load average exceeds 1,000 due to processes blocked on I/O
The system freeze is a hard hang — no OOM killer logs, no kernel panic, journal shows corruption from unclean shutdown
This happened every day across Feb 11–15 with consistent patterns
Previous instance type was r7i.xlarge (4 vCPU, 32 GB); upgraded to r7i.2xlarge (8 vCPU, 64 GB) on the theory that the bottleneck was CPU, but the language server simply consumed the additional memory
What was ruled out:
CPU saturation — occurred with 50% CPU idle; the CPU spike is a consequence of OOM, not the cause
sshd configuration — hardened keepalive settings, MaxSessions, MaxStartups; sshd itself is fine
pam_systemd — disabled to prevent logind timeout; didn't fix the root cause
Network/ENA driver — no errors in dmesg; network adapter is healthy
Other processes — only language_server_linux_x64 shows unbounded growth
Workaround
A watchdog service monitors the language server's RSS every 10 seconds and sends kill -9 when it exceeds 16 GB. Windsurf auto-restarts the language server, which resets the leak. This prevents the system freeze but causes brief periodic reconnections.
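The watchdog described above could look roughly like this in Python (the exact service used here is not shown; the process-name match and 16 GB threshold are taken from the description, everything else is an assumption):

```python
import os
import signal
from pathlib import Path

LIMIT_KIB = 16 * 1024 * 1024  # 16 GiB threshold, as in the workaround above

def find_pids(comm_name: str) -> list[int]:
    """Find PIDs whose command name matches, via /proc/<pid>/comm."""
    pids = []
    for p in Path("/proc").iterdir():
        if p.name.isdigit():
            try:
                if (p / "comm").read_text().strip() == comm_name:
                    pids.append(int(p.name))
            except OSError:
                pass  # process exited while we were scanning
    return pids

def rss_kib(pid: int) -> int:
    """Read VmRSS (KiB) from /proc/<pid>/status."""
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    return 0

def over_limit(rss: int, limit: int = LIMIT_KIB) -> bool:
    return rss > limit

# Watchdog loop (sketch): check every 10 s, SIGKILL on breach; Windsurf
# restarts the server, which resets the leak.
# Note: /proc/<pid>/comm truncates names to 15 chars, so
# "language_server_linux_x64" appears as "language_server".
# import time
# while True:
#     for pid in find_pids("language_server"):
#         if over_limit(rss_kib(pid)):
#             os.kill(pid, signal.SIGKILL)
#     time.sleep(10)
```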
Expected Behavior
language_server_linux_x64 should have stable memory usage and not grow unboundedly over time.
Root Cause Analysis
Investigation of the .codeium/windsurf/ directory revealed:
Directory   Size    Contents
─────────   ──────  ────────────────────────────────────────────────────────
implicit/   36 GB   3,008 .tmp files (index/embedding cache, never cleaned up)
cascade/    8.4 GB  Conversation history .pb and .tmp files
Total       44 GB   For a 2 GB workspace (22x bloat!)
The implicit/ directory accumulates .tmp files across sessions (spanning days) that are never garbage collected
The language server loads these into the Go heap on startup — /proc/PID/smaps shows nearly all memory in [anon: Go: heap]
The largest .tmp files are ~40 MB each, with 3,008 files totaling 36 GB
Deleting .tmp files older than 1 day reduced the cache from 44 GB to 525 MB
The .pb base files (protobuf) are ~40 MB and seem to be the base index; the .tmp files are incremental snapshots that never get consolidated or deleted
Suggested Fix Areas
Implement cache eviction for implicit/*.tmp files — they accumulate unboundedly and are loaded into Go heap memory
Consolidate incremental snapshots — merge .tmp deltas into the base .pb file periodically instead of keeping all snapshots in memory
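The real fix belongs inside the language server, but the eviction half of the suggestion can be illustrated with a simple size-capped policy: keep the newest snapshots up to a byte budget, drop the rest. This is a hypothetical sketch, not the server's actual cache logic:

```python
from pathlib import Path

def evict_to_cap(cache_dir: Path, cap_bytes: int) -> list[Path]:
    """Keep the newest *.tmp snapshots up to cap_bytes; delete the rest.

    Illustrative eviction policy only; a complete fix would also merge
    the surviving .tmp deltas into the base .pb file so the set stays
    small between sessions.
    """
    files = sorted(cache_dir.glob("*.tmp"),
                   key=lambda f: f.stat().st_mtime, reverse=True)
    total, evicted = 0, []
    for f in files:  # newest first
        size = f.stat().st_size
        if total + size > cap_bytes:
            f.unlink()          # oldest snapshots fall past the budget
            evicted.append(f)
        else:
            total += size
    return evicted
```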