Since we are trying to optimise things, it makes sense to have benchmarks that measure whether things actually got faster or slower. My best guess would be to synthesise a large number of files (say 500?), load them with lsp-test, and then fire a representative set of requests/changes at them (e.g. a boatload of hover requests, because that's what emacs sends, plus a little bit of everything else). The results should help us figure out why things like forkOn give performance improvements, as per https://gitlab.haskell.org/ghc/ghc/issues/18224#note_275367. CC @bgamari, @wz1000 and ndmitchell/shake#751
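As a rough sketch of what such a harness could look like: the snippet below drives a server over LSP with lsp-test, opening a batch of synthesised modules and hammering them with hover requests. This is only an illustration under several assumptions — the server command (`ghcide --lsp`), the project directory (`bench/synthetic`), the module names (`Mod1.hs` … `Mod500.hs`), and the hover position are all made up, and the module names (`Language.LSP.Test` vs `Language.Haskell.LSP.Test`) vary across lsp-test versions. It also needs a language server on `$PATH` to actually run.

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main (main) where

import Control.Monad (forM, forM_, replicateM_)
import Language.LSP.Test                 -- runSession, openDoc, getHover
import Language.LSP.Types                -- Position(..)
import Language.LSP.Types.Capabilities (fullCaps)

main :: IO ()
main =
  -- Start the server (command is an assumption) rooted at the directory
  -- containing the synthesised modules.
  runSession "ghcide --lsp" fullCaps "bench/synthetic" $ do
    -- Open every synthesised file; file names are hypothetical.
    docs <- forM [1 :: Int .. 500] $ \i ->
      openDoc ("Mod" <> show i <> ".hs") "haskell"
    -- Hover-heavy workload, mimicking what emacs does: many hover
    -- requests per open document at an arbitrary fixed position.
    forM_ docs $ \doc ->
      replicateM_ 10 (getHover doc (Position 3 5))
```

Wrapping the whole session (or individual request batches) in a wall-clock timer would then give the faster/slower signal we want to compare across changes like the forkOn one.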