This PR replaces the sort implementations with tailor-made ones that strike a balance between run-time, compile-time, and binary-size, yielding run-time and compile-time improvements while regressing binary-size in some configurations.
| name | diff % | speedup |
|------|-------:|--------:|
| slice::sort_large_random | -65.49% | x 2.90 |
| slice::sort_large_strings | -37.75% | x 1.61 |
| slice::sort_medium_random | -47.89% | x 1.92 |
| slice::sort_small_random | 11.11% | x 0.90 |
| slice::sort_unstable_large_random | -47.57% | x 1.91 |
| slice::sort_unstable_large_strings | -25.19% | x 1.34 |
| slice::sort_unstable_medium_random | -22.15% | x 1.28 |
| slice::sort_unstable_small_random | -15.79% | x 1.19 |
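For reference, the two entry points the benchmarks exercise are the stable `slice::sort*` family and the in-place `slice::sort_unstable*` family. A minimal sketch of the semantic difference between them (the values and names below are illustrative, not taken from the PR's benchmarks):

```rust
fn main() {
    // Stable sort: equal keys keep their original relative order,
    // at the cost of potentially allocating auxiliary memory.
    let mut pairs = vec![(1, 'b'), (0, 'x'), (1, 'a')];
    pairs.sort_by_key(|&(k, _)| k);
    assert_eq!(pairs, [(0, 'x'), (1, 'b'), (1, 'a')]);

    // Unstable sort: sorts in place without allocating; elements that
    // compare equal may be reordered relative to each other.
    let mut v = vec![5, 3, 8, 1];
    v.sort_unstable();
    assert_eq!(v, [1, 3, 5, 8]);
}
```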
orlp invented PDQSort and Glidesort. He collaborated with Voultapher on Driftsort.
Driftsort is essentially a successor to Glidesort.
Glidesort had some issues that prevented it from being merged into std; those are addressed in Driftsort. IIRC it had something to do with codegen bloat.
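For readers unfamiliar with the term: "codegen bloat" here refers to monomorphization cost. A hypothetical sketch (not code from Glidesort or Driftsort) of how it adds up: every distinct element type and comparator instantiates a fresh copy of the entire generic sort, so an implementation with a large inlined footprint pays that cost once per instantiation.

```rust
// Hypothetical illustration, not taken from any of the sort crates:
// each call below forces the compiler to monomorphize a separate copy
// of the generic sort machinery.
fn main() {
    let mut a = vec![3u64, 1, 2];
    a.sort(); // instantiation for u64 with the default Ord

    let mut b = vec!["b", "a"];
    b.sort(); // another instantiation, for &str

    let mut c = vec![(1u8, 2u8), (0, 9)];
    c.sort_by(|x, y| x.1.cmp(&y.1)); // yet another, for the tuple plus this closure
}
```

Keeping the per-instantiation code small is one of the levers a std sort can pull, which is presumably why the PR calls out binary-size as part of the trade-off.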
Does the Rust compiler itself use the std sort algorithms, or does it already use specialized ones? If the former, a nice side effect would be that the compiler also receives additional speed-ups because of this.
The post mentioned that these new algorithms bring compile-time improvements too, so how should I read that? I assumed it meant that compiling applications that use sorting would get faster, but that seems like a hollow improvement if overall compilation times have regressed. Or do you mean that compiling the compiler itself has become slower?