• 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 18th, 2023



  • I greatly fear refactoring in Rust. Making a single conceptual change can require a huge number of code changes in practice, especially if it’s a change to ownership.

    Refactoring in languages like Java and C# is effortless in comparison, and not error prone at all in a modern codebase.

    You can use Rc and clone everywhere (see the sketch below), but now your code is unreadable, slow, and might even have memory leaks.

    You can use slotmaps everywhere, but now you’re embedding a different memory model with verbose syntax that doesn’t even have first-class references.

    I don’t even dislike Rust; I recently chose it for another project. I just think it has clear weaknesses and this is one of them.
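
    To make the leak concrete, here’s a minimal, purely illustrative sketch of the Rc-and-clone style going wrong: two nodes that hold Rc handles to each other form a reference cycle, so neither destructor ever runs.

    ```rust
    use std::cell::RefCell;
    use std::rc::Rc;

    // Two nodes in the Rc<RefCell<...>> "clone everywhere" style.
    struct Node {
        name: &'static str,
        other: RefCell<Option<Rc<Node>>>,
    }

    impl Drop for Node {
        fn drop(&mut self) {
            println!("dropping {}", self.name); // never printed below
        }
    }

    fn main() {
        let a = Rc::new(Node { name: "a", other: RefCell::new(None) });
        let b = Rc::new(Node { name: "b", other: RefCell::new(None) });
        *a.other.borrow_mut() = Some(Rc::clone(&b));
        *b.other.borrow_mut() = Some(Rc::clone(&a));
        // Each strong count is now 2; dropping `a` and `b` only brings them to 1,
        // so the cycle is never freed. Safe Rust allows this kind of leak.
    }
    ```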


  • I wouldn’t draw conclusions from random benchmarks like this without at least opening godbolt to see what’s going on.

    It really could be anything. For example, final may have enabled inlining in more places, but that may have pulled a very uncommon branch into a hot loop, causing far more cache misses when fetching instructions (see the sketch below). Writing compilers is hard, and every optimisation pass relies on imperfect heuristics.

    Compiling with PGO might make the results more compelling, if that wasn’t already tried.
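
    To illustrate the mechanism (in Rust, just to have something concrete; the benchmark in question presumably wasn’t Rust): keeping a rarely taken branch out of the hot loop’s body can matter more than saving the call, which is exactly the trade-off an inliner can get wrong.

    ```rust
    // `handle_rare_case` stands in for a big, rarely executed path. Marking it
    // cold and non-inlined keeps the hot loop small; force-inlining it would
    // bloat the loop body and put pressure on the instruction cache.
    #[cold]
    #[inline(never)]
    fn handle_rare_case(x: u64) -> u64 {
        // Imagine a large recovery/error path here.
        x.rotate_left(7) ^ 0xdead_beef
    }

    fn hot_loop(data: &[u64]) -> u64 {
        let mut acc = 0u64;
        for &x in data {
            if x == u64::MAX {
                acc ^= handle_rare_case(x);
            } else {
                acc = acc.wrapping_add(x);
            }
        }
        acc
    }

    fn main() {
        let data: Vec<u64> = (0..1_000).collect();
        println!("{}", hot_loop(&data));
    }
    ```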


  • What a whirlwind!

    > Also, Rust is perhaps, the shittiest, slowest compiled language out there. Even TCC has a leg up on it.

    TCC is written exclusively to compile quickly, not to do any real optimisation. There is no conceivable situation in which TCC output will outperform equivalent Rust code.

    > If you really like how Rust handles its syntax, use a real functional language like OCaml

    Rust takes inspiration from OCaml in almost every area except syntax. Close to zero syntax similarity (see the sketch at the end of this comment).

    > In fact, SML compilers like MLton are sometimes faster than Rust.

    Lmao, this is a classic line from ~2009 message boards, but with “C++” swapped out for Rust.

    Almost every single thing you said is wrong, but in a way too precise to be attributed to random noise. Like scoring zero in a multiple choice exam. I don’t know if you are some kind of performance art troll, but please continue. I’m an instant fan of your work.
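
    For anyone curious what “inspiration in almost every area except syntax” means in practice, here’s a small sketch of Rust features lifted straight from the ML family, with a rough OCaml spelling in the comments.

    ```rust
    // OCaml: type shape = Circle of float | Rect of float * float
    enum Shape {
        Circle(f64),
        Rect(f64, f64),
    }

    // OCaml: let area s = match s with
    //          | Circle r -> 3.14159 *. r *. r
    //          | Rect (w, h) -> w *. h
    fn area(s: &Shape) -> f64 {
        // Exhaustive pattern matching over an algebraic data type,
        // and `match` is an expression, not a statement.
        match s {
            Shape::Circle(r) => std::f64::consts::PI * r * r,
            Shape::Rect(w, h) => w * h,
        }
    }

    fn main() {
        println!("{}", area(&Shape::Rect(2.0, 3.0)));
    }
    ```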


  • Is it really fair to say retain doesn’t compose as well just because it requires reference-based update instead of move-based? I also think using move semantics for in-place updates makes it harder to optimise things like a single field being updated on a large struct.

    It also seems harsh to say iterators aren’t a zero-cost abstraction if they miss an optimisation that falls outside what the API promises. It’s natural to expect collect to allocate, no?

    But I’m only writing this because I wonder if I haven’t understood your point fully.

    (Side note: I think you could implement the API you want on top of retain_mut by using std::mem::replace with a default value, but you’d be hoping that the compiler optimises away all the replace calls when it inlines and sees the code can’t panic. Idk if that would actually work.)
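
    Roughly this, as a sketch (retain_moving is a made-up name, the Default bound is the price of the mem::replace trick, and I haven’t checked what the optimiser actually does with it):

    ```rust
    // Hypothetical helper (the name `retain_moving` is invented for illustration):
    // a move-based retain built on top of Vec::retain_mut and std::mem::replace.
    fn retain_moving<T: Default>(v: &mut Vec<T>, mut f: impl FnMut(T) -> Option<T>) {
        v.retain_mut(|slot| {
            // Move the value out, leaving a cheap default behind in the slot.
            let value = std::mem::replace(slot, T::default());
            match f(value) {
                Some(updated) => {
                    *slot = updated; // put the (possibly modified) value back
                    true             // keep this element
                }
                None => false,       // drop this element
            }
        });
    }

    fn main() {
        let mut v = vec![1, 2, 3, 4, 5];
        // Keep the even numbers (doubled), drop the odd ones, all by value.
        retain_moving(&mut v, |x| if x % 2 == 0 { Some(x * 2) } else { None });
        assert_eq!(v, vec![4, 8]);
    }
    ```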




  • Great article, though I would love to see a summary that breaks down the possible approaches and what the status of each is.

    I’m quite interested in the research that adds runtime provenance info to pointers, so you store (for example) a region ID that lets you do bounds-checking on pointer arithmetic (roughly the idea sketched at the end of this comment). It doesn’t achieve Rust-level safety, but it means buffer overflows can only get so far before they segfault.

    I know there are many cases where ordinary code will cast mystery memory into a pointer, but in modern C++ these generally live in templated library code. If we introduce a Rust-style “unsafe block” to disable compiler warnings on these, I think I could refactor most of the others out of the legacy code I maintain.

    I don’t know how many exploits this would prevent in practice, though; I have no expertise there.
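
    The mechanism I mean, sketched in Rust purely to have runnable pseudocode (the real research targets C/C++): a pointer that carries its region’s bounds and checks them on every piece of arithmetic.

    ```rust
    // Toy stand-in for runtime provenance: the bounds of the allocation play
    // the role of a region ID, and arithmetic that escapes the region traps
    // instead of silently producing a wild pointer.
    #[derive(Clone, Copy, Debug)]
    struct BoundedPtr {
        base: usize, // start of the region this pointer may address
        len: usize,  // size of the region in bytes
        addr: usize, // current address
    }

    impl BoundedPtr {
        fn offset(self, delta: isize) -> BoundedPtr {
            let addr = (self.addr as isize + delta) as usize;
            // A buffer overflow can only get as far as the region boundary.
            assert!(
                addr >= self.base && addr < self.base + self.len,
                "pointer arithmetic left its region"
            );
            BoundedPtr { addr, ..self }
        }
    }

    fn main() {
        let p = BoundedPtr { base: 0x1000, len: 64, addr: 0x1000 };
        let q = p.offset(32); // fine
        println!("{:?}", q);
        // p.offset(128) would panic: the analogue of the hardware check segfaulting.
    }
    ```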


  • A bit tangential, but if the US government really commits to pushing big tech firms to migrate to memory-safe languages, where would that leave Zig?

    It seems completely implausible to rewrite all the deployed C and C++ in the world any time soon. Even so, the uncertainty created by a top-down push might be enough to stall adoption of an unsafe language.

    I just wondered if anyone knows whether that story has affected the plans of the Zig maintainers at all. Or whether there has been any spike in Rust job postings?



  • Haskell has very famously not solved this problem. As in, pretty much every academic paper published on the subject in the past 15 years includes a brief paragraph explaining why Haskell’s effect monads are terrible.

    Also, it would be surprising for Rust’s developers to be scared of monads when Rust already has monads as a core language feature, with special syntax support and everything.
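
    For example (assuming “special syntax support” means the ? operator over Result/Option; async/await over futures is the same story), ? is essentially sugar for a short-circuiting monadic bind:

    ```rust
    use std::num::ParseIntError;

    // `?` either unwraps the Ok value or returns the Err from the whole function,
    // which is exactly the short-circuiting bind of the Result monad.
    fn parse_and_add(a: &str, b: &str) -> Result<i64, ParseIntError> {
        let a: i64 = a.parse()?;
        let b: i64 = b.parse()?;
        Ok(a + b)
    }

    // The same function with the bind structure spelled out.
    fn parse_and_add_desugared(a: &str, b: &str) -> Result<i64, ParseIntError> {
        a.parse::<i64>()
            .and_then(|a| b.parse::<i64>().map(move |b| a + b))
    }

    fn main() {
        assert_eq!(parse_and_add("2", "40"), Ok(42));
        assert_eq!(parse_and_add_desugared("2", "40"), Ok(42));
        assert!(parse_and_add("2", "forty").is_err());
    }
    ```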