• 122 Posts
  • 42 Comments
Joined 1 year ago
Cake day: July 29th, 2023


  • The only (arguably*) baseless claim in that quote is this part:

    You do understand you’re making that claim on the post discussing the Safe C++ proposal?

    And to underline the absurdity of your claim: would you argue that it’s impossible to write a "hello, world" program in C++ that is memory-safe? From that point onward, what would it take to make it violate any memory-safety constraint? Are those things avoidable? Think about it for a second before saying nonsense about impossibilities.
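    For what it’s worth, here’s a minimal sketch of the kind of program in question; there is nothing in it for a memory-safety bug to latch onto:

        #include <iostream>

        // No manual allocation, no pointer arithmetic, no lifetime to mismanage.
        int main() {
            std::cout << "hello, world\n";
        }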



  • The problem with C++ is it still allows a lot of unsafe ways of working with memory that previous projects used and people still use now.

    Why do you think this is a problem? We have a tool that gives everyone the freedom to manage resources in whatever way suits their own needs. It even went as far as explicitly supporting garbage collectors, right up until that support was removed in C++23. Some frameworks adopted and enforced their own memory management systems, such as Qt. A rough sketch of that freedom follows below.

    Tell me, exactly why do you think this is a problem?
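    As an illustration of that freedom (the names are made up), the same buffer can be managed in whichever way suits the surrounding code, none of which involves a manual delete:

        #include <memory>
        #include <vector>

        void ownership_styles() {
            std::vector<int> owned(1024);                             // container owns the storage
            auto sole   = std::make_unique<int[]>(1024);              // single owner, freed at scope exit
            auto shared = std::make_shared<std::vector<int>>(1024);   // reference-counted sharing
        }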


  • From the article.

    Josh Aas, co-founder and executive director of the Internet Security Research Group (ISRG), which oversees a memory safety initiative called Prossimo, last year told The Register that while it’s theoretically possible to write memory-safe C++, that’s not happening in real-world scenarios because C++ was not designed from the ground up for memory safety.

    That baseless claim doesn’t pass the smell test. Just because a feature was not rolled out in the mid-90s, does that mean it’s not available today? Utter nonsense.

    If your paycheck is highly dependent on pushing a specific tool, of course you have a vested interest in diving head-first into a denial pool.

    But cargo cult mentality is here to stay.










  • lysdexic@programming.dev (OP) to C++@programming.dev · New features in C++26 [LWN.net] · 2 months ago

    That’s perfectly fine. It’s a standardization process. Its goal is to set in stone a specification that everyone agrees to. Everything needs to line up.

    In the meantime, some compiler vendors provide their own contracts support. If you feel this is a mandatory feature, nothing prevents you from using a vendor-specific implementation. For example, GCC has had support for contracts since at least 2022, and it’s mostly in line with what has been discussed in the standardization meetings.
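    As a rough sketch, this is roughly what GCC’s experimental support looks like; compile with -fcontracts. The attribute spelling below follows GCC’s implementation as I understand it and is an assumption on my part; it differs from the pre/post/contract_assert syntax being discussed for C++26:

        #include <cmath>

        double checked_sqrt(double x)
            [[pre: x >= 0.0]]      // checked at the call boundary when contracts are enabled
            [[post r: r >= 0.0]]   // r names the return value
        {
            return std::sqrt(x);
        }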


  • lysdexic@programming.dev (OP) to C++@programming.dev · New features in C++26 [LWN.net] · 2 months ago (edited)

    Still no contracts?

    C++ standard revisions ship every three years and, per the usual release process, alternate between releases that are open to new features and feature-freeze releases. C++23 was the last release open to new features, which would make C++26 a feature-freeze release that consolidates what C++23 introduced.














  • ccache folder size started becoming huge. And it just didn’t speed up the project builds, I don’t remember the details of why.

    That’s highly unusual, and it suggests the project was misconfigured to the point where builds were never actually being cached: ccache just accumulated compiled objects it could never reuse.

    When I tried it I was working on a 100+ devs C++ project, 3/4M LOC, about as big as they come.

    That’s not necessarily a problem. I’ve worked on C++ projects of a similar size and ccache just worked. It has more to do with how your project is set up; misconfiguration is the usual culprit.

    Compilation of everything from scratch was an hour at the end.

    That fits my use case as well. End-to-end builds took slightly longer than 1h, but after onboarding ccache the same end-to-end builds would take less than 2 minutes. Incremental builds were virtually instant.

    Switching to lld was a huge win, as well as going from 12 to 24 compilation threads.

    That’s perfectly fine. Ccache acts before linking, and naturally being able to run more parallel tasks can indeed help, regardless of ccache being in place.

    Surprisingly, ccache works even better in this scenario. With ccache, the bottleneck of any build task switches from the CPU/Memory to IO. This had the nice trait that it was now possible to overcommit the number of jobs as the processor was no longer being maxed out. In my case it was possible to run around 40% more build jobs than physical threads to get a CPU utilization rate above 80%.

    I was a linux dev there, the pch’s worked, (…)

    I dare say ccache was not caching what it could due to precompiled headers. If you really want those, you need to configure ccache to tolerate them. Nevertheless it’s a tad pointless to have pch in a project for performance reasons when you can have a proper compiler cache.
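    For what it’s worth, the tolerance in question mostly boils down to the precompiled-header settings listed in the ccache manual; roughly the following, though the exact options depend on your ccache version and compiler:

        # ccache.conf -- what the ccache manual prescribes for precompiled headers
        sloppiness = pch_defines,time_macros
        # with GCC pch, additionally compile with -fpch-preprocess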



  • I’ve had mixed results with ccache myself, ending up not using it.

    Which problems did you experience?

    Compilation times are much less of a problem for me than they were before, because of the increases in processor power and number of threads.

    To each their own, but with C++ projects the only way to avoid lengthy build times is to work exclusively on trivial projects. Incremental builds help blunt the pain, but that only goes so far.

    This together with pchs (…)

    This might be the reason ccache only went so far in your projects. Precompiled headers either prevent ccache from working, or require additional tweaks to get around them.

    https://ccache.dev/manual/4.9.1.html#_precompiled_headers

    Also noteworthy: msvc doesn’t play well with ccache. Details are fuzzy, but I think msvc supports building multiple source files with a single invocation, which prevents ccache from mapping an input to an output object file.




  • lysdexic@programming.dev (OP) to C++@programming.dev · How to avoid one C++ foot gun · 4 months ago

    Naked pointers are just too stupid for modern C++ ;)

    Anyone who works on real-world production software written in C++ knows for a fact that pointers are a reality.

    Also, there are plenty of frameworks that employ their own memory management schemes, and raw pointers are perfectly fine in that context. For example, Qt uses raw pointers extensively because its object system implements an ownership model where each object can have a parent and children, and you can simply invoke deleteLater() to free the whole dependency tree when you no longer need it.
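    A small sketch of what that looks like in practice (it assumes a running Qt event loop for deleteLater(); the names are illustrative):

        #include <QObject>
        #include <QTimer>

        void build_tree(QObject *owner) {
            auto *node  = new QObject(owner);   // owned by `owner`
            auto *timer = new QTimer(node);     // owned by `node`
            timer->start(1000);
            // ...
            node->deleteLater();   // schedules `node`, and `timer` with it, for deletion
        }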


  • lysdexic@programming.dev (OP) to C++@programming.dev · How to avoid one C++ foot gun · 4 months ago (edited)

    Simply taking std::string by value (as it is a memory management class created for that explicit purpose) would have solved the problem without kneecapping every class you make.

    I think you are missing the whole point.

    The blogger tried to make a point about how special member functions can be tricky to get right if you don’t master them. In this case, the blogger even presents a concrete example of how the rule of 3/rule of 5 would fail to catch this issue. As the blogger was relying on the implicit special member functions to manage the life cycle of CheeseShop::clerkName and was oblivious to the possibility of copying values around, this resulted in the double free.

    You can argue that you wouldn’t have had a problem if a string value had been used instead of a pointer to a string, a point the blogger indirectly makes, but then you’d be missing the root cause and the bigger picture: you’d be trusting things to work by coincidence instead of actually knowing how they work.

    The blogger also does a piss-poor job by advocating explicitly deleting move constructors, as that suggests he learned nothing from the ordeal. A preferable lesson would be to a) not use raw pointers and instead adopt a smart pointer with the relevant semantics, or b) actually provide custom implementations of the copy/move constructors and assignment operators whenever doing anything non-trivial to manage resources, such as holding raw pointers that need to be both copied and freed when they stop being used.
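    For illustration, here’s a quick sketch of what a) and b) look like together. The class and member names mirror the blog’s CheeseShop::clerkName example, but the code below is a paraphrase, not the blogger’s:

        #include <memory>
        #include <string>
        #include <utility>

        class CheeseShop {
        public:
            explicit CheeseShop(std::string name)
                : clerkName(std::make_unique<std::string>(std::move(name))) {}

            // b) the class owns a resource, so the copy operations are spelled out
            //    as deep copies instead of relying on the implicit member-wise copy.
            CheeseShop(const CheeseShop& other)
                : clerkName(std::make_unique<std::string>(*other.clerkName)) {}
            CheeseShop& operator=(const CheeseShop& other) {
                clerkName = std::make_unique<std::string>(*other.clerkName);
                return *this;
            }

            // a) unique_ptr transfers ownership on move and frees exactly once,
            //    so the defaulted moves and destructor are correct: no double free.
            CheeseShop(CheeseShop&&) noexcept = default;
            CheeseShop& operator=(CheeseShop&&) noexcept = default;
            ~CheeseShop() = default;

        private:
            std::unique_ptr<std::string> clerkName;
        };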



  • Such gains by limiting included headers is surprising to me, as it’s the first thing anyone would suggest doing.

    Yes indeed. I think this is a testament to the loss of know-how we’re seeing in software engineering in general, and to how overambitious but underperforming developers try to stake claims to technical expertise when they haven’t even picked up the very basics of a tech stack.

    I’m sure it’s a matter of time before there’s a new post in Figma’s blog showing off their latest advanced technique to drive down build times: onboarding ccache. Followed by another blog post on how Figma is researching cutting edge distributed computing techniques to optimize build times by replacing ccache with sccache.




  • I don’t really see how it’s daunting enough to avoid mentioning.

    I think it’s a good call not to mention them because they are irrelevant given the topic. If your code base and/or the consumers of your code base use C-style arrays for input and/or output, it’s hardly helpful to suggest changing all your interfaces to use another data type. It’s outright impossible if you’re dealing with extern "C" interfaces.
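    For illustration (the function is made up), this is the kind of boundary where the array type isn’t negotiable; the best you can do on the C++ side is keep the C-style signature and hand over a container’s data() and size():

        #include <cstddef>
        #include <vector>

        // A C-compatible entry point: extern "C" callers can only pass a
        // pointer and a length, never a std::span, std::array or std::vector.
        extern "C" unsigned checksum(const unsigned char* data, std::size_t len) {
            unsigned sum = 0;
            for (std::size_t i = 0; i < len; ++i) sum += data[i];
            return sum;
        }

        unsigned checksum_of(const std::vector<unsigned char>& bytes) {
            return checksum(bytes.data(), bytes.size());
        }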