Um, no. Containers are not just chroot. Chroot is a way to isolate or namespace the filesystem, giving the process run inside it access only to those files. Containers do this, but they also isolate the process IDs, network, and various other system resources.
Additionally, runtimes like Docker bring vastly better tooling around this, making them much easier to work with. They are like chroot on steroids, not simply marketing fluff.
Documentation is generally considered one of the stronger points of Rust libraries. Crates.io is not a documentation site; you want https://docs.rs/ for that, though it is generally linked from crates.io. A lot of bigger crates also have their own online books for more in-depth material. It is not that common to find a larger crate with bad documentation.
Not sure why you need an Arc<Mutex> to delegate it to the responsible component. Seems like the type of thing that should not cross thread boundaries or be cloned multiple times.
Transactions should be short-lived; on the database side they block other access to those tables or rows. Best not to hold onto a transaction that long; instead, gather your data first or rethink your access patterns to your database.
But Arc does give you Arc::try_unwrap, which returns the inner type if there is only one strong reference left. And Mutex gives you into_inner to move the value out of it. But really, transactions should not be held for a long period of time.
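For reference, a minimal sketch of that combination (my own example, not your code), assuming the shared value is just wrapped in an Arc<Mutex<...>>:

use std::sync::{Arc, Mutex};

fn main() {
    let shared = Arc::new(Mutex::new(String::from("some state")));

    // ... hand out clones, let them be dropped ...

    // Arc::try_unwrap only succeeds when this is the last strong reference.
    match Arc::try_unwrap(shared) {
        Ok(mutex) => {
            // Mutex::into_inner consumes the mutex and moves the value out.
            let inner = mutex.into_inner().expect("mutex poisoned");
            println!("got it back by value: {inner}");
        }
        Err(still_shared) => {
            // Other clones still exist; we get the Arc back unchanged.
            println!("still {} strong refs", Arc::strong_count(&still_shared));
        }
    }
}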
Doesn't a lot of the money for the research come from taxpayers? And isn't a lot of effort put into tweaking formulas with no real impact just so they can extend the patents? And then they jack up the prices to insane levels so that those taxpayers cannot even afford the results anyway... The system is broken and massively abused. It needs to be changed. We might need something to help foster innovation, but the current system stifles it far more than it helps.
TLDR: Install the relevant packages for the language you care about, like every other guide tells you to do.
By the end of this guide, you’ll have a robust development setup ready to tackle any project.
And by a robust dev setup they mean the bare minimum packages installed for projects in one of C/C++, Python, Java, or JavaScript.
If you really want a robust developer setup, look for guides and tutorials for the language you care about. This goes into so little detail on anything that it is basically useless.
Why do we need tests to be understandable by any human? IMO tests that go to that degree do so by obscuring what logic is actually running, which makes it harder as a developer to fully understand what is going on. I would rather keep tests plain and simple, with as few abstractions around them as possible.
Cypress: cy.get('h1').contains('Result')
Playwright: await expect(page.getByTitle('Result')).toHaveCount(1)
Testing Library: expect(screen.getByTitle(/Result/i)).toBeTruthy()
We can nitpick about syntax here, and I prefer the Cypress one as it immediately tells me what it is doing even though I am not familiar with those frameworks, but:
UUV: Then I should see a title named "Result"
That tells me nothing about what it is actually doing. How is the framework meant to interpret that or similar statements? It is imprecise and I have no way to validate that it will do what I expect. I do not trust AI or LLMs enough to translate that into a workable test. Even if it works for simple situations like this, how does it scale to far more realistic and complex test cases?
It would be one thing to use an LLM to generate a test for you that you can inspect - but to generate it on every run, quite likely without being able to see what it did? Um, no thanks. Not with the current state of LLMs.
At least I assume it is LLM-based, as there is no other way to do this as far as I am aware, though they don't seem to mention it at all.
I am not convinced by this argument, TBH. The one use case they come up with is testing APIs exposed through the FFI - APIs that need to be written in a way that avoids RAII, since C does not support it. Testing is the only use case I can see that makes this valid, but TBH I don't really see the value of adding defer as a language feature just for testing FFI APIs. Any leaked data will be cleaned up at the end of the test suite, and I would never use such APIs in production Rust code without wrapping them in an RAII abstraction. Any other uses of such an API I would expect to happen in other languages, where a Rust defer feature won't help at all.
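For context, here is a rough sketch of what I mean by wrapping such an API in an RAII abstraction (ffi_open/ffi_close are made-up names standing in for whatever the C library actually exposes):

use std::os::raw::c_void;

// Hypothetical C API; stand-ins for the real library's functions.
extern "C" {
    fn ffi_open() -> *mut c_void;
    fn ffi_close(handle: *mut c_void);
}

struct Handle(*mut c_void);

impl Handle {
    fn open() -> Option<Self> {
        let ptr = unsafe { ffi_open() };
        if ptr.is_null() { None } else { Some(Handle(ptr)) }
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        // Runs on every exit path, including early returns and panics,
        // so there is no Close() call to forget in a review.
        unsafe { ffi_close(self.0) };
    }
}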
I am not a huge fan of the defer feature overall. It is error prone and easy to forget to use; with RAII the compiler deals with all that so you don't have to. I have seen far too many memory leaks in prod in Go where someone has forgotten a defer foo.Close(), and it is not obvious at all when they have done that. I have even flagged a few cases where it looked like one was needed (like with things constructed with an Open(...)) but the type didn't have a Close method... It just makes code reviews harder than when you can rely on RAII, IMO.
This feels like wanting a feature so that those not used to the language can code in a non-Rust style, which I really don't want to encourage.
Hmm, ok... So I guess the only way to not trigger undefined behavior on the C side when freeing would be to keep the capacity of the Vec around and do:
if (capacity > 0) { free(foos); }
Let's ignore for now that this will surprise every C developer out there who has been doing if (NULL != ptr) free(ptr) for 50 years.
That is not the only way: in the Rust code you can return a null pointer when the Vec's capacity is 0, which gives C developers the behavior they are used to.
And I think generally their problem with needing to keep track of both capacity and length is due to Vec not being equivalent to a C array. If you want a C array, use an array or a boxed slice instead of a Vec. The big problem here is not that C developers are not used to caring about capacity - it is that they are not used to having to care about the difference between length and capacity. Instead they just call the capacity 'length' and allocate/deallocate based on that. So the 'length' of a Vec is not the 'length' they typically deal with.
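To make that concrete, here is a sketch (my own, with made-up function names) of handing a boxed slice to C as pointer + length, returning NULL for the empty case so the usual if (ptr)-style handling keeps working:

#[no_mangle]
pub extern "C" fn make_foos(len: *mut usize) -> *mut i32 {
    let data: Vec<i32> = (0..10).collect();

    // A boxed slice has no spare capacity, so length is the only size to track.
    let boxed: Box<[i32]> = data.into_boxed_slice();
    unsafe { *len = boxed.len() };

    if boxed.is_empty() {
        // Give C the NULL it expects for "nothing allocated".
        return std::ptr::null_mut();
    }
    Box::into_raw(boxed) as *mut i32
}

#[no_mangle]
pub extern "C" fn free_foos(ptr: *mut i32, len: usize) {
    if ptr.is_null() {
        return;
    }
    // Rebuild the boxed slice so Rust's allocator frees it; calling C's free()
    // on memory Rust allocated would itself be undefined behavior.
    unsafe { drop(Box::from_raw(std::ptr::slice_from_raw_parts_mut(ptr, len))) };
}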
It could be a tight bend in the line somewhere - make sure there are no tight bends. Otherwise, if it is the tube, get a thicker tube.
It might, depending on how much tension there is. Too much and it will cause the filament to slip in the extruder, causing under-extrusion. If you are not seeing signs of under-extrusion then you are fine for now - but that might change if you change filament or anything else. I would try to lower the tension the filament is under to avoid problems in the future. Otherwise, it is something to keep in mind if you do start seeing signs of under-extrusion.
When I change devices or hit file size limits, I’ll compress and send things to my NAS.
Whaaatt!?!!? That sounds like you don't use git? You should use git. It is a requirement for basically any job and there is no reason not to use it on every project. Then you can keep your projects on a server somewhere - on your NAS if you want, or on something like GitHub/GitLab/Bitbucket. That way your local copies do not really matter, only what is on the remote, and with decent backups of that you don't need to constantly archive things from your local machine.
Did you read the article at all?
“Putting all new code aside, fortunately, neither this document nor the U.S. government is calling for an immediate migration from C/C++ to Rust — as but one example,” he said. “CISA’s Secure by Design document recognizes that software maintainers simply cannot migrate their code bases en masse like that.”
Companies have until January 1, 2026, to create memory safety roadmaps.
All they are asking for by that date is a roadmap for dealing with memory safety issues, not a rewrite of everything.
Sounds like you just need to keep the data on your server and use Samba or NFS with a network mount on the other devices.
What? You can easily escape from it if there are better alternatives you can use. Pointing at one language and saying it is not easy to code like it is another language is a pointless argument; you can do that about any two languages. They all differ for good reasons, and as long as you can solve similar problems in both, even if in different ways, what does it matter that you cannot do it the same way?
You could do a lot of things. Rust had a GC and it was removed, so they have already explored this area and are very unlikely to do so again unless there is a big need that libraries cannot solve - and I have not seen anyone who actually uses the language a lot express that need.
Not like how async was talked about - that required a lot of discussion and experimentation in libraries before it was added to the language. GC does not have anywhere near as many people pushing for it; the only noise I see is from people on the outside thinking it would be nice, with no details on how it might work in the language.
So someone who is not involved in Rust at all and does not seem to like the language thinks it will get a GC at some point? That is not a very credible source for such a statement. Rust is very unlikely to see an official GC anytime soon, if ever; there are zero signs it will ever get one. There was a lot of serious talk about it in the pre-1.0 days - but it never made it into the language. Similar to green threads, which were a feature of the language pre-1.0 but were dropped before the 1.0 release. Rust really wants to avoid a required runtime and leans heavily on zero-cost abstractions, both of which a GC would compromise.
There are quite a few places where a GC is just not acceptable - anything that requires precise timing, for one. This includes kernel development, a lot of embedded systems, gaming, high-frequency trading, and even latency-critical web servers. Though you are right that in a lot of places a GC is fine to have. But IMO Rust adds more than just fast and safe code without a GC - lots of people come to the language for those but stay for the rest of the features it has to offer.
IMO a big one is its enum support and how enums can hold values. This opens up a lot of patterns that are just nice to use, and it is one of the biggest things I miss when using other languages. Built on that are Option and Result, which are amazing for representing missing values and errors (nicer than coding with exceptions, IMO). And generally the whole type system leads you towards thinking about the states things can be in and accounting for them, which tends to make it easier to write software with fewer issues in production.
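A tiny illustration of what I mean (my own example, not from the article):

// An enum whose variants carry data; the compiler forces every state to be handled.
enum PaymentMethod {
    Cash,
    Card { last_four: String },
    Voucher(u32), // voucher id
}

fn describe(method: &PaymentMethod) -> String {
    match method {
        PaymentMethod::Cash => "paid in cash".to_string(),
        PaymentMethod::Card { last_four } => format!("card ending in {last_four}"),
        PaymentMethod::Voucher(id) => format!("voucher #{id}"),
    }
}

// Option and Result are just enums like this under the hood:
fn find_voucher(id: u32) -> Option<PaymentMethod> {
    if id == 0 { None } else { Some(PaymentMethod::Voucher(id)) }
}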
but imagine if you have to perform this operation for an unknown amount of runtime values
This is a poor argument. You don't write code like this in Rust. If you can find a situation where it is an actual issue we can discuss it, but just saying "imagine this is a problem" - when it very likely is not a problem at all, let alone a common one, and could be solved in a better way - is a very poor argument.
Typically, wanting an escape from lifetimes means you want shared ownership of data, which you can get with an Arc. Cow and LazyLock can also help in some situations - but to dismiss all of these for some imagined problem is a waste of time. Come up with a concrete example where it would help. Very likely, for any realistic situation you can come up with, you would find another way to solve the problem, and I suspect it would lead to a better overall design for a Rust program.
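For example, this is the kind of thing I mean by reaching for Arc instead of fighting lifetimes (a sketch with made-up data, not the commenter's scenario):

use std::sync::Arc;
use std::thread;

fn main() {
    // Shared, immutable data with no lifetime gymnastics: it lives as long as
    // any clone of the Arc does.
    let config = Arc::new(vec!["alpha".to_string(), "beta".to_string()]);

    let handles: Vec<_> = (0..3)
        .map(|i| {
            let config = Arc::clone(&config); // cheap refcount bump, shared data
            thread::spawn(move || println!("worker {i} sees {} entries", config.len()))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}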
I would say this is just a straw man argument - but you have not even created a straw man to begin with; you just assume that one exists.
For someone only on chapter 7, this is OK. I would not call it idiomatic, but you have not gotten to Error Handling in chapter 9 yet. I would probably hold off on getting feedback on error handling until you have reached that point.
But the TLDR of it is that Rust has two forms of errors: unrecoverable errors in the form of panics, and recoverable ones in the form of returning a Result. In this case you have opted for panicking, which IMO is the wrong choice for something that is expected to fail - and HTTP requests and parsing external data are expected to fail (even if only some of the time). Networks fail all the time, servers go down, send back wrong responses, and many other things.
Do you really want to crash your program every time that happens? Probably not - at least not at this level. Instead, you likely want to return an error from this function and let the caller deal with it, as they will likely have more context about what to do with it than the leaf function where the error originates.
But all of that can probably wait until you have read through chapter 9. For now it is good to know that when you have the pattern
match foo {
    Ok(value) => value,
    Err(err) => panic!("it broke! {}", err),
}
You can generally replace that with a call to expect instead:
foo.expect("it broke")
Or just unwrap it if you don't need to add more context for whatever reason.
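And for when you do get to chapter 9, the Result-returning version of the same idea looks roughly like this (a made-up example, not your actual HTTP code):

use std::num::ParseIntError;

// Return the error to the caller with ? instead of panicking here.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    let port: u16 = raw.trim().parse()?; // ? returns early with the Err value
    Ok(port)
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(err) => eprintln!("bad port: {err}"),
    }
}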
It doesn’t technically have drivers at all or go missing. All supporting kernel modules for hardware are always present at the configuration level.
This isn't true? The Linux kernel has a lot of drivers in the kernel source tree, but not all of them - notably, the NVIDIA drivers have never been included. And even the included drivers may or may not be compiled into the kernel. They can be, and generally are, compiled with the kernel but as separate modules that are loaded at runtime. These days few drivers are compiled in and most are dynamically loaded depending on what hardware is present on the system. Distros can opt to split these drivers up into different packages that you may or may not have installed, which is common for less common hardware.
Though with the way most distros ship drivers, they don't tend to spontaneously stop working. Well, with the exception of Arch Linux, which deletes the old kernel and its modules during an upgrade. That means the currently running kernel can no longer find its drivers and stops being able to dynamically load them - which often results in hotplug devices like USB devices no longer working if you plug them in after the drivers get unloaded (and you need a reboot to fix it, since that boots into the latest kernel, which has its drivers present).