But that’s what I mentioned regarding Java there. Java calls them “exceptions”, but generally forces the caller to either handle them or explicitly bubble them upwards…
As I see it, the difference is that we now have capable game engines freely available. Indie studios can, for the most part, offer the same quality of gameplay. AAA studios can only really differentiate themselves by how much content they shove into a game.
In particular, this also somewhat limits creativity of AAA games. In order to shove tons of content into there, the player character has to be a human, the gameplay has to involve an open world, there has to be a quest system etc…
The guy keeps on picking on Go, which is infamous for having terrible error handling, and then he has the nerve to even pick on the UNIX process return convention, which was designed in the 70s.
The few times he mentions Rust, for whatever reason he keeps on assuming that `.unwrap()` is the only choice, even though its use is strongly discouraged in production code.
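For anyone unfamiliar, a quick sketch of what Rust error handling looks like when you don't `.unwrap()` (the function and file path here are made up for illustration):

```rust
use std::fs;

// A hypothetical helper: read a port number from a config file.
// Errors are values, so the signature tells the caller what can go wrong.
fn read_port(path: &str) -> Result<u16, String> {
    // `?` bubbles the error up to the caller instead of panicking
    // the way `.unwrap()` would.
    let text = fs::read_to_string(path).map_err(|e| format!("can't read {path}: {e}"))?;
    text.trim()
        .parse::<u16>()
        .map_err(|e| format!("not a port number: {e}"))
}

fn main() {
    // The caller is forced to decide: handle the error, or pass it on.
    match read_port("/nonexistent/config") {
        Ok(port) => println!("listening on {port}"),
        Err(msg) => eprintln!("falling back to 8080 ({msg})"),
    }
}
```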
I do think there is room for debate here. But error handling is a hellishly complex topic, with different needs between among others:
And even if you pick out a specific field, the two concepts are not clearly separated.
Error values in Rust usually have backtraces these days, for example (unless you’re doing embedded where this isn’t possible).
Or Java makes you list exceptions in your function signature (except for unchecked exceptions), so you actually can’t just start throwing new exceptions in your little corner without the rest of the codebase knowing.
I find it quite difficult to properly define the differences between the two.
I feel like this problem might be somewhat endemic to the US?
In my experience, US culture in general is a lot more positive about everything. Like, if someone from the US is not praising the living shit out of something, that means they didn’t like it.
Whereas here in Germany, it’s usually the other way around. If you don’t find anything to grumble about, that’s the highest form of praise.
Obviously, US culture isn’t one massive blob; the extremely positive folks are probably just the ones I notice the most, but maybe that’s also what the video author is fed up with.
Well, and then people from the US tend to also be a lot more positive about companies in general, presumably a remnant of Cold War propaganda. The journalists/entertainers from Germany and the UK that I watch do criticize games quite directly…
I’m pretty sure that’s not how dyslexia works, but either way, I didn’t write that. And while the title of the article suggests otherwise, the news here isn’t that Google says something is easy. The news is that they published a guide to make that thing easy.
Wut? They’re a member, because they find Rust useful. This is just them saying another time that they find Rust useful.
While they (and everyone using Rust) will benefit from more people using Rust, it’s not like they have a vested interest to the point of spreading misinformation.
They’ve got a page for all the Rust stuff: https://community.kde.org/Rust
If you’ve so far been able to do this stuff in Java, then presumably all your hardware has an OS and such and you don’t need this, but a colleague has been having a lot of fun with Rust and proper embedded development.
It’s very different from regular development, as you’ve got no OS, no filesystem, no memory allocations, no logging. It can most definitely be a pain.
But the guy always has the biggest grin on his face when he tells us that he built a custom implementation of the CAN protocol (TCP is too complex for embedded 🙃) and that he had to wire up an LED to output error information and stuff like that. At the very least, it seems to be a lot less abstract, as you’re working directly with the hardware.
Hmm, I’ve never looked too much at benchmarks for this, but is there reason to believe Python would use less memory for a similarly complex project? It still needs a runtime, and it has to do a larger interpretation step at runtime (i.e. it needs to start from human-readable code rather than from bytecode)…
Yeah, we do this regularly at $DAYJOB, although we use Cross.
Basically, if you pull in any libraries with vendored C code, like e.g. OpenSSL, then you’d need to configure a linker and set up cross-compilation for C code.
Cross does the whole compilation in a container where this is already set up, so you just need to install Docker or Podman on your host system.
Basically:

```shell
cargo install cross
cross build --release --target=armv7-unknown-linux-gnueabihf
```
…and out plops your binary for a Raspberry Pi.
I don’t? There’s also 4chan.
Well, jokes aside, I’m not of the opinion that humans are either gross idiots or non-gross idiots. I rather think that their social context brings out the gross that lives in all of us.
Reddit is big enough that people feel even more anonymous there, and that there’s enough people willing to share their gross interests to form communities. When those communities exist, you also get an influx of users specifically looking for all of that. Lemmy is just not big enough.
I’m still curious to see whether the Microsoft leadership pushes them to do that, especially with the more recent titles being duds, but in general, I don’t expect them to, because:
At first, I thought this was a screenshot from Lemmy and thought what the hell. Then I saw that it’s Reddit and all my questions got answered. ¯\_(ツ)_/¯
I don’t know a thing about cats, but I would’ve expected there to be more diseases in the city, with all the humans, car exhaust, trash etc…
At first I thought they’re releasing this news now to drown out the Concord news, but with the 30-year anniversary, maybe they did have this planned a little longer. 🙃
I tried something like that once. Basically, I was trying to create an API with which sysadmins could script deployments. That involves lots of strings, so I was hoping I could avoid the `String` vs. `&str` split by making everything `&'static str`.
And yeah, the problem is that this only really works within one function. If you need to pass a parameter into a function, that function either accepts a `&'static` reference, which makes it impossible to call it with an owned type or non-static reference, or it accepts any reference, but then everything below that function has normal borrowing semantics again.
I guess, with the former you could `Box::leak()` to pass an owned type or non-static reference, with the downside of all your APIs being weird.
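A minimal sketch of that `Box::leak()` trick, just to illustrate (the `deploy` function is made up, and this isn't something I'd recommend outside of prototypes):

```rust
// Hypothetical API that only accepts 'static strings, as described above.
fn deploy(target: &'static str) {
    println!("deploying to {target}");
}

fn main() {
    // Works fine with literals, which are 'static anyway:
    deploy("staging");

    // An owned String built at runtime:
    let host = format!("server-{}.example.com", 42);

    // Box::leak hands out a 'static reference by leaking the allocation.
    // The memory is never freed, which is fine for a handful of config
    // strings over a program's lifetime, and terrible otherwise.
    let host: &'static str = Box::leak(host.into_boxed_str());
    deploy(host);
}
```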
Or maybe your prototyping just happens at the top and you’re fine with making individual functions accept non-static references. I guess, you’ll still have to try it.
Since you’re already at the bargaining stage of grief programming, maybe you’re aware, but `Rc` and `Arc` are the closest you can get to a GC-like feel. These do reference counting, so unlike a GC, they can’t easily deal with cyclic references, but aside from that, same thing.
Unfortunately, they do still have the same problem with passing them as parameters…
Yeah, I don’t think that can happen without splitting the whole ecosystem in half. Garbage collection requires a runtime, and tons of the code semantics are also just different between the two, particularly with asynchronous code.
I also imagine that many people wouldn’t just keep their particular program on the GC version, but would never even bother to learn the ownership/borrowing semantics, even though those largely stop being painful after a few months.
But yeah, I actually don’t think it’s too bad to have individual parts of the ecosystem using their own memory management strategies.
The two examples that I can think of are ECS for gamedev and signals/reactivity for UI frameworks, which are used in C++, JavaScript and Kotlin, too. You’d probably still want to use these strategies, even if you’ve got garbage collection available…
How many bugs you encounter is unfortunately not a good metric, because devs will compensate by just thinking harder. The goal is rather to not need to think as hard, which increases productivity and helps out your team members (including your future self).
It took me a few months of working in an immutable-by-default language before I had the epiphany that everything is exactly like it’s written down in code (so long as it’s not marked as mutable). I don’t need to look at each variable and think about whether it might get changed somewhere down the line. A whole section of my brain just switched off that day.
As the other person said, there’s also nothing stopping you from using mutability, it’s just not the default, because most variables simply don’t get mutated, even in C code.
But I would even go so far as to say that Rust is actually making mutability fashionable again. It has introduced various new concepts in this regard, which you won’t know from other languages.
For example, you can opt a variable into mutability, make your changes and then opt out of it again.
And if a function wants to modify one of its parameters, you have to explicitly pass a mutable reference, which is visible both in the function signature and where you’re calling the function.
But perhaps most importantly, it blocks you from mutating variables that you’ve passed into a different thread (unless you wrap the value in a mutex or similar).
In most of the immutable languages up until this point, the immutability was achieved by always copying memory when you want to change it, which is insanely inefficient. Rust doesn’t need this, by instead requiring that you follow its ownership/borrowing rules.
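A sketch of the first two points (opting in and out of mutability, and explicit mutable references); the `add_header` function is made up:

```rust
fn add_header(req: &mut Vec<String>) {
    // The &mut in the signature makes the mutation visible to callers.
    req.push("User-Agent: example".to_string());
}

fn main() {
    // Opt into mutability...
    let mut request = vec!["GET / HTTP/1.1".to_string()];
    add_header(&mut request); // ...the call site shows the mutation, too.

    // ...then opt out again by shadowing: `request` is immutable from here on.
    let request = request;
    assert_eq!(request.len(), 2);
}
```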
Edit:
I also don’t know what you heard, but this is for example written in Rust: https://bevyengine.org/examples/3d-rendering/bloom-3d/
The code is right below. It uses lots of mutability…
It’s the “Entity-Component-System architecture”, consisting of:
It’s kind of a competing strategy to OOP. It offers better performance and better flexibility, at the cost of being somewhat more abstract and maybe not quite as intuitive. But yeah, those former two advantages make it very popular for simulations / gamedev. Any major game engine has an ECS architecture under the hood, or at least something similar to it.
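To make that a bit more concrete, here's a deliberately minimal, hand-rolled sketch of the idea (real ECS libraries like Bevy's are far more sophisticated; all the names here are made up):

```rust
// Entities are just indices; components live in parallel arrays.
struct World {
    positions: Vec<Option<f32>>,  // component: position along one axis
    velocities: Vec<Option<f32>>, // component: velocity
}

impl World {
    // Spawn an entity with whichever components it has.
    fn spawn(&mut self, pos: Option<f32>, vel: Option<f32>) -> usize {
        self.positions.push(pos);
        self.velocities.push(vel);
        self.positions.len() - 1
    }
}

// A system: a plain function that runs over all entities
// that have both a position and a velocity.
fn movement_system(world: &mut World) {
    for (pos, vel) in world.positions.iter_mut().zip(&world.velocities) {
        if let (Some(p), Some(v)) = (pos.as_mut(), vel) {
            *p += v;
        }
    }
}

fn main() {
    let mut world = World { positions: vec![], velocities: vec![] };
    let player = world.spawn(Some(0.0), Some(2.0)); // moves
    let tree = world.spawn(Some(5.0), None);        // static scenery

    movement_system(&mut world);

    assert_eq!(world.positions[player], Some(2.0)); // moved
    assert_eq!(world.positions[tree], Some(5.0));   // untouched
}
```

The performance win comes from systems iterating over tightly packed arrays instead of chasing pointers between objects, and the flexibility from being able to bolt any component onto any entity.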
There are two alternatives currently in development: inZOI from the PUBG devs, and Paralives from a smaller indie studio.