Have you used Facebook in the last 5 years?
The UX is godawful. More than half my feed is just random crap suggestions and ads.
Installing Linux after Windows should be fine without disconnecting drives.
The reverse is troublesome. Microsoft’s installer is all too happy to shit on your drives, even the ones you’re not using for installation. But Linux installers are much more friendly to dual-booting and all kinds of complex setups.
What part of this is misinformation, exactly? Seems pretty well-supported.
Haven’t heard of Hiren’s BootCD in like 15 years. Good to see it’s still around!
I keep seeing this claim, but never with any independent verification or technical explanation.
What exactly is listening to you? How? When?
Android and iOS both make it visible to the user when an app accesses the microphone, and they require that the user grant microphone permission to the app. It’s not supposed to be possible for apps to surreptitiously record you. This would require exploiting an unpatched security vulnerability and would surely violate the App Store and Play Store policies.
If you can prove this is happening, then please do so. Both Apple and Google have a vested interest in stopping this; they do not want their competitors to have this data, and they would be happy to smack down a clear violation of policy.
I agree completely.
I understand the motivation here — apps that lack location permission shouldn’t be able to get backdoor access to your location via your camera roll. That makes sense, because you know damn well every spyware social media company would be doing that if they could.
But the reverse is also true: apps that legitimately need to read photos and access all their metadata shouldn’t need to be granted full location access.
Yeah, I had to disconnect all my SATA HDs to stop the Windows installer from shitting all over them.
I’d be worried about Windows updates doing the same thing now, after the recent glitch that broke bootloaders.
It was bought out and cleaned up a few years ago. It’s legit again now, though I don’t think it’ll ever really recover from that fiasco.
Chromium itself will. Other Chromium-based browser vendors have confirmed that they will maintain v2 support for as long as they can. So perhaps try something like Vivaldi. I haven’t tried PWAs in Vivaldi myself, but it supports them according to the docs.
Debian still supports Pentium IIs. They axed support for the i586 architecture (original Pentium) a few years back, but Debian 12 (current stable, AKA Bookworm) still supports i686 chips like the P2.
Not sure how the rest of the hardware in that Compaq will work.
See: https://www.debian.org/releases/stable/i386/ch02s01.en.html
Probably ~15TB through file-level syncing tools (rsync or similar; I forget exactly what I used), just copying my internal RAID array to an external HDD. I’ve done this a few times, either for backup purposes or to prepare to reformat my array. I originally used ZFS on the array, but converted it to something with built-in kernel support a while back because it got troublesome when switching distros. Might switch it to bcachefs at some point.
With dd specifically, maybe 1TB? I’ve used it to temporarily back up my boot drive on occasion, on the assumption that restoring my entire system that way would be simpler in case whatever I was planning blew up in my face. Fortunately never needed to restore it that way.
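For anyone curious what that dd approach looks like: here’s a minimal sketch using regular files instead of real devices, since dd treats both the same way. On actual hardware you’d point `if=` at something like `/dev/sda` (device name is an assumption — check with `lsblk` first, because dd will happily clobber the wrong disk).

```shell
# Illustrative only: dd works the same on regular files as on block
# devices, so this demonstrates the "image the whole drive" pattern
# without touching real disks.

# Create a small sample "drive" to image.
dd if=/dev/urandom of=fake-drive.img bs=1M count=4 status=none

# Back it up with a straight block copy...
dd if=fake-drive.img of=fake-drive.backup bs=1M status=none

# ...and verify the copy matches bit-for-bit.
cmp fake-drive.img fake-drive.backup && echo "backup verified"
```

Restoring is the same command with `if=` and `of=` swapped, which is exactly why it’s appealing as a panic-button backup: no filesystem knowledge required.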
For sure. It’ll never be enforced completely, but it gives teeth to go after some big offenders.
It’s worth mentioning that with a large generational gap, the newer low-end CPU will often outperform the older high-end. An i3-1115G4 (11th gen) should outperform an i7-4790 (4th gen), at least in single-core performance. And it’ll do it while using a lot less power.
Interesting. I’m not sure that’s a Lemmy thing per se, maybe specific to your client, or some extension or something altering CSS?
I just checked in my browser’s inspector, and the italicized text’s <em> tag has the same calculated font setting as the main comment’s <div> tag.
FWIW, I’m using Firefox with my instance’s default Lemmy web UI.
YES.
And not just the cloud, but internet connectivity and automatic updates on local machines, too. There are basically a hundred “arbitrary code execution” mechanisms built into every production machine.
If it doesn’t truly need to be online, it probably shouldn’t be. Figure out another way to install security patches. If it’s offline, you won’t need to worry about them half as much anyway.
Hospitals and airports typically have their own backup generators, yeah. Not entirely sure how long they’re prepared to operate off-grid.
The linked article focuses on Mastodon. I’d be interested to hear more about how this relates to Lemmy in your experience.
Both.
The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. It is a legitimate reason to choose Nvidia, and if you have an Nvidia card you will want to make sure you have CUDA acceleration working for any compatible ML workloads.
The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple framework versions, because new ones are not backwards-compatible. You’ll need to source custom versions of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.
That said, AMD’s equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.
The easy way is to just use OpenCL. But that’s not going to give you the best performance, and it’s not going to be compatible with everything out there.
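To give a concrete flavor of the version-pinning dance (the versions and project layout here are illustrative, not a recommendation): PyTorch, for example, publishes separate wheel indexes per CUDA toolkit version, so each project ends up with its own environment pointed at the index matching whatever CUDA build it was compiled against.

```shell
# Wheels built against CUDA 12.1 come from one index...
pip install torch --index-url https://download.pytorch.org/whl/cu121

# ...while an older project pinned against CUDA 11.8 needs its own
# environment with wheels from a different index entirely.
python -m venv legacy-env
legacy-env/bin/pip install torch==2.0.1 \
  --index-url https://download.pytorch.org/whl/cu118
```

Multiply that by every CUDA-dependent module in your stack and you can see how the dependency graph gets ugly fast.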
Backing up / in its entirety might cause issues, since there will be a lot of special files and crossed mount points. You should probably exclude /proc and any system folders from the backup. See: https://github.com/bit-team/backintime/blob/dev/FAQ.md#does-back-in-time-support-full-system-backups
Since you’re planning to start with a clean Nobara install, you can probably exclude those during the restore step. Just be careful not to restore files that are in active use by the running system.
Have you tested restoring from your backup? Can you do it from the live USB?
There’s one called Redox that is entirely written in Rust. Still in fairly early stages, though. https://www.redox-os.org/