Who let out 426?? I thought I was supposed to be in a windowless room!
(/j)
ICYMI, the joke is about SCP-426
Yup. My background is computer science, transitioned to IT infra.
My sister sent me a screenshot of a Spotify one-liner error (white text on a black background), captioned “they wrote a lazy error”. I immediately recognized the actual problem: in the first error, the load balancer at the front end was trying and failing to connect to the backend/middleware; in the second, it had recognized a failed health check and was reporting that no backends were available. The root cause was probably a networking issue or an actual server crash.
I also have a bonus: in high school I watched a ton of videos on VFX/SFX and knew a rough way around After Effects and compositing (before I jumped into CS, I had considered that as a career path), so now when I watch TV and movies I can also see some of the “layers” they use to compile the on-screen effect.
Memory unlocked. That’s been a hot minute.
Didn’t Apple make their own IR remote for that at one point? Is the hardware onboard the Mini preset to use their hardware, or is it more generic once Linux is installed?
I hear it’s also bad to get into a battle of wits with a Sicilian - especially when death is on the line.
Others have some good information here. All I’d like to add to the root is that Windows and macOS have a built-in DNS cache, and it’s pretty straightforward to add one to systemd distros (if it’s not already installed or in use) using systemd-resolved, or dnsmasq if you really dislike systemd. Some distros enable this at install time.
Systems that utilize a DNS cache keep copies of DNS query results for a period of time (typically the record’s TTL), making the application-level name lookup essentially 0 ms for a cached result. Cold lookups obviously still incur the latency of the upstream DNS server itself.
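On a systemd distro it’s roughly this (a minimal sketch; the stub-resolv.conf path is the standard one, but check your distro’s docs before symlinking over resolv.conf):

```
sudo systemctl enable --now systemd-resolved
sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf

# First lookup goes upstream; repeats within the TTL come from the local cache
resolvectl query example.com
resolvectl statistics  # shows cache size and hit/miss counts
```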
HLS is a bidirectional protocol though: the client is constantly re-requesting playlists and segments, so the system’s total network latency affects how quickly it can change to a new bitrate stream as conditions improve or degrade. And despite the name (HTTP Live Streaming), it’s not just limited to live content; you can use it to deliver fixed-length content too.
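For illustration, the adaptation works off a master playlist that lists the variant streams, and the client hops between them on its own (paths and bitrates here are made up):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
```

For fixed-length content, the variant playlists just end with an #EXT-X-ENDLIST tag instead of being refreshed forever.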
I’d put the deflate algorithm over the LZMA algorithm just because deflate is used by both Windows (zip) and Unix (gzip). I don’t think Windows added LZMA/xz support until recently, if at all.
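You can feel the tradeoff with a quick sketch using Python’s stdlib (synthetic data, so treat the ratios as illustrative only):

```python
import lzma
import zlib

# Deliberately repetitive payload; real-world ratios depend on the input
data = b"example log line with some repetition\n" * 10_000

deflated = zlib.compress(data, level=9)  # DEFLATE: the zip/gzip family
xzed = lzma.compress(data, preset=9)     # LZMA: the xz/7z family

print(len(data), len(deflated), len(xzed))
```

LZMA usually wins on output size at the cost of CPU and time; deflate’s advantage is that basically everything can read it.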
If a big MMO closes, that’d be rough, but those types of games tend to form communities anyway, like Minecraft. You don’t have to pay Microsoft a monthly rate to host a Java server for you and a few friends; you just need a little IT knowledge and maybe a helper package to get you and your friends going. It’s still a single binary, even if it doesn’t run well on a laptop at larger settings.
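For reference, the whole “single binary” story really is about this much (grab the jar from Mojang and accept the EULA first; the heap sizes are just an example):

```
java -Xms1G -Xmx2G -jar server.jar nogui
```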
With a big MMO, support groups and turnkey scripts will form to get stuff working as well as it can, along with online forums for finding existing open community servers run by people who have the hardware and knowledge to host a few dozen to a few hundred of their closest friends.
Life finds a way.
If it’s a complicated multi-node package where things need to be split up as gateway/world/area/instance servers, the community servers that form may trend toward larger player groups, since the knowledge and resources to run that are more specialized.
Not on a flash-based motherboard (so basically almost everything recent). On modern systems, usually the only thing the battery powers is the clock, which is why they have a separate reset-to-defaults header/button/switch.
(The CMOS memory of old has been replaced with flash memory, à la an SD card or flash drive.)
They confirmed, outside of the press release that went to media, that a range of CPUs was affected by a fabrication issue. So while we know about the i7/i9, the manufacturing process is often shared between different CPU models, and with Intel being opaque about what they found, it’s hard to understand what actually happened and what’s truly unaffected.
Ref: GamersNexus
https://youtu.be/OVdmK1UGzGs
TL;DR: a lot of people probably keep using the thing they know, as long as it works well enough not to be a bother.
Many, many years ago when I learned, I think the only ones I found were Apache and IIS. I had a Mac at the time, which came preinstalled with Apache 2, so I learned Apache 2 and got okay at it. While by release dates Nginx and HAProxy most definitely existed, I don’t think I came across either in my research. I don’t have any notes from the time, because I was in high school and didn’t take any.
When I started doing Linux things, I kept using Apache for a while because I knew it. Then I found Nginx and learned it in a snap, because its config is more natural-language and hierarchical than Apache’s XML-ish monstrosity. For the next decade I kept reaching for Nginx whenever I needed a webserver fast, because I knew it would work with minimal tinkering.
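To show what I mean about the config style, a bare-bones Nginx reverse proxy block is something like this (hostname, port, and cert paths made up):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/certs/app.pem;
    ssl_certificate_key /etc/nginx/certs/app.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```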
Now, as of a few years ago, I knew that HAProxy, Caddy, and Traefik all existed. I even tried out Caddy on my homelab reverse proxy server (which has about a dozen applications routed through it). The first few sites were easy: just let the automatic Let’s Encrypt do its job. But once I got to the sites that needed manual TLS (I have both an internal CA and Cloudflare’s origin HTTPS cert) and other special config, Caddy started becoming as cumbersome as my Nginx conf.d directory. At the time, I also didn’t have a way to get software updates easily on my then-CentOS 7 server, so Caddy was okay enough, but it was back to Nginx for me because it was comparatively easier to manage.
HAProxy is something I’ve added to my repertoire more recently. It took me quite a while and lots of trial and error to figure out the config syntax, which is quite different from anything I’d used before (except maybe kinda like Squid, which I had learned not a year prior…), but once it clicked, it clicked. Now I have an internal high-availability (+ keepalived) load balancer that can handle many backend servers, do wildcard TLS termination, and validate backend TLS certs. I even got LDAP and LDAPS load balancing to AD working on it for services like Gitea that don’t behave well when there’s more than one LDAPS backend server.
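For flavor, the shape that eventually clicked looks roughly like this (hostnames, IPs, and paths all made up; TLS terminates at the frontend while the backends’ certs get validated against my internal CA):

```
frontend https_in
    mode http
    bind :443 ssl crt /etc/haproxy/certs/wildcard.pem
    default_backend app_servers

backend app_servers
    mode http
    balance roundrobin
    option httpchk GET /healthz
    server app1 10.0.0.11:8443 check ssl verify required ca-file /etc/haproxy/internal-ca.pem
    server app2 10.0.0.12:8443 check ssl verify required ca-file /etc/haproxy/internal-ca.pem
```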
So, at some point I’ll get around to converting that everything-reverse-proxy to HAProxy. But I’ll probably need to deploy another VM or two, because the existing one also hosts a static web server, and I’ve been meaning to break up that server’s roles anyway (long ago, it was my everything server, before I used VMs).
A static PNG tile database for world.osm would be even larger. Without a solid vector tile solution, the current format is the most efficient for disk space.
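Back-of-envelope for why static tiles blow up, using the standard slippy-map scheme (4^z tiles per zoom level z):

```python
# Total slippy-map tiles for zoom levels 0 through 19
total = sum(4**z for z in range(20))
print(f"{total:,}")  # ~366 billion tiles, before you even multiply by PNG size
```

In practice renderers skip or dedupe the endless identical ocean tiles, but even a small fraction of that count at a few KB per tile lands in the hundreds-of-TB range.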
Also, there’s a post-render CDN cache in front of the rendering layer to offset load, plus, I think, some internal caching in renderd. It’s a pretty complex machine, but databases of the world are in fact huge.
OSM’s core tile servers have dozens of cores and hundreds of GB of RAM each, and the rendering and lookup databases run to a few TB. That’s not trivial to self-host, especially since a single self-hosted tile server can’t always keep up with a user flick-scrolling.
Edit: car GPS maps (the old TomTom and Garmin devices) have significantly less metadata embedded than a modern map.
Okay, that’s fair. Their pricing is awful in general, and it’s especially egregious for something that used to be free.
There are probably specific ticket queues and wiki/doc spaces for each support team.
Problem with an app? Send it to the internal dev/support team. Then if needed it gets routed.
How was Parsec before the acquisition?
I only really have experience with it after the acquisition, and it’s the only Unity product I’ve actually found that I like. My only major complaint is that it’s not compatible with Palo Alto’s base configuration, but that’s really more of a Palo Alto problem than a Parsec problem.
A well-managed server won’t init an arbitrary drive and has a lock screen with a password, so the most a rubber ducky would be able to do is reboot it. Which is something you’d already be able to do if you had access to the front panel with the power button.
Tbh, this is a programming community. While yes, a quick summary would not have gone amiss, I don’t fault OP for not including one. RFCs are often pretty dry, but this one is reasonably straightforward: a subset of JSON that reduces some ambiguity.