• 25 Posts
  • 131 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • There are really two reasons ECC is a “must-have” for me.

    • I’ve had some variant of a “homelab” for probably 15 years, maybe more. For a long time, I was plagued with crashes, random errors, and other instability. Once I stopped using consumer-grade parts and switched over to actual server hardware, these problems went away completely. I can actually use my homelab as the core of my home network instead of just something fun to play with. Some of this improvement is probably due to better power supplies, storage, and server CPUs, but ECC memory could very well play a part. This is just anecdotal, though.
    • ECC memory has saved me before. One of the memory modules in my NAS went bad; ECC detected the error, corrected it, and TrueNAS sent me an alert. Since most of the RAM in my NAS is used for a ZFS cache, this likely would have caused data loss had I been using non-error-corrected memory. Because I had ECC, I was able to shut down the server, pull the bad module, and start it back up with maybe 10 minutes of downtime as the worst result of the failed module.

    I don’t care about ECC in my desktop PCs, but for anything “mission-critical,” which is basically everything in my server rack, I don’t feel safe without it. pfSense is probably the most critical service, so whatever machine is running it had better have ECC.

    I switched from bare metal to a VM for largely the same reason you did. I was running pfSense on an old-ish Supermicro server, and it was pushing my UPS too close to its power limit. It’s crazy to me that yours only pulled 40 watts, though; I think I saved about 150-175W by switching it to a VM. My entire rack contains a NAS, a Proxmox server, a few switches, and a couple of other miscellaneous things. Total power draw is about 600-650W, and it jumps over 700W under heavy load (file transfers, video encoding, etc.). I still don’t like the idea of having pfSense on a VM, though; I’d really like to be able to make changes to my Proxmox server without dropping connectivity to the entire property. My UPS tops out at 800W, though, so if I do switch back to bare metal, I realistically only have 50-75W to spare.


  • corroded@lemmy.world to Selfhosted@lemmy.world · Low Cost Mini PCs · 2 days ago

    I have a few services running on Proxmox that I’d like to switch over to bare metal. pfSense, for one. No need for an entire 1U server, but running on a dedicated machine would be great.

    Every mini PC I find is always lacking in some regard. ECC memory is non-negotiable, as is an SFP+ port or the ability to add a low-profile PCIe NIC, and I’m done buying off-brand Chinese crap on Amazon.

    If someone with a good reputation makes a reasonably-priced mini PC with ECC memory and at least some way to accept a 10Gb DAC, I’ll probably buy two.



  • It really depends on how far back you want to look.

    If the US were to suddenly stop projecting its interests internationally, then, as others have mentioned, the world would likely become somewhat more socialized. European countries would probably step up and try to keep China in check, but without the US contributing to these efforts, it would put a significant strain on their military resources.

    If the US had taken an isolationist policy 100 years ago, then there is a good chance that WW2 would have been won by the Axis. The Allied forces likely would have put up a good fight, but I’m not sure they would have emerged victorious against the combined Axis forces. The war in the Pacific would have raged on much longer, and without nuclear weapons, there would have been an extreme loss of life invading Japan. At the very least, WW2 would have lasted much, much longer than it did. Depending on the outcome, plenty of countries might currently be speaking German and debating whether they should tear down 80-year-old statues of Hitler.



  • This is only true when you have a single transmission medium and a fixed band. Cable internet is a great example; you only have a few MHz of bandwidth to be used for data transmission, in either direction; the rest is used up by TV channels and whatever else. WiFi is also like this; you may have full-duplex communications, but you only have a very small portion of the 2.4 GHz or 5 GHz band that your WiFi router can use.

    Ethernet is not like this. You have two independent transmission lines; each operates in one direction, and each is completely isolated from any signals outside the transmitter and receiver. If your Ethernet hardware negotiates a 10Gb connection, you have 10Gb in one direction and 10Gb in the other. Because the transmission lines are separate, saturating one has absolutely no effect on the other.
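    If you want to see this in practice, a recent iperf3 (3.7 or newer) can drive both directions of a link at once; the server hostname here is just a placeholder:

    ```shell
    # Saturate both directions of the link simultaneously.
    # On a full-duplex link, neither direction steals capacity from the other.
    iperf3 -c server.example.com --bidir
    ```

    On a 10GbE link, both the TX and RX streams should report close to line rate at the same time, which wouldn’t be possible on a shared half-duplex medium.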


  • You are absolutely correct; I phrased that badly. Over any kind of RF link, bandwidth is just bandwidth. I was referring more to modern Ethernet standards, all of which assume a separate link for upload and download. As far as I am aware, even bi-directional fiber links still work symmetrically, just using different wavelengths over the same fiber.

    If you have a 10GBASE-T connection, only using 5Gb in one direction doesn’t give you 15Gb in the other. It’s still 10Gb either way.


  • This is a really good explanation; thank you!

    There is one thing I’m having a hard time understanding, though; I’m going to use my ISP as an example. They primarily serve residential customers and small businesses. They provide VDSL connections, and there isn’t a data center anywhere nearby, so any traffic going over the link to their upstream provider is almost certainly very asymmetrical. Their consumer VDSL service is 40Mb/2Mb, and they own the phone lines (so any restriction on transmit power from the end-user is their own restriction).

    To make the math easy, assume they have 1000 customers and they’re guaranteeing the full 40Mb even at peak times (this is obviously far from true, but it makes the numbers easy). This means they have at least a 40Gb link to their upstream provider. They’re using the full 40Gb on one side of the link, and only 2Gb on the other. I’ve used plenty of fiber SFP+ modules, and I’ve never seen one that supports any kind of asymmetrical connection.
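    The aggregate math above can be sketched quickly (all figures are the hypothetical ones from the example, not real ISP numbers):

    ```python
    # Aggregate load on the upstream link for a hypothetical 40/2 VDSL ISP.
    customers = 1000
    down_mbps = 40   # guaranteed per-customer downstream
    up_mbps = 2      # per-customer upstream

    link_gbps = customers * down_mbps / 1000        # link sized for downstream: 40.0 Gb
    aggregate_up_gbps = customers * up_mbps / 1000  # actual upload load: 2.0 Gb
    idle_up_gbps = link_gbps - aggregate_up_gbps    # 38.0 Gb of upload capacity idle
    print(link_gbps, aggregate_up_gbps, idle_up_gbps)
    ```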

    With this scenario, I would think that offering their customers a faster uplink would be free money. Yet for whatever reason, they don’t. I’d even be willing to buy whatever enterprise-grade equipment is on the other end of my 40/2 link to get a symmetrical 40/40; still not an option. Bonded DSL, also not an option.

    With so much unused upload bandwidth on the ISP’s side, I would think they’d have some option to upgrade the connection. The only thing I can think of is that maintaining accounts for multiple customers with different service levels costs more than selling some of their unused upload bandwidth.





  • Like several people here, I’ve also been interested in setting up an SSO solution for my home network, but I’m struggling to understand how it would actually work.

    Let’s say I set up an LDAP server. I log into my PC, and now my PC “knows” my identity from the LDAP server. Then I navigate to the web UI for one of my network switches. How does SSO work in this case? The way I see it, there are two possible solutions.

    • The switch has some built-in authentication mechanism that can authenticate with the LDAP server or something like Keycloak. I don’t see how this would work as it relies upon every single device on the network supporting a particular authentication mechanism.
    • I log into and authenticate with an HTTP forwarding server that then supplies the username/password to the switch. This seems clunky but could be reasonably secure as long as the username/password is sufficiently complex.

    I generally understand how SSO works within a curated ecosystem like a Windows-based corporate network that uses primarily Microsoft software for everything. I have various Linux systems, Windows, a bunch of random software that needs authentication, and probably 10 different brands of networking equipment. What’s the solution here?



  • If you’re concerned about power, I don’t see any reason it should matter at all where you put your cameras, as long as your PoE switch is rated to supply them. If your NVR has some kind of built-in PoE switch, then you can probably avoid having a second PoE switch for your cameras by co-locating them in the same network closet, but PoE switches are so cheap, I’d say set it up however it’s most convenient for you.

    To answer your question of “is it possible,” it absolutely is. I’m doing something similar. I have a lot of cameras, but two of them are PoE and are quite a distance away from my NVR server. They feed into a PoE switch that connects to a second switch that acts as the main switch for the building. That switch has a fiber connection to a third switch that lives in my server rack, and that switch has a DAC connection to my NVR server. They work just as well as the ones plugged directly into my rack switch.

    The only real concern I see is bandwidth. If your cameras and NVR are on the same switch, you’d avoid having to pass the data from the cameras out across your network to the switch that has your NVR. For 4 cameras, though (even at 4K), your total bandwidth is going to be far less than what even a 1Gb network can handle. It’s easy to saturate a switch with other traffic, though, so this is going to depend largely on your network topology and what you’re using your network for.
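    As a rough sanity check (the 16 Mbps per-camera figure is an assumed high-end 4K H.265 bitrate, not a measured one):

    ```python
    # Estimate total camera traffic against a gigabit link.
    cameras = 4
    bitrate_mbps = 16        # assumed worst-case per-camera main-stream bitrate
    link_mbps = 1000         # gigabit Ethernet

    total_mbps = cameras * bitrate_mbps   # 64 Mbps of camera traffic
    utilization = total_mbps / link_mbps  # ~6% of the link
    print(f"{total_mbps} Mbps total, {utilization:.0%} of a 1Gb link")
    ```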

    I would highly encourage you to keep your IP cameras on a separate VLAN, though. IP cameras all have a tendency to want to “call home,” and while that might just be for something as simple as checking for firmware updates, I don’t want my cameras connecting to anything outside my network without my permission.


  • Got my two CRS310s, set them up, and they’re working well. I’m amazed at how configurable they are in comparison to my old Zyxel switches.

    I’m not sure I’m setting up VLANs correctly, though. There’s an option to set up VLANs under Interface or Bridge. I have several ports that pass more than one tagged VLAN, and as far as I can tell, that’s only possible on the Bridge. So my Interface -> VLAN setup is completely empty, and my Bridge -> VLAN setup contains all my VLAN assignments.

    I’ve researched this a bit, and it seems like I’m doing it the right way, but I’m a bit concerned I’m passing the VLANs off to the CPU instead of the switch chip. This is the first switch I’ve used with this kind of VLAN setup. Am I on the right track?
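    For reference, this is roughly what the bridge-based approach looks like from the RouterOS CLI; the bridge name, ports, and VLAN IDs here are made-up examples, not a recommended config:

    ```
    # Trunk VLANs 10 and 20 on the SFP+ uplink, access VLAN 20 on ether2
    /interface bridge vlan add bridge=bridge1 tagged=bridge1,sfp-sfpplus1 vlan-ids=10,20
    /interface bridge port set [find interface=ether2] pvid=20
    # VLAN filtering must be enabled on the bridge for any of this to apply
    /interface bridge set bridge1 vlan-filtering=yes
    ```

    On CRS3xx-series switches, bridge VLAN filtering is handled by the switch chip rather than the CPU; you can check that your ports show the “H” (hardware offload) flag in /interface bridge port print.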

    Also, my 1Gb SFP modules only work if I disable autonegotiation; then they show as “Up,” with all the lights on, even if no cable is attached. Not a big deal, really, but strange. I don’t have this issue with my 10Gb SFP+ modules.




  • I understand what you’re saying. As far as using your school account to sign in to Microsoft Office, the fact that you use a school account should not make a difference in terms of privacy. If you’re using Outlook and Teams for school, just don’t use them for personal things, and you should be fine. If you’re using the web versions through a web browser, then you have nothing at all to worry about. If you actually install the apps, you still likely have nothing to worry about, although I would make sure they’re at least signed out and closed when you’re not using them. You don’t want to accidentally send a message to your school’s Teams group when you’re drunk and watching YouTube videos at 3am.

    As far as “enrolling in your school’s environment,” I’m afraid I don’t know what you mean by that. I know that some companies will install corporate nanny-ware on systems that they issue out to their employees (you’ve probably heard about CrowdStrike), but if you’re using a personal laptop for school, that’s not going to happen unless you hand it over to the school’s IT department and say “please fuck up my computer.”

    Most likely the “cloud” file you see in your documents is a Microsoft OneDrive account that comes included with your school’s Office subscription. You can use it as a backup for schoolwork, ignore it completely, or just uninstall OneDrive. I like keeping my important stuff on local storage, but if you want a place to back up a project, go ahead and use it. Maybe don’t copy your porn stash over to your OneDrive account.

    I am a strong advocate for keeping things separate on your computer. Not necessarily from a privacy standpoint, but more so just to keep everything tidy and easy to manage. If I were just using Teams and Outlook, maybe logging into an online portal, I’d probably just do exactly that without a second thought. If you find that you’re installing a lot of different applications for your studies, like I mentioned before, you might consider setting up a VM.

    A VM (Virtual Machine) essentially acts as a second computer within your own. You would install a hypervisor (I’d recommend VirtualBox for you), and inside the hypervisor, you can create separate “virtual” computers. You install your operating system, boot up the virtual machine, and use it just like you would a whole separate PC. When you’re done, you shut it down, and when you no longer need it, just delete the VM, and your PC isn’t cluttered with a bunch of stuff you don’t need. The “hard drive” for your VM lives in a single file, and once that file is deleted, it’s as if your virtual machine never existed.

    One way to think of it is like building a house inside a room of your own house. You still have a bedroom, a kitchen, bathrooms, and a living room. Only in this “virtual” house, you can paint the walls, throw parties, trash the carpet, invite hobos to live on your couch, whatever you want. When the house gets too trashed to live in any more, you just hit “delete” and it disappears; the actual house you live in is still in pristine condition.
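    If the GUI ever feels tedious, the whole VM lifecycle can also be driven from VirtualBox’s VBoxManage command line; the VM name, OS type, and sizes below are arbitrary examples:

    ```shell
    # Create and register a new VM, then give it RAM and CPUs
    VBoxManage createvm --name "school-vm" --ostype Ubuntu_64 --register
    VBoxManage modifyvm "school-vm" --memory 4096 --cpus 2

    # The VM's entire "hard drive" is this one 40 GB file
    VBoxManage createmedium disk --filename school-vm.vdi --size 40960
    VBoxManage storagectl "school-vm" --name SATA --add sata
    VBoxManage storageattach "school-vm" --storagectl SATA \
        --port 0 --device 0 --type hdd --medium school-vm.vdi

    VBoxManage startvm "school-vm"

    # When you no longer need it, this removes the VM and deletes its files
    VBoxManage unregistervm "school-vm" --delete
    ```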

    So, to summarize: my opinion is to just use your computer normally. Log into whatever school resources you need and don’t worry. If you need to install a whole bunch of school-related stuff that you don’t want cluttering up your PC, set up a VM.

    It’s probably also worth noting that your school almost certainly isn’t trying to damage your computer or catch you doing something you want to keep private. They’re providing resources (a free Office subscription, for example) that they think might help facilitate your studies. You can use those resources, or not, but your computer is still your personal property, and your school isn’t trying to infringe on that.



  • I had no idea. MikroTik is definitely new to me. For a long time, I always used surplus or recycled enterprise-level hardware, and that usually ended up being Dell, HP, or Cisco. When I did my most recent upgrade, I replaced most of that with Trendnet or TP-Link; it just made more sense, and I recognized the brand names.

    The fact that MikroTik has a CLI at all is kind of a plus to me, even if it’s horrible. Regardless, though, my network setup usually consists of Factory Default Settings -> Assign a Static IP -> Configure port-based VLANs. It’s not particularly advanced. Most likely I wouldn’t even need to use anything other than the web-based management interface.

    I really appreciate the suggestion. MikroTik makes a few switches that would work perfectly for me, but I had written them off as a “random white-label brand.” I think I’ll probably be replacing my Zyxel switches with MikroTik.



  • I haven’t used the Omada switches, but I’ve had good luck with TP-Link in the past.

    Switch fans are almost always going to have some level of noise. The smaller the fan, the faster it has to spin to hit a given airflow target. I did a fan swap on one of my Dell switches a few years ago, and while it did help, it took it from “profoundly annoying from behind a closed door” to “not too bad if there’s TV or music on.” The Omada switches look like they might be a good solution, though.