Taking a New Direction with the Server

As I said a couple weeks ago, I got my DL380 server going. At least for a little while, anyways. I started testing some services on it like WordPress and Grocy (the latter of which will get a post of its own in a couple weeks). I was satisfied with the web services, so I decided to try getting my stupid Ceton TV tuner card set up in the server. I got Proxmox ready to do a PCI passthrough of the card to a Windows 10 VM and then installed the card. To do so, I had to detach the SAS cables from the RAID card. Unfortunately, the server wouldn’t boot up correctly with the tuner card installed. So I took the tuner out, which meant detaching and reattaching the SAS cables again. I made sure to connect them to the same ports as before, but to my annoyance, when I started the server, it couldn’t boot from the hard drives anymore. I don’t know if disconnecting the cables ruined my arrays, or if I mistakenly connected the cables to the wrong ports on the RAID card, or what. This enterprise server seems so touchy. I guess part of that might be because I’m not really using it as it was intended. Anyways, I’m going to cut my losses and use some of the parts to put together my own “white box” server.
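
For anyone curious, the prep work on the Proxmox side is just the standard PCI passthrough setup. A rough sketch, assuming an Intel CPU:

# in /etc/default/grub, enable the IOMMU
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# then rebuild the boot config and load the VFIO modules at boot
update-grub
echo -e "vfio\nvfio_iommu_type1\nvfio_pci\nvfio_virqfd" >> /etc/modules

After a reboot, the tuner shows up under the VM’s hardware settings as a PCI device you can add.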

I should have built my own to begin with, but I couldn’t find any parts that could beat the price-to-performance ratio of the DL380 on paper. I think part of the reason for the DL380’s excellent price/performance is the relatively unloved LGA 1356 socket for the processors. LGA 2011 is from about the same era, but it was used in a lot of servers and desktops. Those processors and motherboards, even many years later, are a decent amount more expensive than similar LGA 1356 parts. At the beginning of this, I was unaware of the relative rarity of 1356 parts.

I found a dual-socket Intel motherboard that should do the trick. I’m pairing it with a Dell H310 RAID card, which I’ll be flashing to IT mode so I can use ZFS, and an HP NC365T NIC. This motherboard has enough slots to accept my TV tuner card, so hopefully it’ll boot with it installed. There’s also a PCIe x16 slot, so if I’m really lucky I might be able to put my RX 480 GPU in there too. The motherboard is the CEB form factor, which the internet tells me sits between regular ATX and E-ATX in size, and uses the same I/O shield and screw holes as ATX. I picked an Antec P101 case to hold everything. It’s rated to hold an E-ATX board and eight 3.5″ hard drives. Plus, it has an external 5.25″ drive bay, so I can move my Blu-ray drive over from my desktop and set up an automatic ripper.

Right now the only part I don’t have on order is a power supply. I want something that’s at least 750 watts, and I’d probably go up to 1000 watts. I definitely want at least 80+ Gold efficiency, and I need two CPU power connectors for the dual-socket board. This narrows down my selection, but not severely. Both EVGA power supplies I have in my house right now meet those criteria. Unfortunately, it appears the coronavirus has totally wrecked the supply of power supplies. Hardly anything is available, let alone decent units, and what is available is two to three times more expensive than usual. I’m going to keep my eyes peeled for reasonably priced used ones, but I may have to wait a while before I get this server going. I’ll update when I get it built.

Fixing Internet Speed on Virtualized pfSense

It’s been a few weeks since I set up my pfSense router inside Proxmox inside an HP desktop computer. After I set it up, I noticed my internet speeds weren’t quite what I was getting with the Orbi acting as the router. With the Orbi, I generally got somewhere around 650-700 Mbps for downloads and 700-800 Mbps for uploads. With pfSense, I was getting around 520-550 Mbps for both. My internet service should be 1 Gbps in both directions (actually a theoretical maximum of about 940 Mbps once Ethernet and protocol overhead is accounted for). I set up pfSense as the Great 2020 Work From Home was in full swing, so I thought maybe Verizon’s network had more concurrent users during the day slowing me down. I didn’t really think anything of it until today, when I was downloading a 100 GB game.

When I first set up pfSense, I told Proxmox to give it two NICs of the VirtIO paravirtualized type. When I got pfSense set up, I noticed it reported the speed of the two interfaces as 10 Gbps, and my web page loading times were very long. I assumed this was a duplex mismatch and changed the NIC type to Intel E1000. Pages loaded just fine after that. It turns out changing the NIC type was a mistake. VirtIO was the correct type, and the 10 Gbps speed referred to the link to the Proxmox virtual switch, not the link to the internet or my physical Cisco switch. I changed back to VirtIO and disabled all hardware offloading under System > Advanced > Networking in pfSense.
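
Switching the model back is just a change to the VM’s network devices in Proxmox. From the Proxmox command line it would look something like this, where the VM ID, bridge names, and MAC addresses are placeholders (reusing each NIC’s existing MAC keeps anything upstream from seeing a new device):

qm set 100 --net0 virtio=AA:BB:CC:00:11:22,bridge=vmbr0
qm set 100 --net1 virtio=AA:BB:CC:00:11:33,bridge=vmbr1

Since the guest device names change from em0/em1 to vtnet0/vtnet1, pfSense will prompt you to reassign the interfaces on the next boot.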

I also happened upon a Reddit post describing the same issue I had. I followed the directions to install ethtool and add one line like

post-up ethtool -K vmbr0 tx off

for each virtual and physical interface in /etc/network/interfaces.
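
For each bridge, that ends up looking something like this in /etc/network/interfaces (the interface names and addresses here are placeholders, not my actual ones):

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # turn off TX checksum offload on the bridge and its physical port
    post-up ethtool -K vmbr0 tx off
    post-up ethtool -K eno1 tx off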

I also discovered that while pfSense’s CPU usage was only in the single digits during normal web browsing, it hit close to 100% during speed tests and large downloads. I resolved that by adding another CPU core in the Proxmox hardware configuration. CPU usage is now 70-80% during big downloads.
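
The core count is another one-liner from the Proxmox shell, something like the following (VM ID 100 is again a placeholder); it takes effect the next time the VM restarts:

qm set 100 --cores 2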

These changes fixed everything up. My downloads and uploads now easily hit their maximum possible speeds of 940 Mbps, at least when other internet usage is kept to a minimum. I wish I had done this last year when we first got Fios, because I never got the advertised gigabit speeds with the Orbi router. I guess the Orbi wasn’t designed to handle a gigabit WAN connection. pfSense handles it with no trouble, at least once it’s properly configured.

The Server is Running

Long story short, I couldn’t figure out how to get the DL380e going again. I decided it was best to just get another one since they’re so cheap. The new one came in over the weekend, and other than some damage to the hard drive cage, it’s working perfectly. I swapped the damaged cage for the good one from the broken server. Setup was fairly easy. I used the HP SmartArray tool to set up two arrays: a single SATA drive for the hypervisor, and six 3 TB SAS drives in RAID 6 for my VMs. That gives me 12 TB of usable storage. I got some virtual machines running today, and they’re working great.
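
I used the graphical SmartArray tool, but the same arrays could presumably be built from the command line with HP’s ssacli utility. A rough sketch, with the controller slot and port:box:bay drive addresses made up for illustration:

# single SATA drive as its own logical drive for the hypervisor
ssacli ctrl slot=0 create type=ld drives=2I:1:7 raid=0

# six 3 TB SAS drives in RAID 6 for VM storage
ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4,1I:1:5,1I:1:6 raid=6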

One thing I’m not too thrilled about is the power usage. With a couple VMs going, iLO reports power draw of around 165 watts. I was hoping for more like 100 or 120 watts. It’s hard to justify that kind of power usage. On the other hand, it’s hard to beat the performance per dollar of a used server. The server itself, processors, and memory came to around $250 for a machine with 16 cores, 32 threads, and 46 GB of memory. I could get something that uses less power, but it would cost more for less performance. My ideal machine would probably be a 16-core Threadripper, but the processor and motherboard alone cost more than my whole server. Maybe I can pick up some used parts in a few years when it’s time to retire the HP server.