HP DL380 G5 – First Look

Hey, Krafting’s here! Today, we’ll take a tour of a somewhat old HP G5 server. This is an HP DL380 G5, a 2U server from around 2006. This one is way cleaner than any server I’ve looked at before, and it is in pretty good shape. So let’s get started!

The Outside

From the outside, this server looks like any other 2U server. The front-facing plate is gray-colored, which was also the case for most HP servers from G1 to G4!

On the front, we can find everything we can expect of a server from this era. This one has a DVD/CD player, a VGA connector, 2 USB 2.0 type A ports, the power button, a big fault indicator, and 8x 2.5″ drive bays. This would have made a nice storage or compute server at the time, as 2.5″ drives often offered capacities similar to their 3.5″ counterparts.

Here are some closer shots of the front panel. As we can see, there are plenty of LEDs next to every major component to indicate if any fault is occurring. There are LEDs for both power supplies, all the RAM, the CPUs, the VRMs, and of course all the fans. What’s interesting is, we also have LEDs to indicate the storage configuration, whether it’s a RAID mirror, or even whether we have a spare drive online and working, pretty neat! We don’t have all these indicators on servers nowadays; we usually only get a single LED that changes color depending on the error.

This particular server had a sticker on the front when I got it, with “PVE-02” written on it before I removed it. I can only imagine this server served as a hypervisor running Proxmox in its old life.

On the back of the server, we also have all the usual goodies: 2x USB ports, a VGA output, a serial port, and even 2x PS/2 ports for keyboard and mouse. It also has 2 Ethernet ports surrounded by 2 PCIe brackets for small form-factor cards and a dedicated iLO 2 port. iLO is HP’s way of managing the server even when it’s offline. It can be used to remotely see the status of the server, power the server on and off, and even access the console remotely.
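As an aside, iLO 2 also speaks standard IPMI over the LAN, so besides the web UI you can poke at it with a generic tool like ipmitool. A sketch of what that looks like (the address and credentials here are made-up placeholders, and this obviously needs a live iLO on the network):

```shell
# Hypothetical iLO address and credentials -- substitute your own.
# Query the power state through the iLO's IPMI interface:
ipmitool -I lanplus -H 192.168.1.42 -U Administrator -P 'secret' chassis power status

# Power the server on remotely:
ipmitool -I lanplus -H 192.168.1.42 -U Administrator -P 'secret' chassis power on

# Dump the sensor readings (temperatures, fans, PSU status):
ipmitool -I lanplus -H 192.168.1.42 -U Administrator -P 'secret' sdr list
```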

Also, both on the front and on the back, we have an ID LED, which can be turned on remotely. In a rack with a lot of servers, this can help you identify at a glance which server you are working on right now. Once you finish your work, you can press the ID button to turn the LED off (or turn it back on later).

Lastly, this server is equipped with two power supplies (PSUs), each rated at 1000 Watts maximum. However, we can clearly tell that these two PSUs are not from the same era. This server probably had a faulty power supply at some point that had to be replaced. I believe the PSU with the yellow sticker is the original one, while the other is a replacement unit with a date of 2009 on it.

Here are some other goodies of the server that I also wanted to share. On the top of the server, we can see an original HP sticker with information on getting help or support for the server. The GIF shows the mechanism that slides the server off its rails, if the server had any rails of course. I find it pretty satisfying to watch, so I made a small GIF of it. One last thing I found weird but useful is the small allen key on the back of the server. It’s sturdily attached to the server and can be used during maintenance to remove some components.

To open the server, we have to pull the small tab in the middle of the top plate. This tab is really awkward and hard to pull: you pull it forward, and the plate slides backward. It’s hard to explain, but it feels really weird to use. Once open, on the inside of the plate, we can see a boatload of stickers and instructions explaining how to do maintenance on the server and where components are located, nice!

The Inside

Once inside, we can’t immediately see all the components. Some of them are hidden behind a large black plastic tray used to direct the intake air through the right parts.

Top view of the opened server.

What we can see, however, are the 6 intake fans on the front, here with the little purplish-red tabs on them. This red color also means we can remove them while the server is running and replace them without causing downtime. The 2 PSUs also have this red color for the same reason. And there is also a blue color all over the server, which means, you guessed it, you have to power down the server to replace or remove those parts!

We can immediately see the RAID controller, which controls the 8 drive bays in the front. As this is an enterprise server, it has hardware RAID instead of software RAID. Back in its era, hardware RAID was way faster than making the CPUs do all the RAID work. The size of the cables and connectors used is also interesting; they are BIG compared to the cables in newer, faster servers. The controller uses an SFF-8484 connector, an old multi-lane SAS interface. It is installed in one of the small form-factor PCIe slots we talked about above!
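To give an idea of the work a hardware RAID card offloads from the CPUs, here is a toy sketch (my own illustration, nothing to do with HP’s firmware) of the parity math behind a RAID 5 array: parity is just a XOR across the data stripes, and a dead disk is rebuilt by XOR-ing the survivors with the parity.

```python
def parity(stripes: list[bytes]) -> bytes:
    """XOR all data stripes together to produce the parity stripe."""
    out = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving: list[bytes], parity_stripe: bytes) -> bytes:
    """Recover a lost stripe by XOR-ing the surviving stripes with parity."""
    return parity(surviving + [parity_stripe])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data stripes on three disks
p = parity(data)                     # parity stripe written to a fourth disk

# Pretend the second disk died: rebuild its stripe from the others.
assert rebuild([data[0], data[2]], p) == b"BBBB"
```

On a mid-2000s machine, doing this byte-by-byte over every write is exactly the kind of load you’d rather hand to a dedicated controller.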

Coming back to those blue colored stickers, we can see two of them, one on the power supply cage and one on the PCIe riser cage. This shows you how to safely remove these parts and put them back on!

To remove the big black plastic cover over the CPUs, we need to remove a few things first. Drive cables are routed through it, so we need to remove them, along with the battery of the RAID controller, which is not attached but just sits in its place. Once this is done, we can lift the cover off to reveal the CPUs, VRMs, and the memory. It also reveals more fan connectors, which means we can put 6 more fans in the server for enhanced airflow!

CPU and Memory

Top view of the CPU and memory without the cover.

With the cover removed, we can finally access the core of the machine.

First and foremost, the RAM is beautiful. The sticks all have heatsinks of different colors: orange, blue, and gray. They are metal, which feels really premium, and the pictures don’t do them justice; they are very beautiful and shiny. This server is equipped with 24GB of total memory, with four 4GB sticks and four 2GB sticks. I’ll definitely keep this RAM just for the cuteness of it! In the picture you can also steal my default iLO password if you want!

To remove the CPUs, we need to lift up the big plate that hides the CPU coolers. Once this is done, we can wiggle each big CPU cooler to unstick it from the CPU underneath. Both CPU coolers have a cool orange metallic cover on them, which makes them look really slick. However, the orange doesn’t mean you can remove them while the server is powered on.

Under those two big dogs, we can find a pair of quad-core Intel Xeon E5345 processors clocked at 2.33 GHz, launched in early 2007. These are not the best CPUs this server can handle, but they are still pretty decent for a server from this era.

Close-up of the VRM modules.

You might ask, what are these two big towers at the side of the CPUs? Well, this is something we do not see anymore on modern servers. Those are VRM modules; they deliver regulated power to the CPUs and the chipset. Nowadays, VRMs are directly embedded into the motherboard and can no longer be replaced easily, but on older hardware it was common to see swappable VRMs like these.

PCIe and Drives

Compared to other servers I reviewed in the past, this one is recent enough to have only PCI Express connectors, no more plain PCI! To remove the PCIe bracket you just have to unscrew the two little blue screws (with the small allen key!) and lift it up. Pretty easy. Once removed, you can insert any card of your choice, and you have two PCIe x16 slots and one PCIe x8 slot.

Note that while the slots are physically x16 and x8, the electrical lane count is probably half that for each slot, as the connector on the motherboard is not wide enough to accommodate all the PCIe lanes.
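For a sense of scale, this server is from the PCIe 1.x era, where each lane runs at 2.5 GT/s with 8b/10b encoding, i.e. about 250 MB/s of usable bandwidth per lane per direction. A quick back-of-the-envelope calculation (my own numbers, not from HP’s spec sheet):

```python
def pcie1_bandwidth_mb_s(lanes: int) -> int:
    """Usable PCIe 1.x bandwidth in MB/s per direction for a given lane count."""
    gt_per_s = 2.5e9      # raw transfers per second, per lane (1 bit each)
    payload = 8 / 10      # 8b/10b line coding: 8 data bits per 10 bits on the wire
    return int(lanes * gt_per_s * payload / 8 / 1e6)  # bits -> bytes -> MB

print(pcie1_bandwidth_mb_s(8))   # a full x8 link: 2000 MB/s
print(pcie1_bandwidth_mb_s(4))   # x4 electrical: 1000 MB/s
```

So even if a physically x16 slot only wires up half its lanes, there is still plenty of headroom for the cards of the time.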

The drive cage had a big blue plastic thingy on it, and I was wondering what it was actually used for. After a bit of thinking, it turns out this whole plastic bracket keeps the drive backplane in place; if you want to swap the backplane, you can simply push it out of the way. It is spring-loaded, so it’s hard to push aside, but I can imagine why: you clearly don’t want your backplane popping out easily.

The server came with two drive sleds and six blank fillers. The black blank fillers are only there to prevent dust from entering the drive cage, while the real sleds let you mount a 2.5″ disk in the server. So I went and got two small SSDs that you might have seen in an older blog post. Once they’re in, you can boot up the server and configure the RAID card, which I will not be doing today; this blog post is long enough as it is!

Power consumption & Boot

With all this hardware, I was wondering how much power this server would consume, so here are the stats! I tested both power supplies, as they are not from the same generation, and I found some interesting results.

PSU \ State        | Powered-off with iLO | Max. powered on | Idle powered on
Newer 2009 PSU     | 15 W                 | 340 W           | 243 W
Older original PSU | 22 W                 | 399 W           | 258 W

I measured three things: first, the power draw when the server is plugged in and only iLO has booted, as the server consumes a bit of power even then; second, the maximum the server hits right after I press the power button; and lastly, the consumption once the server has finished booting and sits idle. As we can see, the older power supply is noticeably less energy efficient than the newer one. But these numbers are still really high, and I wouldn’t recommend running this server 24/7.
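To put those idle numbers in perspective, here is a quick back-of-the-envelope running cost, assuming a hypothetical electricity price of 0.25 EUR per kWh (pick your own local rate):

```python
def yearly_cost(watts: float, eur_per_kwh: float = 0.25) -> float:
    """Yearly electricity cost of a constant load, in EUR (price is an assumption)."""
    kwh_per_year = watts * 24 * 365 / 1000   # W -> kWh over a full year
    return round(kwh_per_year * eur_per_kwh, 2)

print(yearly_cost(243))  # newer PSU at idle: 532.17 EUR/year
print(yearly_cost(258))  # older PSU at idle: 565.02 EUR/year
```

Over 500 EUR a year just to sit idle is a good argument for not running this 24/7.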

The newer PSU does make more noise than the older one. I suspect it’s the fan inside it, so nothing dangerous or concerning.

Also, after the teardown for this article, I tried to boot up the server, and it was chaos. The server made a lot of beeping sounds and lit up a lot of the LEDs on the front. I didn’t capture this, as I just unplugged the server instead to avoid damaging the hardware. I re-seated all of the RAM sticks and the CPUs, and once I had done this, everything was fine! Phew…

GIF showing blinking LEDs of the front diagnostic panel.

Also, I noticed something weird. When booting up, all the fan LEDs were blinking really fast, but once the server calmed down a bit, all the LEDs turned off. You can also see the second power supply LED being ON; this means that power supply is either not receiving power or faulty. In this case, it simply wasn’t plugged in, so the server was just alerting me.

Conclusion

This server was really fun to explore, and I also have its cousin, the DL380 G6, which I might make a post about as well. I have no real use for this server, but I always enjoy looking at old hardware. I will probably end up giving this one away in my area (once I get all this beautiful RAM out though!)

Anyway, I hope you enjoyed this bit of content, and you might consider following me on the Fediverse at: @krafting@mamot.fr or follow this blog directly @admin@blog.krafting.net
