What makes a server a server is often how you manage it: servers should be manageable remotely by default. On this machine, Dell hadn't introduced iDRAC yet, so we are stuck with IPMI in the terminal to manage the server remotely – it mostly lets you power the server on remotely and monitor its status. In this blog post I'll go over the main commands to manage the server and keep it in good shape. I'll also talk about the noise it makes and about regular networking inside an OS, and I'll try to add some more networking PCI cards to the server.
Table of Contents
- Management and IPMI
- Network Speed
- Numbered Lock and Keys
- Adding Hardware
- Ending Note
Management and IPMI
IPMI is a set of computer specifications that allows remote administration of a server; it runs on an always-on module inside the server. This means that even if the server is powered down, IPMI stays available – unless you unplug the server from the wall, of course.
On Neptune – that's the name of this server – there are two gigabit Ethernet ports at the back: the first one can be used for the IPMI over LAN system, and the second one is meant to be a normal networking port inside an installed OS. Even though Dell says you should not use the first port for IPMI and as a normal network port at the same time, it does, in fact, work perfectly in this configuration. After booting any OS, the interface gets an IP address different from the one I configured in the IPMI BIOS settings, and I can still access the IPMI system too! All of this on a single interface.
Note that this should not be done on a production system.
To use IPMI you first need to configure an IP address and a password in the IPMI settings – they can be accessed with Ctrl-E at boot time, as seen in the last blog post. After that, you can use a network-connected computer to access the IPMI shell with the following command:
ipmitool -H 192.168.1.233 -U root -P password shell
Of course, replace root and password with the username and password that you configured in the IPMI settings.
If all goes well, you should now be inside an IPMI shell. To test it, we can display all the sensor data on the server with the sensor command. As we can see, there are a lot of sensors, but most of them don't report any data. The interesting part is the CPU temperatures: on Linux, the sensors command – which is used to display temperatures and other sensor information – doesn't show any data about the CPU temperatures, but through IPMI we do have access to this information.
Also, instead of going into a shell to get the data, you can just use a command to directly output the results:
ipmitool -H 192.168.1.233 -U root -P password sensor
You can also ask for the temperature sensors only, using this command:
ipmitool -H 192.168.1.233 -U root -P password sdr type temperature
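To zero in on just the CPU readings, that temperature output can be piped through grep. This is only a sketch: the IP and credentials are the ones from my setup, and the grep pattern is a guess at how the sensor names appear on this machine.

```shell
#!/bin/sh
# Sketch: filter the temperature sensors down to the CPU entries.
# The grep pattern assumes the sensor names contain "CPU".
cpu_temps() {
    ipmitool -H 192.168.1.233 -U root -P password sdr type temperature \
        | grep -i cpu
}

# Only call out to the server if ipmitool is actually installed here.
if command -v ipmitool >/dev/null 2>&1; then
    cpu_temps
fi
```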
The IPMI shell can also report system errors, and as my server is currently blinking orange – which means there is an error somewhere – I can check the logs with the sel list command to see if anything shows up. SEL stands for System Event Log, which records everything that has happened to the server.
We can see some old logs from 2010 and even one from 2006 – which was the last time the logs were cleared. I have no idea what the “Pre-Init” logs are, but they seem to report a critical voltage. Maybe the server was plugged into a bad outlet at some point, or maybe it happened during a power outage – I guess I'll never know.
As for the 2023 logs, the Thermal Trip seems to happen when the server overheats, according to this article, but it shouldn't be there, as I didn't run any real CPU stress test on the 15th. So I then proceeded to clear the logs using the sel clear command.
After clearing the logs, the LED stopped blinking and stayed a steady blue, as it should when everything is normal! I guess it was just asking me to review the logs and clear them. Later that night, after doing some tests on Windows Server, I noticed the LED was blinking again, so I checked the logs: it seems the CMOS battery will be failing soon, so I might want to find a replacement. It also seems the processors reached their thermal limits again? Which I don't think is true, as the server was holding a steady 40°C in the sensor readings. The server still works fine even though it reports some weird stuff, so I'll change the CMOS battery when I get one and we'll see how it goes.
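Put together, that log review and cleanup boils down to two subcommands. Here is a small sketch with a wrapper function so the connection flags are only typed once – the IP and credentials are the ones from my setup, replace them with yours.

```shell
#!/bin/sh
# Sketch: review, then clear, the System Event Log.
IPMI_HOST=192.168.1.233
IPMI_USER=root
IPMI_PASS=password

# Wrapper so the connection flags don't have to be repeated.
ipmi() { ipmitool -H "$IPMI_HOST" -U "$IPMI_USER" -P "$IPMI_PASS" "$@"; }

if command -v ipmitool >/dev/null 2>&1; then
    ipmi sel list    # read the log first...
    ipmi sel clear   # ...then clear it once the blinking LED is explained
fi
```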
IPMI can also be used to power the server up and down with some simple commands such as:
ipmitool -H 192.168.1.233 -U root -P password power on
ipmitool -H 192.168.1.233 -U root -P password power off
These commands can come in quite handy when you want to automate powering your servers on or off. As an example, maybe you could have Home Assistant turn on your servers an hour before you come back from work, then turn them off at 2am to save on electricity!
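As a sketch of that kind of automation, plain cron can handle the scheduling too. These are hypothetical crontab entries – in a real setup you would keep the password in a protected script rather than in the crontab itself:

```
# Power Neptune on at 17:00 on weekdays, off at 02:00 every night
0 17 * * 1-5 ipmitool -H 192.168.1.233 -U root -P password power on
0 2  * * *   ipmitool -H 192.168.1.233 -U root -P password power off
```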
Also, speaking of the LEDs, I found the documentation for this server, and the ABCD LEDs at the front – which we saw in the previous blog post – seem to be debugging LEDs. Each LED can be orange or green and, depending on the colour combination, the pattern means something different. When Neptune is powered on, all LEDs are green, which means everything is alright – yay! They act like the two-digit display on a modern PC motherboard. You can see the details in the official documentation of this server here, at page 16.
As I stated a few times previously, the server is quite loud: the fans run at 100% at startup, and about 10 seconds later they calm down to around 50%, I'd say. According to IPMI, the fans run at around 5700 to 8000 RPM depending on the fan (see the IPMI screenshots above). This is a 1U server, so noise is expected, but because of it I have to limit myself. I don't live alone, there are people around me, so I try not to make too much noise – and with this server, the noise can be heard in the hallway even with the door closed. So I had to power everything down before 8 pm to not disturb my neighbours.
This server won’t be something I’ll be running 24/7 anyway, but only once in a while.
Network Speed
I wanted to see if I could reach the full gigabit connection speed, so I loaded up iperf3 on my main computer and ran a few tests, and as you can see below, I do indeed get a full gigabit connection on this server!
To run this test, I used the iperf3 -s command on my main computer, which prints a port and waits for connections – port 5201 by default. Then I launched iperf3 -c <IP of my main computer> -p 5201 on the server, which runs a speed test between the two devices.
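Written out, the client side of the test looks like the sketch below – 192.168.1.50 is a placeholder for my main computer's IP, and iperf3 -s must already be running there:

```shell
#!/bin/sh
# Sketch of the throughput test; run this on the server.
# 192.168.1.50 is a placeholder for the machine running 'iperf3 -s'.
TARGET=${1:-192.168.1.50}
PORT=5201   # iperf3's default port

if command -v iperf3 >/dev/null 2>&1; then
    iperf3 -c "$TARGET" -p "$PORT" -t 10   # 10-second TCP test
else
    echo "iperf3 is not installed on this machine" >&2
fi
```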
Which makes me wonder: if a server from this era has gigabit interfaces, why did our computers never evolve much networking-wise? I have a PC with a 2.5Gbit port, but only because it has a high-end motherboard. Everything else has evolved so much in the last few years – CPUs, GPUs, RAM… – and it's sad to see that networking barely evolved at all in consumer computers.
Numbered Lock and Keys
With hardware you don't really know, you always find strange things you've never seen before. That was the case when I saw a number written on the key and lock on the front panel of this server. At first, I thought that every server ordered came with a different key and a different number – which would be awesome for security, but awful if you lose the keys…
So I did some research and found out that these keys ship with a lot of Dell servers, all with the same number: 361. I couldn't find any information on why this number was chosen, so if you know anything about it, please let us know!
Adding Hardware
In the previous blog post, I said I would try to get my hands on a PCI graphics card to do some more gaming. But I couldn't find a usable one – these things are way harder to find than I anticipated; I only have PCIe or AGP graphics cards… I did find two PCI graphics cards, but they are from 1994 and 1997 with only a few MB of graphics memory, which would have been useless in this server.
But I did find some PCI stuff to try in this server. First, a WiFi PCI card, which is 802.11a/g – according to Wikipedia, I should get around 54 Mbit/s. And second, an old RJ45 network card that can do 100 Mbit/s.
As you can see, these cards don't have the same connector type, despite both being PCI. From Wikipedia, we can see that there are six PCI connector variants. My server only accepts 3.3 V or Universal cards (32-bit or 64-bit), which means I won't be able to test this RJ45 card in it. But! I had other PCI network cards lying around, and those have a universal PCI connector. I also found some old WiFi cards not worth testing, as they were almost identical to the one I already have.
I will try the WiFi card first, and then I'll put the D-Link RJ45 card inside. I might keep it there, as I don't have any other purpose for the PCI slot, and another RJ45 port doesn't hurt that much – especially in a server that will only be turned on once a month for backup purposes. More on that in the next blog post!
Installing a card in the server is pretty easy: pop the blue stopper out of the server – at the bottom right in the picture – remove the placeholder PCI bracket, put your card in, put the blue stopper back in place, and you've successfully installed the card!
This server is a 1U server so it will only accept cards that are 1 slot high.
Once the PCI card was installed, I booted the server up again. I know WiFi on Linux can be a bit of a hassle sometimes, but Linux has really come a long way: the card was detected and worked right out of the box – or out of the cupboard, I suppose, in this case. You can see it in the picture below, with the ID 03:07.0. Also, it seems the server initializes the PCI slot before the second networking port, which would explain why it's sandwiched between the two gigabit interfaces.
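For reference, a quick way to spot the card from the terminal is lspci (from the pciutils package): it prints one line per PCI device, prefixed with the bus:device.function ID such as 03:07.0. This sketch simply filters the list down to network devices:

```shell
#!/bin/sh
# Sketch: list PCI network devices; each line starts with the
# bus:device.function ID, e.g. 03:07.0 for the card in the PCI slot.
list_net_pci() { lspci | grep -i -E 'network|ethernet' || true; }

# lspci is Linux-specific, so skip quietly if it isn't available.
if command -v lspci >/dev/null 2>&1; then
    list_net_pci
fi
```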
After connecting to the network using the GUI without any issue – note that I connected to a 2.4GHz network, not a 5GHz one – I ran an iperf3 test, same as I had done earlier.
20 Mbit/s on average – not even close to the advertised 54 Mbit/s. But I'll take it; for an old server like this, it's enough. I won't keep the WiFi card in the server though, so let's now try the D-Link RJ45 card, as I said before.
Installing the card in the server was easy, and once again it was recognized immediately by the OS, with the same ID as the WiFi card: 03:07.0. It shows up as a VIA Technologies card – a brand I had never heard of before.
This card is supposed to do 100Mbits/s, so I did an iperf3 test once again.
And… there we go! 94 Mbit/s. Not bad at all – it's not gigabit either, but it's much closer to the advertised value than the WiFi card was. As stated, I will keep this card in the server to have another RJ45 port available. I have no real use for it yet, but we never know what the future is made of!
Ending Note
This server is not well equipped by today's standards, but it still does its job very well, and if you're not scared of the terminal, you'll be able to manage it without any issue! It's a shame I couldn't find a PCI graphics card to do some more gaming on this server… but I'll live!
In the next blog post I'll set up a Proxmox Backup Server – which I've never used or tried before – so I can have a server with some storage to back up all the LXC containers and VMs from my main Proxmox server. This will be an interesting one!
As always, thank you very much for reading, feel free to give me any suggestions or remarks! I wish you all a very safe day and see you next time!