Silencing the 100% Arm NAS—while making it FASTER!

Jeff Geerling

Visit to start your 30-day free trial, and the first 200 get 20% off their annual subscription!

The HL15 NAS build using an Arm64 CPU was surprisingly loud—and power-hungry.

Can we improve that? This video will explore making it almost silent, while also reducing power consumption by half. Then we’ll dive deeper into performance and efficiency.

Check out the Ansible playbook I’m using to manage my NAS and set up ZFS and Samba:

Links to other things mentioned in this video (some links are affiliate links):

– Earlier build video (part 1):
– Noctua NF-A12x25 PWM fan:
– Noctua NA-IS1-12 Sx2 Inlet Side Spacers:
– Noctua NA-FH1 8-channel fan hub:
– Techno Tim’s HL15 video:
– Top500 Benchmarking repo:
– THIRDREALITY ZigBee Smart Plug:
– Reed Instruments R8050 Sound Level Meter:
– Home Assistant ApexCharts graphing plugin:

Support me on…

36 Comments

  1. Hi Jeff, can you do a video about reducing the power usage of the Pi 5? Things like undervolting, CPU governors, turning off modules, etc. Mine is drawing 9 W with two USB-connected SSDs.

  2. One thing to note with fans: there are lots of manufacturers out there that make comparable fans at quite competitive price points. Not all of them hit the absolute lowest dBA like Noctua does, but you're very much paying a premium for that last little bit. Thermaltake, Arctic, and Nidec all make really excellent fans that are worth considering. They also come in more palatable colours, too.

  3. Those junk QNAP NASes only chew such small wattage because they're using software RAID – good to see you're using a Broadcom hardware RAID card – remember it has its own CPU on board. QNAP boxes fall over hard if you push them – but at least you get to know mdadm well…

  4. TrueNAS SCALE is good enough for me – no need to buy an Arm system as a NAS/home server. Just choose the motherboard carefully and make sure it can enter package power states C6/C8….

  5. Ummm, so Rocky's idle power usage was insane, but switching to Ubuntu brought that way down? I wish you'd just used Ubuntu from the start; there's no reason to use something less common like Rocky unless you're a fan of it and trying to push it.

  6. If my IMM2 module is to be trusted, my RAM alone uses about 100 W, which pisses me off a bit – but that's with 768 GB of DDR3 LRDIMMs, 24 of them, and the whole server only uses 250–300 W under load, which is okay. Definitely has more power than my old PC that I used to keep running overclocked 24/7.

  7. What I get from this video is that I need to spend another $200 on fans to make the 45Drives case quiet. They should knock that much off the price to make it more appealing.

  8. As you found, SLOG only helps with random writes, and only if they're synchronous; ZFS is copy-on-write, so as long as the transfers are asynchronous and thus the storage layer doesn't have to wait until they're written to disk to return a response to the caller, the performance of random writes basically turns into that of sequential writes (assuming sufficient RAM to buffer the transfers).

    You show 600MB/s for async random writes for just the hard drive array and 243MB/s for synchronous random writes with the SLOG (since they're just SATA) but it would have been interesting to see the random write performance of just the hard drives with sync forced on (which I would assume would be substantially lower than your result with SLOG).

    Obviously this is not the scenario for async writes (which is most fileserver-style transfers like SMB) but sync 4k random writes are a very important metric for databases, hosting VMs on the pool, or in some cases mounted block devices like iSCSI depending on your stack and settings.

    If you can still find them, any of the old Intel Optane 16GB/32GB/64GB M.2 sticks make for great SLOGs as they provide extremely high IOPS and have excellent write endurance, and more predictable performance under load than regular SSDs (especially compared to something like QLC).
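The sync-versus-async distinction this comment describes can be sketched with ordinary file I/O. This is a toy Python timing demo on a local filesystem, not a ZFS or fio benchmark; the block size and count are arbitrary choices for illustration:

```python
import os
import tempfile
import time

BLOCK = b"\0" * 4096   # 4 KiB, the random-write block size discussed above
COUNT = 256            # 1 MiB total per run; kept small so the demo is quick

def write_blocks(extra_flags):
    """Write COUNT blocks to a temp file opened with the given extra flags."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    fd = os.open(path, os.O_WRONLY | extra_flags)
    start = time.perf_counter()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
    os.close(fd)
    elapsed = time.perf_counter() - start
    size = os.path.getsize(path)
    os.unlink(path)
    return elapsed, size

# Async-style: the kernel buffers the writes and flushes them later, so the
# caller gets control back immediately. Sync-style: O_SYNC forces each write
# to stable storage before returning, which is the latency a SLOG absorbs.
buffered_time, buffered_size = write_blocks(0)
sync_time, sync_size = write_blocks(os.O_SYNC)

print(f"buffered: {buffered_time:.4f}s, O_SYNC: {sync_time:.4f}s")
```

On most systems the O_SYNC run is dramatically slower, which is exactly the gap a fast, high-endurance SLOG device (like the Optane sticks mentioned) narrows for synchronous workloads.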

  9. It's too bad there isn't a native build of Proxmox for Arm; otherwise this would be a killer platform.

    I would've been very interested to see how GPU passthrough worked on a system like this.

    My current dual Xeon E5-2697A v4 (16-core/32-thread) Proxmox server, with 256 GB of DDR4-2400 ECC Reg. RAM and 36 HDDs, consumes around 670 W.

    Going by your numbers, the theoretical idle power consumption (if I were able to replace my dual Xeon Proxmox server with something like this, whilst keeping the 36 HDDs) would go from ~670 W to ~273 W.

    But also like you said in terms of application support (or lack thereof for the aarch64 architecture) — that's a bummer.
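The savings this commenter is estimating come down to simple arithmetic; a quick sketch, where the €0.30/kWh rate is an illustrative mid-range figure and not from the video:

```python
# Annualized savings for the commenter's hypothetical swap: idle power
# dropping from ~670 W (dual Xeon + 36 HDDs) to ~273 W.
old_watts = 670
new_watts = 273
hours_per_year = 24 * 365  # 8760

saved_kwh = (old_watts - new_watts) * hours_per_year / 1000
print(f"~{saved_kwh:.0f} kWh saved per year")

# At an assumed €0.30/kWh (a mid-range European consumer rate):
print(f"~€{saved_kwh * 0.30:.0f} saved per year")
```

That works out to roughly 3,500 kWh a year, which is why idle draw matters far more than peak draw for always-on hardware.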

  10. I'll be interested to see how you get on with these fans. I've had a 100% failure rate with that particular model, and I've switched back to their previous models because of it.

  11. How to silence your 1U server: put it in your basement, but at least 3" off the ground and away from walls, because of condensation and flooding and all that. That's what I do, although I only have 2U servers.

  12. I put headphones on to hear the difference between the fans and, well, I definitely didn't need to. The difference is crazy.

  13. Ah, Noctua. Even their cheap fans are fantastic. I replaced my AIO fans with their low-end fans and it was a straight upgrade in terms of cooling and noise.

  14. For speeding things up in that case, it's better to use big SSDs as a metadata special vdev, since the special vdev can actually hold all the small files too. That leaves the HDDs for big sequential transfers and the SSDs for the more random stuff.
    BTW, ZFS will kill your QVO drives very fast.
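The routing rule behind a metadata special vdev can be sketched as a toy model. The 64K threshold below is a hypothetical setting (corresponding to ZFS's `special_small_blocks` property), and real allocation also depends on remaining free space on the special vdev:

```python
# Toy model of ZFS allocation-class routing with a metadata "special" vdev:
# metadata always lands on the special vdev, and data blocks at or below the
# special_small_blocks threshold do too. Threshold value is illustrative.
SPECIAL_SMALL_BLOCKS = 64 * 1024  # e.g. `zfs set special_small_blocks=64K pool/fs`

def target_vdev(block_size: int, is_metadata: bool) -> str:
    """Return which vdev class a block would be allocated from."""
    if is_metadata or block_size <= SPECIAL_SMALL_BLOCKS:
        return "special (SSD)"
    return "normal (HDD)"

print(target_vdev(4096, is_metadata=True))     # metadata goes to SSD
print(target_vdev(16 * 1024, False))           # small file block goes to SSD
print(target_vdev(1024 * 1024, False))         # large sequential block stays on HDD
```

This is why the commenter's split works: small, random I/O naturally concentrates on the SSDs while bulk sequential data stays on the spinning disks.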

  15. Quick note: your energy prices for Europe are actually what the providers pay on the market that day; consumers pay between €0.20 and €0.40 per kWh.
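At those consumer rates, the annual running cost of an always-on box is easy to estimate. A quick sketch, where the 100 W average draw is a hypothetical figure for illustration:

```python
# Rough annual running cost of a NAS at European consumer electricity rates.
# The 100 W average draw is an assumed figure, not a measurement from the video.
watts = 100
kwh_per_year = watts * 24 * 365 / 1000  # 876 kWh

for price in (0.20, 0.40):  # the €/kWh consumer range quoted in the comment
    print(f"€{price:.2f}/kWh -> €{kwh_per_year * price:.2f}/year")
```

The spread is wide: the same machine costs roughly twice as much to run at the top of that range, which is why halving idle power matters so much more in Europe than at wholesale rates.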
