Is it time for ALL NVME in your HOMELAB? Ugreen NVMe NAS
“apalrd’s adventures”
Today I’m looking at the Ugreen DXP480T Plus, an ALL NVME NAS!
Key Specs:
– 4x M.2 2280 NVMe drive bays with cooling
– 10GbE via Aquantia/Marvell NIC
– Intel Core i5-1235U processor with 8GB DDR5, expandable to 64GB
– 2x Thunderbolt expansion ports for high-speed IO or networking
Ugreen NASync…
It should have at least 8 NVMe disks, preferably 16, even if each only gets 1x PCIe 4.0. BTW, how cool would it be to have USB 4 network switches, especially considering USB4 v2 (80Gb/s) is a thing.
Man, $1,000 for this little device. The cost is a bit too high for me. Why does this device have such a high price tag?
$1k plus all the extra M.2 drives? No way!
For the money I'd get the QNAP TBS-h574TX-i3-12G-US instead. 5 bays, and it can take E1.S, which opens up a world of options for SSDs even with that unit's PCIe limitations. It is nice that the RAM is upgradeable in this unit though. What we really need is a NAS maker to use something like an Epyc Siena or ARM chip in one of these: fairly low power but gobs of PCIe lanes. None of these consumer chips have enough PCIe lanes to really do all-NVMe NASes justice. Of course that would cost a lot more though.
Great video but completely unrealistic from a price perspective. 4TB drives plus the unit is $2k. Not a homelabber setup, especially with 3 of these as was suggested.
If you are testing a lot of devices like this it'd be nice if you got 2 sticks of 48GB RAM to test compatibility
"I get plenty of people in the comment section saying oh that's so expensive you could do used Enterprise for cheaper"
Bro, you could build a much higher-performing and more flexible system using consumer parts for the price of this. The $999 price tag is silly and no one should buy this… the early-bird pricing of 50ish% off was fine, but the full retail price is absurd. No thanks!
that power supply is gonna fail – too small to dissipate 140w – be careful of fire.
i want a big ass rack mounted server but my wife would not let me :/
When will it be available in EU? Please provide link.
Great review as usual but allow me to whine for a sec.
– No SFP+
– No ECC
– Not willing to spend 1k on this.
Can you load OMV or TrueNAS on this?
12:30 "SCSI's more performant than VirtIO" — it is? I was under the impression VirtIO was more performant because it was "native" to virtualization, as opposed to having to fake a bunch of SCSI protocol stuff for the sake of standards compliance.
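Worth noting for context: in Proxmox, the "SCSI" bus type usually sits on a VirtIO SCSI controller, so it's still VirtIO transport underneath, just presenting the SCSI command set to the guest. A minimal sketch of the commonly recommended setup (VM ID 100 and the storage/disk names are placeholders):

```shell
# Select the VirtIO SCSI single controller (one I/O thread per disk):
qm set 100 --scsihw virtio-scsi-single

# Attach the disk as SCSI with an iothread; this is VirtIO transport
# underneath, speaking SCSI on top:
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1
```

So the comparison in the video is less "SCSI vs VirtIO" and more "SCSI-over-VirtIO vs the older virtio-blk device".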
Yes please
Looks pretty nice… wish I could get one for review. Also, it would be nice if they used an SFP+ cage instead of the plain 10Gbps copper port.
I really want to love these SSD NAS units… but I think I'll stick with my own 'built' system consisting of a mini PC with 1 NVMe and 1 SATA SSD and 64GB of RAM. Sure, there's no RAID running on it, but, as bandwidth permits, it stays in sync with a spinning-disk server back at my main home office… I get my portability, speed, and a powerhouse of a mini PC on top of it all that runs several virtual machines.
Ah, power issues. Does it support a UPS, and if so, which ones? You can't afford data corruption due to power cuts. I bought the TerraMaster F2-424 and am about to put TrueNAS on it. I'm going to try putting a low-profile right-angle USB-A cable in the onboard port to a slim internal NVMe case, then boot from an external USB drive to install TrueNAS. I've upgraded to 32GB to run a few containers, but as it's an N95 I'm not going to be able to run many on it. I mainly want it for my video library, and for backups from Veeam Community Edition via iSCSI LUNs.
This nas fucks.
Looking like an interesting little travel NAS.
Would have been nice to see some benchmarks tho.
And a tip for the next install: Don't cover the NAND with thermal pads, just the controller.
NAND actually likes being a bit toasty.
It does have dual USB4, but other than that this is a clear case where you should really just build your own serious NAS for half the price and double the performance. Going all SSD/NVMe is a great way to go, but having a spinning-rust RAID is always nice as well. Prices for these will drop a bunch.
Why are you not a huge Docker fan? Maybe it's because I'm also a developer, but as far as I'm concerned Docker is a godsend for simplifying the distribution of all the multi-technology stacks used in modern software and web development. Its networking isn't that complicated once you understand that it's trying to enforce security with an explicitly-defined-access-only model. I've certainly never had an issue with it, although I guess that comes with being a developer who works not only on how my containers communicate with the outside world, but also on how different images and containers interact with each other, which probably helps me navigate all that quite a bit. It's definitely not something that's going to be easily used or fully understood through a GUI most of the time; it's much easier to just hack away at the config files manually. It's a beautifully elegant solution that simplifies development and deployment and makes iterating on versions of software much easier to manage. So idk… I love it, because I hack away at and mod lots of existing open source tools, and make a lot of my own, for things like this.
I hear about more and more people making 3-node clusters out of mini-pcs with Thunderbolt as the backend. I'm hoping we can see some optimizations to make the driver a bit more performant. I'd like to build my own sometime in the near future.
Docker networking… that's what has put me off Docker… and all the image layers and whatnot.
Instead I use containers on Proxmox.
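For anyone wanting to try the same, an LXC container on Proxmox can be sketched roughly like this (the VMID, template name, storage names, and sizes are placeholders — adjust to your setup):

```shell
# Create an unprivileged Debian container: 2 cores, 1GB RAM,
# 8GB root disk, DHCP on the default bridge.
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname app-ct \
  --cores 2 --memory 1024 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1

pct start 200
```

Networking-wise the container just looks like another machine on the bridge, which is a big part of the appeal over Docker's NAT-by-default model.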
My NAS has NVMe drives as the tip-of-the-spear storage (fast, critical storage for low-latency operations like VM/container OS images, data caching, database storage), SATA SSDs for less critical storage (often-used data and warm archives), and rust drives for long-term cold archive (straight up "never touch these" type files and first-tier backups). The NVMe and SSD disks are on 2x 10G NICs and the rust drives are on a 2.5G NIC, all running on top of TrueNAS, on top of a 48-core Epyc CPU and 128GB of RAM. I used the Epyc for the large RAM capacity, the CPU cores for small low-priority VMs and containers, the many PCIe lanes, and the fast exotic storage buses.
Any way you could compare this to something like the Asustor Flashstor 12 Pro FS6712X? $200 USD less for triple the bays, but no Thunderbolt. Curious what your thoughts are on that.
Try running with `iperf` rather than `iperf3`.
I have found that `iperf3` yields slower results than `iperf`. Not really sure why.
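One likely reason, for what it's worth: classic iperf (v2) uses one thread per stream, while older iperf3 versions drive every stream of a test from a single thread, so at 10GbE one CPU core can become the bottleneck. A sketch of both (the server IP is a placeholder):

```shell
# iperf3: -P 4 opens four parallel TCP streams, but older iperf3
# releases still run them all in one thread on each end.
iperf3 -s                      # on the NAS
iperf3 -c 192.168.1.50 -P 4    # on the client

# Classic iperf (v2): one thread per stream, so -P 4 actually
# spreads the load across cores.
iperf -s                       # on the NAS
iperf -c 192.168.1.50 -P 4     # on the client
```

Newer iperf3 releases have added multi-threading, so results may converge on up-to-date installs.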
I can only imagine what you can back up on 4TB. Considering how expensive SSDs are, there is no market for this stuff.
I still find it sad that nobody makes a simple NAS appliance that does proper highly available NFS. For now I'm scraping by with a Debian VM on top of Proxmox with replicated ZFS local storage, but that's sadly not actual HA — one VM outage easily kills heavily written SQLite DBs on it.
Only 4 drive bays?
At this price, just get a PCIe expansion card with 4 drive bays; it will only be $50.
Protect what you love… put it "safely" on this Chinese NAS 😀 Love your videos though man 🙂
There's actually a way to test a Thunderbolt drive if all you have is a MacBook. You can put your MacBook into Target Disk Mode, so it will act like a Thunderbolt-connected storage drive. Instructions are on the web.
Apple includes that as a backup data-recovery method, in case you can't boot your Mac.
Please use `ip -c a`; it colorizes and highlights the addresses, which makes them much easier for your viewers to read.
For that price tag, how about something like the Minisforum MS-01?
$1000, no disks included, and max individual disk size is 4TB…. you've gotta be kidding me.
I would love a video about your distaste for Docker (and possible alternatives. Do you just run every app bare-metal?)
Btw – docker networking is less of a disaster if you use MacVLAN network type. It lets the container look like a VM to the network, complete with its own MAC address and IP.
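A macvlan network can be sketched like this (the subnet, gateway, and parent NIC are assumptions — adjust to your LAN):

```shell
# Create a macvlan network bound to the host's physical NIC:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_net

# The container now appears on the LAN with its own MAC and IP:
docker run -d --network lan_net --ip 192.168.1.60 --name web nginx
```

One caveat: by default the host itself can't reach its own macvlan containers directly; other machines on the LAN can, which is usually what matters for a NAS.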
I stopped using consumer drives over a decade ago for critical infrastructure, ESPECIALLY NVMe drives. My U.2 drives are hot; they need air cooling, though not much, even though they pull up to 20-30W (8x PM1733 U.3 15.36TB) depending on firmware, which is impossible to find for Samsung enterprise drives. Next time I will go with Intel, since you can actually find the firmware. SR-IOV for NVMe? Yes please. Different firmware for different load types? Yes please. Slow down… fuck no.
How could I get Proxmox to see those NVMe drives as Ceph OSDs?
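Not the author, but assuming Ceph is already installed and initialized on the cluster (`pveceph install`, `pveceph init`, plus monitors), each clean NVMe drive can be turned into an OSD roughly like this (device names are placeholders — check with `lsblk` first):

```shell
# Wipe any old partition table/signatures so Ceph will accept the disk:
ceph-volume lvm zap /dev/nvme0n1 --destroy

# Create an OSD on the NVMe drive (repeat per drive, on each node):
pveceph osd create /dev/nvme0n1
```

After that the OSDs should show up under Ceph → OSD in the Proxmox GUI.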
Such an underrated channel