Blazingly FAST 25 gigabit Over Network! – Synology 3400d Review ft. Kioxia PM7 SAS SSDs
“Level1Techs”
Grab hold of your wallets, and maybe lock 'em down, because you'll be wanting to spend after watching this video!
********************************
Check us out online at the following places!
*IMPORTANT* Any email lacking "level1techs.com" should be ignored and…
I bought a SAS HDD by accident, because it was such a good price. But I'm looking to put it into an externally powered enclosure to back up my PC periodically, as a do-it-yourself external HDD backup product.
However, I cannot find an enclosure under $100 that accepts the SAS interface and outputs to SATA or USB 3.0. Any suggestions?
Enterprise grade … LOOOL. Never buy syno for any important enterprise level stuff. Home toy – with a hefty price tag. Best to keep the fingers away from syno.
But Can It Run Crysis?
Just run the patch that adds the drives to the supported list; that makes them all good again.
Most of my network stuff is small and private, almost cozy. The equipment is still interesting just because of how far back in generations it's possible to go before the upgrades no longer help. Onboard 1GbE is fine for most of my HDD jobs but dual 10GbE SFP hits in all the right places and is a wonderful security against bad modems that kick everything off the network during an Internet outage. Nothing I have is new or durable enough to invest in anything more. Maybe the day I retire this Ryzen box.
13:40 another note about SMB MultiChannel – if you enable LACP/Port Aggregation on your NICs, SMB MultiChannel will be unavailable. Many people will opt for Aggregation over MultiChannel
As always, enjoyed the video! A little sad the 3020 couldn't hold its own against it; I love those boxes. I guess ultimately it's held back by the lower-SKU CPU and limited memory bandwidth, to differentiate it from its peers and lower the price point.
Oh OK, how about applications?
Synology hardware is very slow for the price, but the software is an insane bargain. Compare the pricing of SharePoint, Resilio, or MinIO against a file-sharing tool similar to Synology Drive (which is free with even a $500 Synology) and you'll realize that even $10k for a server means you're getting the hardware essentially for free compared to competing software options.
And that's just Synology Drive. Hyper Backup, their VM backup software, LDAP server, etc. are so user-friendly and reliable that if you are a small company with light-to-no IT, the Synology hardware is a no-brainer.
Wendell is sure religious about Windows. Took him 17 seconds to start complaining. Mellanox ConnectX-4 25GbE cards work great in all my Windows machines and my Mac Pro.
5:00 This topic of NVMe dual-pathing got me thinking about the Dell PowerStore T line, which does Active/Active NVMe storage. I wonder how they are communicating with all the NVMe drives from both nodes.
Would a ConnectX-4 work with 4 PCIe lanes? (I only need one port to work at full speed.)
Casually dropping 2-3 giga on the table for all to see.
I actually wanted to grab one of these for some FC tests and labs with some Cisco MDS switches, maybe try some FCoE, etc., just for cert and LAN experience. The problem is that the total noise, heat, and power use for 4 switches (2 Nexus, 2 MDS) starts to get insane for a home environment.
Either way, a lovely machine! Even having a nice HA LDAP box is totally worth it. But the noise… I can't wait until these things advance to the point of 30 dBA or less!
Thanks for the review!
That 25Gb switch shortage on the second-hand market seems to be a very recent thing. I've been upgrading my network infrastructure for the past year, and you could find a bunch of 25Gb switches around the 800 euro/USD mark (I forget which I checked at the time), but for the past few months all of those switches have been bought up and there has been a lack of new switches coming in at that price point. Let's hope some large datacentre somewhere upgrades to 56Gb, as that seems to slowly be becoming a thing.
What is this meme at 9:52 that flashes on by lol
You should look at the QNAP ES1686dc system.
It's a little bit more expensive ($14,999 on B&H Video, but it also comes with 64 GB of RAM, and you can use SATA drives with QNAP), whereas the Synology is $9,999 (also on B&H Video) but only comes with 16 GB of RAM, and you can't use SATA drives with it.
But both are dual-controller, and the QNAP system has 4x 10 GbE SFP+ ports.
The Synology also has only one PCIe 3.0 x8 slot, whilst the QNAP has two PCIe 3.0 x8 slots (2U vs. 3U respectively).
Such a pity that the dual controllers are so expensive.
It's different from other NASes?
(plugs in 10/100 PCI card) "What's that metal plug thing on his ethernet cord?"
9:57 screen recording scrolling flicker. I'd bet money you're on Wayland.
It was nice to hear you include some information about Hyper-V and iSCSI.
TLDR: This €20,000 server gives you 25 gigabit transfer speeds over the network.
Products: 1x Synology SA3400D, 12x Kioxia PM7-V SAS SSDs
You could run McAfee on your NAS… don't do that.
Dat Nas
Spinning rust, huh? Well, I've seen more rust on SSD than HDD internals.
Man, I was already pretty happy with the old RS3617xs+ with 10Gb links and iSCSI MPIO… somehow I missed Synology making dual-controller units.
So I get that virtualizing a router is "forbidden," but I have not only that but also a "forbidden" NAS on my Proxmox box that I've had set up for quite some time, all on AMD on an Aorus X570 board. I also managed to get traffic shaping and failover set up on pfSense CE (virtually), as well as a Shinobi server running about ten 4K cameras.
Am I doing it wrong?
I'd rather have an iXsystems X10 or X20 for the price/spec ratio of this thing.
Is NAS singular and plural like deer, or is it NASs or NAS's…
politics is the best form of Planned Non-Parenthood.
Good old Fischer Price!
For the scenario where you don't have the built-in dual-controller High Availability – don't you still need two identical Synology servers? Their documentation says this:
"Implementation of Synology High Availability requires two identical Synology servers to act as active and passive servers."
Meh, I can get my MT27500 ConnectX-3 or NetXtreme II BCM57800 (both 10 gigabit) and a file manager to show 1.1 GB/s… over 30 m of copper! Sure, it started at 3… what was the limit? HDD write?
Some iperf tests would have been nice. Anyhoo, always nice to see a Level1Techs vid…
…but ACTUAL measured performance would have been welcome when we are meant to be focused on the 25 part of it all…
Edit: NB… not one of the 14 thumbs-down folks… can't wait for "links" with friends later.
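(For anyone who does want to measure this themselves: real numbers should come from iperf3 itself, e.g. `iperf3 -s` on the NAS side and `iperf3 -c <host> -P 4` from a client. As a rough illustration of what such a test actually measures, here is a minimal Python sketch of a one-way TCP throughput probe. It runs both ends on loopback purely for demonstration, so the figure it prints reflects the local machine, not any NIC; the port, chunk size, and duration are arbitrary choices, not anything from the video.)

```python
import socket
import threading
import time

CHUNK = 1 << 16      # 64 KiB per send
DURATION = 1.0       # seconds to keep transmitting

def server(ready, result):
    # Receive-and-count sink, standing in for the remote iperf3 server.
    with socket.create_server(("127.0.0.1", 0)) as srv:
        result["port"] = srv.getsockname()[1]  # OS-assigned free port
        ready.set()
        conn, _ = srv.accept()
        total = 0
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        result["bytes"] = total

ready = threading.Event()
result = {}
t = threading.Thread(target=server, args=(ready, result))
t.start()
ready.wait()

# Client side: blast zero-filled buffers for DURATION seconds.
payload = b"\0" * CHUNK
with socket.create_connection(("127.0.0.1", result["port"])) as sock:
    start = time.monotonic()
    sent = 0
    while time.monotonic() - start < DURATION:
        sock.sendall(payload)
        sent += len(payload)
    elapsed = time.monotonic() - start

t.join()
gbps = result["bytes"] * 8 / elapsed / 1e9
print(f"{gbps:.2f} Gbit/s over loopback")
```

Unlike iperf3 this measures a single TCP stream, which is exactly why multi-stream tests (`-P`) matter when chasing 25Gb line rate: one flow is often CPU- or window-limited well below the link speed.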
A bit off-topic: is there any way to run a Windows VDI on a Proxmox server, preferably with free solutions?
The 100GbE Intel Ethernet Network Adapter E810 CQDA2T may be worth a look.
I feel like we are due for a major change in computing. So much of what was originally designed for home computers 20-25 years ago can still be found on current computers. A lot of those standards have evolved, and we have even added new ones, but a lot of what we know as a computer is still found on modern machines. ATX as the motherboard layout was set in 1995. Gigabit Ethernet was 1999. SATA was 2000, and even SAS was 2004. PCIe was 2003. DDR memory was 2000. I feel it is time to introduce some breaking changes and update what we know as computers.
I would love to see a change where the memory was moved on-die with the CPU and treated as an add-in card on the motherboard. This would inherently change the ATX layout entirely. If you really wanted expandable memory, I would use CAMM as the standard, and you could even slot it on the back of the PCB now that it is vertical on the motherboard. Along with this we would adopt ATX12VO for our power delivery. Gigabit Ethernet really needs to go: 40GBASE-T was finalized 8 years ago and should be the standard on new premium devices, and 10G should have been ubiquitous years ago. Really, though, we should be moving to fiber connections like they do in datacenters. SATA should be removed, and SAS honestly should be on its way out; I think NVMe has shown that it really is the better protocol in most scenarios. Finally, PCIe: it is fine as the protocol, though we really should update the connector to allow for greater power delivery.
Now that we have upgraded the hardware, we could really simplify a lot in the OS. Remove support for legacy systems in the UEFI; we don't need to be backwards compatible with devices 20 years old. I shouldn't have to go through the UEFI and turn on Above 4G Decoding anymore. OSes have had 64-bit support since Windows XP, so why are some of these things still turned off in the UEFI? Next, I would require NVMe for the OS boot drive. Given how fast NVMe is, you could completely change how the OS works, especially in regards to power states like sleep, shutdown, hibernation, etc. It is fast enough that for background tasks, like an update while the computer is asleep, you wouldn't even need to turn on the RAM; all of that could be done using the NVMe as the memory. Other software things, like heavy use of virtualization for security as well as enabling multiple users on a single home server, could really be a thing if faster networking were common. We really need to make some changes on PCs that finally break backwards compatibility in real ways, other than adding a basic TPM requirement. We need a real step forward here that we haven't had since XP.
Thx 4 being there!
Why would people use software that doesn't respect their freedom of speech? That's fucking stupid.
Played with a Synology DS920 that a consultant spec'd but never used… It kept reaching out to Turkey, so I unplugged it from my network.
I just upgraded to a bonded 2x10Gbps Intel controller between my workstation and my server. I've tried hard with a lot of configurations, but I can't get anything over 16 Gbps between my systems (RAM disk to RAM disk). I'm using 5000-series AMD CPUs on each side, so it's modern hardware. Who knows? I couldn't saturate 16 Gbps if I wanted to either, so meh.
And how much is this? $10k. Much better to buy a Dell rackmount and put NVMe in it; you could probably afford 100 gig. Synology software is good though!
1gbit here
Who would pay $10k for that? What a joke.
Hey Wendell, you said you wanted to post a video about the WRX90E / North XL build… I'm still waiting for it.