
IBM System p5 550Q



“clabretro”

Taking a look at an IBM System p5 550Q with two quad-core Power5+ processors. We’ll try to get this thing fired up and connected to an HMC, or Hardware Management Console, for remote management and LPAR creation.

Check me out on Patreon:

Rack stuff
StarTech…


46 Comments

  1. Can't say I've used HSL but I have hooked SGI Altix bricks together with NumaLink. Now that is a chonker of a cable.
    Building one big system with many points of failure. What could go wrong? They didn't even have redundant PSUs.

  2. Don’t tell me anything about the 110VAC system 😅
    You better install it in conduits too 🤣

    Mate, well done. This is the kind of issue we all should have.

  3. In the next 40 years, when we have an entire AI just to manage the hardware, it will be funny to see the IT guy arguing with the machine about documentation to convince the beast to do something. 😂

  4. I worked for Sequent Computer Systems as a level 3 support engineer (doing mostly kernel crash analysis of Sequent's DYNIX/3 and DYNIX/ptx Unix-like OSes and Linux) when IBM bought that company in the early 2000s, which meant we ended up with a bunch of gear like the p5 servers you discuss in this video (albeit x86_64 and similar rather than Power5). I was impressed by the quality of those servers, but even at the time didn't really understand how customers justified the price. When I went to work at Google on their data center automation team in 2007 and started visiting Google's data centers, it was immediately obvious that using cheap hardware with no redundancy and dealing with server failures in software makes a lot more economic sense.

  5. Those cables you have there are for the 5791, 5794, 7040, D11, and D20 PCI-X expansion units, which add more PCI-X slots attached directly to the system planar. As implied by the name, the D20 also contains DISK BAYS. Does it contain 20 disk bays? No! That would make it easy. It contains 12 across a split backplane, giving you 2 sets of 6. And no, not because of SCSI ID limitations; those didn't apply. The D11 contains NO disk bays and is basically specific to the 575 and above. Because IBM. And the PCIHB had other limits; the 520 had a maximum of one unit, which required two cables.
    The 550 (8204) does NOT have these PCIHB ports; only the 9133 does. The 550 tried to work around this by having a Virtual Ethernet Adapter (it goes above the HMC) with either 2 or 4 gigabit ports OR 1 or 2 10-gigabit adapters as T1-4. Because these supported both VIOS and HMC virtual switches! Which, you guessed it, are virtual in-chassis switches. Though this early setup had some pretty significant limitations and problems. And the cards themselves SUCKED. Those are the expansion slots you couldn't ID, though. It only takes the VEA.
    Also, it's very important to note: the 9133-55A does not denote quad core. It denotes that it is a POWER5+ system, which is a BIG difference. The POWER5 is a 130nm process, the POWER5+ is a 90nm process; they both have the same number of dies per DCM or [M,Q]CM with the same number of cores and SMT modes. However, the efficiency gains from 90nm allowed them to fit the QCM (quad chip module) into the chassis' thermal envelope. But a 9133-55A can have 1-2 either dual- or quad-core modules, giving 2 to 8 CPUs. And as you can see, claiming it's a Q when they have the same chassis number is as simple as swapping the faceplate.

    But honestly? The 8204 550 sucked. It was awful. It's a very different machine from the 9133, and the 9133 wasn't that great either.

  6. I suspected as much, and had to ask my wife, who is the P Series expert here at home: the NICs can be divided between LPARs just like in the virtualisation platforms that, eh, learnt a lot from IBM. They're connected to VIO servers, which are special LPARs that only perform resource sharing for other LPARs. So you don't need a NIC per LPAR. What you would typically do is build a LACP pair (or quad, as it were) into your switching environment and run that aggregate interface in 802.1q trunk mode, so as to be able to assign different networks to different LPARs.

    RIO lets you add more I/O drawers for even more PCI slots, not link separate machines. It has been around a long time; I installed my first multi-drawer machine, an M80, back in 2000: one CEC (the CPU and RAM drawer) and one PIO (Primary I/O, which looks like a 520 in its own right).

  7. At 4:02+, on the little model, the white sticker has the text "This device must be attached to a grounded outlet" in Swedish, Norwegian and Finnish. Also, kilograms is outside of the parentheses. Was this imported from Scandinavia at some point? Just curious why it has Scandinavian label stickers on IBM hardware "over the pond", so to speak 😅

  8. This channel is beyond awesome, reminds me of simpler times when I tinkered away on old SGI, DEC and IBM gear. I've always had a soft spot for those old Unix boxes ever since I started out in my IT job in 2001. Sadly, the business world had already unified on boring x86/Wintel boxes back then. But you could score some sweet loot, since all the obsolete Unix stuff was being thrown out by the wagon load. Those were the days 😢 Thanks for fueling my nostalgia!

  9. That HSL link is for stacking multiple Power machines and sharing resources. I'm not that familiar with p5, but the protocol over those wires should be InfiniBand, and it should allow sharing of memory & compute, or at least allow moving a live LPAR from one chassis to another.

  10. Consider installing a NEMA L6-30 twist-lock outlet and getting yourself a 1U horizontal PDU that has C13 outlets, so you can use the very common / cheap C13-C14 power cords.

  11. It might not have mattered to have the model on the front in a rack environment.

    Whenever I'd call IBM for hardware support, they always started by asking for the Machine Type followed by the Serial Number, and then I'd tell them I already had a DSA log to upload for them to troubleshoot with.

  12. Get one of those tiny serial null-modem adapters that come in the same style as the gender changer you are using. They are almost always orange around the housing, so it's easy to spot among other similar adapters, and you just try with and without the null-modem adapter to see if the serial connection works (see the serial probe sketch after the comments).

  13. I think the reason you could not set the IP address on the HMC port is that it is configured in dynamic mode, so it will always get an IP address from a DHCP server (like 192.168.1.9). Try setting it to manual mode or something like that (there's a small reachability check sketched after the comments).

  14. I have no idea why but your videos are beyond entertaining and interesting for me, never in my life did I think I would actually enjoy watching stuff about networking and home networking but here I am having almost watched your entire backlog at this point because your videos are truly very fascinating! Thank you for doing what you do.

  15. I cringe every time you touch one of these dense CPU connectors. If you ever do that with an SGI machine, you will just have bricked the board. You never touch any connectors 🙂 But otherwise I love your videos. Especially the Sun stuff.

  16. RIO, or remote I/O, connects to what is known as an I/O drawer, essentially a similarly shaped/sized/coloured box with just a bunch of PCIe slots in it. Your p5 can handle many more peripherals than the 4 slots you have available in the chassis.

    The other use for the HSL is proprietary switching technology, so you can run HPC jobs on P5 clusters. You can also run GPFS on said cluster, so you can share storage volumes over the cluster nodes.

    If memory serves me right, the 590 was the biggest you could get, the size of a full rack and pretty loud.

    Also, IBM gear is very picky about serial connections. You will probably need a proper serial cable, and you'll want to set it to 9600 baud (there's a serial probe sketch after the comments).

  17. How do you lift those large machines without getting injured? You are amazing at what you do. I enjoy watching your videos so much. I am a tech person myself, but I have never dealt with server equipment at all.

  18. Speaking from personal experience, mainframes aren't as dead as many people think. Sure, it sounds like the 80s called and wanted their computers back… but IBM still builds them around the globe; in Europe they are built in France, as far as I was told. They are used mostly by insurance companies and banks. I have no direct z/OS experience, but my former coworkers from when I was in the financial sector do. And boy, is licensing on them a pain and hella expensive… I was just a user of a z/OS application.
    Funny thing: If you get one on lease, IBM even gives you a hardhat with the machine.

    If you wanna see some stuff about it, LTT was in the IBM factory for them some years ago, and there are some pretty awesome videos on YouTube by other content creators. z/OS can run on a Raspberry Pi if you wanna dip your toes into these waters.

    Edit: Just a train of thought, but wouldn't a beefy UPS help with the p5's need for 220V?

  19. The front LCD probably isn't readable because its contrast is off; sometimes there is an adjustment for that in software. If not, then it's probably a bad pot/resistor inside next to the LCD; those 2-by-X character displays usually had a pin dedicated to controlling contrast.
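
A quick aside on the serial advice in comments 12 and 16: below is a minimal sketch, not anything shown in the video, of how you might probe a console connection at the 9600 8N1 settings mentioned above, using Python with pyserial. The device path /dev/ttyUSB0 is an assumption (adjust it for your adapter), and the idea is simply to run it once with and once without the null-modem adapter in line to see which way any output appears.

```python
# Minimal sketch (assumptions noted above): open a USB serial adapter at
# 9600 8N1 and print whatever the serial console sends for a few seconds.
import sys
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"  # hypothetical device path; on Windows use e.g. "COM3"

def probe(port: str = PORT, baud: int = 9600, seconds: int = 10) -> None:
    """Open the port at 9600 8N1 and echo anything received for a short while."""
    with serial.Serial(
        port,
        baudrate=baud,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=1,        # 1 s read timeout keeps the loop responsive
    ) as tty:
        tty.write(b"\r")  # a bare carriage return often wakes a console menu
        for _ in range(seconds):
            data = tty.read(256)
            if data:
                sys.stdout.write(data.decode("ascii", errors="replace"))
                sys.stdout.flush()

if __name__ == "__main__":
    probe()
```

If nothing shows up either way, the cable or pinout is the next suspect rather than the baud rate.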
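
And a footnote to comment 13: once the HMC port has an address, whether the DHCP-assigned 192.168.1.9 or one set manually, a plain TCP reachability check is an easy way to confirm the service processor is answering before trying a browser. This is a generic sketch, not anything HMC-specific; the address and the 443/80 ports (the usual HTTPS/HTTP web ports) are assumptions.

```python
# Generic sketch: check whether anything answers on the address assigned to
# the service processor's HMC port. Address and ports are placeholders.
import socket

CANDIDATE = "192.168.1.9"  # the DHCP-assigned address mentioned in comment 13
PORTS = (443, 80)          # usual HTTPS/HTTP ports for a web management interface

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in PORTS:
        state = "open" if reachable(CANDIDATE, port) else "closed/filtered"
        print(f"{CANDIDATE}:{port} -> {state}")
```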
