
Mainframe Myths Debunked in 5 Minutes



“IBM Technology”

Learn industry-critical computing skills →
IBM Mainframes: Meet the New IBM Z →

What do you think of when you hear the word “mainframe”? Slow? Expensive? Outdated?
In this video, Rosalind Radcliffe blows up those myths and explains how…


 



32 Comments

  1. Key blockers I've observed in real-world mainframe use are:
    – lack of test, dev, and staging environments for staff
    – pre-committed over-capacity per app for isolation rather than shared resources
    – 3/4 of app functionality already moved off to SPARC and x86 to save MIPS and license costs, often with poor security and network integration
    – legacy, close-to-retirement staff who have given up trying to get basic upgrades, let alone lobby for new workloads
    – zero access to the mainframe except through arcane processes
    – insane operating approaches, like a weekly 30-minute downtime on an IBM mainframe to reboot and thus clear memory leaks; an approach this enterprise arrived at among their experts, their GSI service team, and IBM as a temporary fix, seven years before I observed it still occurring
    – very poor understanding of which paid utilities, software packages, etc. are still being used and for what, making migrations and consolidation extra confusing.

    However, I am constantly impressed by the technology IBM pumps out, both hardware and software, but they need to stop pretending that it makes sense for anyone but a few huge managed service providers to operate these. Even global-sized finance companies hate their mainframes but accept them as cheaper to run than to replace in non-growth businesses. So they kick the can down the road.

  2. Between 1997 and 2016 I used to work at Miami Dade College. The IT department had an IBM mainframe running the OS/390 (MVS) operating system. I was one of the 25 developers maintaining applications with thousands of lines of code written in JCL, COBOL, and Natural. The system also had night-shift operators running scheduled jobs for the users. Online, it served 8 campuses, 36,000 students, and 4,000 employees. The system ran smoothly until everything was migrated to the PeopleSoft ERP running on PCs.

  3. Stupid, and I mean that with all genuine respect. The main advantage of the mainframe ideology was a team of people who ensured that changes to the database, access, programs, execution, etc. were controlled; that was a guard against open source being used as a dumb corner-cut by managers, the kind that has ended up bringing businesses to bankruptcy. They pee on the tree at the expense of the existence of the corporation.

    The mainframe is more than a method; it is an ideology of doing things professionally and properly. If you have to demean yourselves to chase the checklists of PC fools, you don't even understand your own benefits properly.

  4. What people think of as "cloud" is still so often incorrect. It isn't containers or virtualisation. Cloud is the use of commodity application services provided over the network with a consumption-based pricing model. Running your own kit and running containers is no more cloud than using a cPanel host in 2005.

  5. I appreciate the effort, but you have to know what the myths are before you debunk them. You should interview a developer who grew up with PCs to find that out. I still have no idea how a mainframe is structurally different from a huge multicore PC, where the "batch processing" comes in (does it still?), or why the AS/400-style text interface (still?) seems to be the norm.

  6. People have long had misconceptions about "big iron" computers. Many years ago, I was a computer tech, even before there was such a thing as a "PC". I worked for a telecommunications company and most of our computers were used for message switching. On one occasion, a TV station wanted to do a piece on our computers. However, they didn't think the computers were doing anything, so we had to run diagnostics on the tape stands, to make them look busy. 🙂
    Computers have advanced over the years. Way back in the dark ages, computers were built for either business or science & engineering. IBM's early business computers used fixed-point decimal, while the S&E systems used floating point. This also applied to computer languages, with COBOL used for business and FORTRAN for S&E. This was necessary because the computers back then were so limited that they had to be customized for the intended market. Later, as computers became more powerful, they became general purpose. Also, after the introduction of minicomputers and, later, personal computers, mainframes continued for high-performance applications. They also evolved from a single CPU with I/O processors to multi-CPU, just like PCs. These days, one of my cousins, a nuclear physicist, uses a "mainframe", actually a multi-CPU system running Linux, in his studies of neutrinos.
    BTW, regarding the 3270 green screen, I used to work at IBM Canada in the late 90s, as an OS/2 product specialist. In that position, I used to support an app called "Personal Communications", which provided 3270 and 5250 emulation, along with telnet.
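A quick aside on the fixed-point decimal versus floating-point split described in the comment above: the practical reason business machines and COBOL leaned on decimal arithmetic is that binary floating point cannot represent most decimal fractions exactly, which matters when the numbers are money. The short C sketch below is purely illustrative (it is not tied to any IBM hardware or to COBOL packed decimal) and shows the drift:

    /* Why business computing favoured fixed-point decimal:
     * 0.10 has no exact binary floating-point representation, so repeated
     * additions drift slightly, while integer "cents" stay exact. */
    #include <stdio.h>

    int main(void)
    {
        double dollars = 0.0;
        long long cents = 0;            /* fixed-point: whole cents */

        for (int i = 0; i < 1000; i++) {
            dollars += 0.10;            /* accumulates rounding error */
            cents += 10;                /* exact */
        }

        printf("floating point: %.15f\n", dollars);   /* typically not exactly 100 */
        printf("fixed point   : %lld.%02lld\n", cents / 100, cents % 100);
        return 0;
    }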

  7. IBM should look to what DEC (RIP) did that kept VMS/OpenVMS alive as long as it has been: the hobbyist community. I'd love to play with z/OS myself, but IBM has refused to allow access to it via Hercules, not because it won't run there, but simply because of licensing. If IBM made the technology accessible to the hobbyist community, they'd probably not have to make videos like this to convince people that the z/OS platform is good; they'd have a lot of folks who would do that work for them, and for free. I'd be willing to bet that there would be tons of "how-to" videos on z/OS, and videos extolling its virtues, if it were made available to the hobbyist community… Just my 2 cents' worth…

  8. I never understood why IBM was so intent on making mainframes seem affordable. They're not! And that's OK. You're paying (handsomely) for very specific benefits that are only worth it to a small portion of the overall compute market.

    I also don't understand why IBM is intent on throwing every workload onto these things. A lot will never work on Z, like commercial applications compiled for x86 on Windows/Linux, so you still have to run that platform alongside Z in most cases. And for your average DC workload, you can put it on an x86 blade system or similar for a fraction of what it would cost to run on Z, and you can run a much wider variety of workloads on it to boot.

    I don't hate mainframes, but they're great for only a small niche of workloads. For the rest, they're just overly expensive.

  9. I retired from a large telecommunications company about 9 years ago. I was, basically, a network systems "programmer". During my career, spanning 30+ years, I supported the communications software on IBM mainframes (and IBM-compatible mainframes like Amdahl, until they went out of business) and experienced the transitions from TCAM, VTAM/SNA, and TCP/IP to z/OS Communications Server. We did have a few mainframes that ran VM, but those were "minor" systems. These VM systems could run tons of VM-type servers, which could support tons of websites.

    However, the main systems that I supported were the billing systems. I remember that the company did toy with the idea of migrating to distributed (many smaller-CPU) systems, but those systems were too slow to handle the volume of programs that were run on the mainframes to produce the bills, and this was a 24-hour, 6-to-7-day, 52-week billing operation. So the only systems powerful enough to handle it were IBM mainframes. The company had to continuously upgrade to the latest IBM mainframes, which cost millions and millions of dollars, but that was necessary to be able to process the bills. Some people may scoff at the idea of the "old school" mainframes, but there were really no alternatives. I do not know what has happened in the past 9 years, but that probably still continues.

  10. I've done software development for 20-something years, but never on a mainframe. Recently, I started the zXplore tutorials and began learning more about the hardware. It's a whole other universe! The hardware throughput and the flexibility of configuration are absolutely bonkers. Now it feels like commodity hardware has basically been asleep for the last decade.

  11. There's still a skills shortage for classic Z, as we are all old (or dead), but I guess this video is aimed at getting new customers onto Z hardware. In 1982 I was told "COBOL only has 5 years left." Nope.

  12. Nice marketing. They aren't "myths". Show of hands: how many people are submitting Python jobs to their mainframe (not running scripts in a Linux image on the MF)? How many are still running COBOL (or FORTRAN) that was originally written decades ago?

    Most people run mainframes because they always have. The applications and workloads they run can't be retooled for anything else (time, money, proven reliability, etc.). [COBOL is really hard to translate into anything sensible.] The claims of "40:1" power and space efficiency are "lies, damned lies, and statistics." For the cost of one modern (z16) MF, licenses included, I can build a small data center. (The last one… the power cost less than half what IBM charges for a single z/OS VM.)

  13. My first 10 years in the industry were spent on big mainframes in the banking industry back in the 1980s, doing transaction processing with ATMs and branch automation (and more). We had virtualization and distributed systems long before PCs even had viable networking. It's been interesting watching it all come full circle over the last 40 years. A room full of rack-mounted individual machines always seemed unnecessarily complex compared to a room with a couple of mainframes.

  14. She, at least, has the skill to write with her left hand, from right to left, in mirror writing.
    Perhaps that is needed to be an expert on mainframes as well.
    My biggest concern, no matter how unarguably awesome these systems are: vendor lock-in.
    Your entire system is in the hands of one and only one company.
    And companies sometimes make stupid decisions.
    BTW: I'd love to work on a z system.

  15. In 1980 the IBM 3033 cost $3,000,000. That was with ONE CPU. IBM talked about channels but never gave a good explanation of what they were. I presume they were I/O processors taking that workload off of the single CPU. It was possible to get 2 CPUs on the 3033 but then the price went up significantly.

    Mainframes no longer exist; people just keep using the word. IBM did not use the word 'minicomputer'; that was for scum like DEC and Data General.

    The things called mainframes today are massive collections of microprocessors with great attention and effort put into reliability and redundancy. The true Big Iron is gone.

    There was a benchmark program in the January 1983 issue of BYTE magazine that tested lots of computers and languages. The 3033 running assembly language beat all comers. I rewrote that program years ago in C. I recently purchased a used Dell OptiPlex with a Core i7, and I estimate it is about 70 times as powerful as a 3033, and that is ignoring the inefficiency of the compiled code.
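For readers curious about the benchmark mentioned in the comment above: the January 1983 BYTE test is generally identified as the Sieve of Eratosthenes benchmark, and a minimal C rendition in that spirit might look like the sketch below. The 8190-element flag array and 10 passes follow the commonly cited parameters; this is an illustration, not the magazine's exact listing.

    /* A minimal C version in the spirit of the classic BYTE sieve benchmark. */
    #include <stdio.h>
    #include <time.h>

    #define SIZE 8190
    #define PASSES 10

    int main(void)
    {
        static char flags[SIZE + 1];
        int count = 0;
        clock_t start = clock();

        for (int pass = 0; pass < PASSES; pass++) {
            count = 0;
            for (int i = 0; i <= SIZE; i++)
                flags[i] = 1;
            for (int i = 0; i <= SIZE; i++) {
                if (flags[i]) {
                    int prime = i + i + 3;                 /* candidate odd number */
                    for (int k = i + prime; k <= SIZE; k += prime)
                        flags[k] = 0;                      /* strike out multiples */
                    count++;
                }
            }
        }

        printf("%d primes per pass, %.3f s for %d passes\n",
               count, (double)(clock() - start) / CLOCKS_PER_SEC, PASSES);
        return 0;
    }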

  16. I was a developer and DB2 DBA on z/OS for many years and a big fan of mainframes, BUT it makes no sense at all to be running modern workloads on them which can be run far cheaper on commodity boxes in the cloud. Mainframes are great for running monolithic COBOL/PL/I/Assembler applications using batch jobs or OLTP (CICS/IMS) against, say, DB2 relational databases. That stack is super efficient, and the I/O optimisations in z/OS and the hardware make it sing for those workloads. That's where you do get the 40x efficiency (seriously), and it might even make sense economically (probably not, though).

    But modern workloads are very CPU-intensive compared with COBOL, and very inefficient. All that object-oriented code and those web service calls and XML and JSON: most of it is shunting data around memory without doing anything. But on modern machines costing peanuts, it doesn't matter; what does matter is the abstraction benefit of using modern software techniques. I'm sure the Telum chip is super performant and well integrated, but at the end of the day a mainframe CPU core on the same silicon process won't be that different, performance-wise, from a cloud CPU. Sure, the MCM is highly integrated with its 250-odd cores, but it's so much more expensive that it's irrelevant to non-locked-in customers. With the new world of AI inference acceleration coming fast, which is even more of a CPU burden, there's no way I would want to pay a massive premium for my AI calculations.

  17. Mainframes are "old" in about the same way cars are "old". The mainframe architecture has been around since the 1950s (that's over 70 years now), but the newest systems are completely technologically modern. The reason for the enduring misconception of mainframes being old is that far, FAR more people are familiar with PC-based Linux systems than with mainframes (and it's been that way ever since Linux came out in the early 1990s and became much more prevalent by the late 1990s).

    What helps educate people about IBM's offerings is the ongoing work with zXplore and Z-Days, and Zowe in VS Code. The Linux-optimized systems need demonstrator accounts (like the LinuxONE over at Marist) to allow people to play around with the system. The zD&T system helps, but the single-user restriction of the student license is a handicap (real mainframes in the real world are never single-user, except maybe at IPL), as application development is always going to be multi-user. It would help if IBM could enhance user training with a PC-scale "toy mainframe" that anyone can have. The "toy mainframe" would do enough of what a real mainframe can do to be educational, but compared to a real mainframe it would just do much less and much more slowly (because at its core it's a PC with emulators, a bit like a Linux box or Windows box running Hercules Hyperion).

    As far as mainframes go, IBM has enjoyed decades of generous exposure: everyone knows what an "IBM mainframe" is. Far fewer are aware of Unisys (whose federal business is now part of SAIC) or Groupe Bull (France, which picked up GE's mainframe line by way of Honeywell a few decades ago). Be out there, be very available, support students and hobbyists (there are mainframe hobbyists) along with your installed base, and be friendly. Train them to be IBM customers and make educational materials as available as possible. Those people wind up in data centers debating whether to rack up the upgrades with IBM, Dell, or HP. Have a great IBM day over there, and remember: busy elves are happy elves!

  18. When your application absolutely, positively needs to run without interruption: you can add or swap CPUs, disks, memory, and other components on the fly. Yes, it is expensive.
