Written version: morethanmoore.substack.com/p/the-future-of-big-iron-telum-ii-and
@D.u.d.e.r · a month ago
It took me a while to watch this pretty deep dive into IBM's Z platform and Telum CPU, but it was more than worth it👍 It's clear and obvious why IBM dominates the field of high availability and zero downtime. Telum and the Z mainframe are a one-of-a-kind "beast" where, besides other important features, latency, error minimization, and correction with redundancy/failover play key roles. Thanks for the interview, and I hope your audio recording hardware will only improve in the future😉
@esra_erimez · 2 months ago
I'm shocked that I understood a lot of this conversation. I think it is a testament to Ian's interview capabilities.
@capability-snob · 2 months ago
IBM are totally hiring! Let's hope they can find a replacement for the manager who laid everyone off, assuming they could be replaced with AI.
@lbgstzockt8493 · 2 months ago
I wonder what working for them would be like. My intuition tells me it's a slow, big company, similar to a government institution, but maybe I'm wrong.
@cryptocsguy9282 · 2 months ago
@@lbgstzockt8493 I bet you're right, judging by their utter failure to keep up with newer tech companies like Microsoft, Apple, etc., and their exit from consumer hardware 20 years ago, but I don't know since I don't work for them.
@henrikoldcorn · 2 months ago
@@cryptocsguy9282 IBM has more than 250k employees; they seem to be doing just fine. I don't know _what_ they're doing, but clearly someone values it.
@TechTechPotato · 2 months ago
It's mostly consulting
@capability-snob · 2 months ago
@@TechTechPotato Hi, not sure if you picked this up: I was referring to the news item earlier this year about IBM laying off a significant percentage of their workforce and relocating those positions to India. Most of those positions are in software consulting, yes, but the message I think we're supposed to take away is that IBM are hiring mostly software engineering consultants in India. Could be that I missed something specific on the hardware side.
@tristan7216 · 2 months ago
Mainframes are a whole other world. 320 MB of cache?! I took a tour of Fishkill when I was in high school in the '80s. They were making these liquid-cooled multilayer ceramic modules with chips in them instead of circuit boards. A whole other world.
@wolpumba4099 · 2 months ago
*IBM's Telum II Processor and Spyre AI Accelerator: A Conversation with Dr. Christian Jacobi*

* *0:00 Introduction:* Discussion about IBM's role in enterprise computing with its Z architecture, focusing on the new Telum II processor and Spyre AI accelerator.
* *2:10 Dr. Jacobi's Background:* Dr. Jacobi, IBM Fellow and CTO of Systems Development, discusses his 22-year career at IBM, starting with the Cell processor and including leadership roles in Z14 and Telum development.
* *3:00 IBM Fellow:* The title signifies the highest technical level at IBM, with responsibilities for advising on broad technical direction.
* *7:30 Z Architecture Ethos:* Focus on high availability, security, and scalability, emphasizing a design-for-purpose approach tailored to mission-critical workloads.
* *8:40 High Availability:* Defined as "eight nines" of availability (99.999999%), translating to approximately one hour of downtime every 11,400 years (see the first sketch after this list).
* *9:10 Monolithic Design:* Shift from multi-chip modules to a monolithic design for Telum, driven by efficiency gains and the ability to integrate more features like AI acceleration and post-quantum security.
* *11:00 Virtual Cache Hierarchy:* Telum utilizes a virtual cache hierarchy, leveraging underutilized L2 cache as virtual L3 and L4, improving effective cache capacity.
* *12:10 Core Design for Reliability:* Emphasis on error detection and recovery mechanisms built into the core design, including redundant cache line support and architectural state checkpoints.
* *14:50 Virtual Cache Performance:* Virtual cache design eliminates the need for replicating cache lines multiple times, leading to efficiency gains. Telum II increases total L2 SRAM from 256MB to 360MB.
* *16:20 Integrated AI:* The integrated AI accelerator is designed to address customer needs for infusing AI into transaction processing at millisecond latency.
* *18:10 AI Utilization:* The centralized AI accelerator offers more compute capacity compared to a distributed approach, efficiently serving the needs of individual cores as required.
* *19:50 Customer & Research Collaboration:* AI integration was driven by customer demand and collaboration with data scientists and application developers, along with insights from IBM Research.
* *21:10 AI Model Types:* Telum supports both smaller, low-latency models for real-time inference and larger language models (LLMs), often used in ensemble methods for improved accuracy (see the second sketch after this list).
* *23:50 Built-in DPU:* The integrated DPU handles IO, cryptography, and connects to the Spyre AI accelerator, enhancing performance and enabling expansion of AI capabilities. It has direct access to memory and its own L2 cache.
* *27:40 Spyre AI Accelerator:* A second-generation AI chip optimized for LLMs, supporting use cases like code assist and general admin assistance within a secure environment. Up to eight can be clustered.
* *30:00 Expanded AI Performance:* The Spyre accelerator, particularly in clustered configurations, enables larger-scale AI workloads within the Z ecosystem.
* *31:40 Samsung Foundry Partnership:* IBM utilizes Samsung's 5nm high-performance process for both Telum II and Spyre, highlighting a positive relationship and successful results.
* *32:40 AI in Chip Design:* IBM is exploring the use of AI in chip design for tasks like simulation screening and knowledge management.
* *35:10 LinuxONE Response:* Positive market response to LinuxONE, which leverages the Z architecture for Linux-based workloads; it is the fastest-growing area of the Z business.
* *37:00 Key Takeaways:* Telum II and Spyre represent IBM's continued innovation in the enterprise chip space, delivering high-performance, secure, and scalable solutions. IBM is actively hiring.

I used gemini-1.5-pro-002 on rocketrecap dot com to summarize the transcript. Cost (if I didn't use the free tier): $0.03. Input tokens: 23443. Output tokens: 859.
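As a sanity check on the "eight nines" figure above, here is a minimal arithmetic sketch (my own, not from the video) converting a number of nines of availability into expected downtime per year:

```python
# Minimal sketch: expected downtime per year for "N nines" of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds_per_year(nines: int) -> float:
    """The unavailable fraction of a year is 10**-nines."""
    return SECONDS_PER_YEAR * 10 ** (-nines)

for n in (6, 8):
    print(f"{n} nines: ~{downtime_seconds_per_year(n):.2f} s of downtime per year")

# Eight nines works out to roughly 0.32 s per year, i.e. about one hour of
# accumulated downtime every ~11,400 years, matching the figure in the talk.
```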
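And a hypothetical sketch of the ensemble pattern mentioned at 21:10: score every transaction with a small, low-latency model in the transaction path, and escalate only the ambiguous cases to a larger model (for example, one running on a Spyre card). The function names and thresholds are illustrative, not IBM's actual API:

```python
from typing import Callable

def score_transaction(
    tx: dict,
    small_model: Callable[[dict], float],   # fast model, runs inline with the transaction
    large_model: Callable[[dict], float],   # slower, higher-accuracy model
    low: float = 0.05,
    high: float = 0.95,
) -> bool:
    """Return True if the transaction should be flagged as fraudulent."""
    p = small_model(tx)
    if p < low:
        return False                 # confidently clean, no escalation needed
    if p > high:
        return True                  # confidently fraudulent
    return large_model(tx) > 0.5     # uncertain band: defer to the bigger model
```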
@JonathanBroome · a month ago
These are incredible. Thank you for bringing these to us; even if I am not the target market, the concept and development are so fascinating. The things IBM are doing with cache quite frankly blow my mind.
@lahma69 · 2 months ago
As always, a very interesting interview with a very interesting person! Thanks for the excellent content Ian.
@LethalBB · 2 months ago
15 mins in and this is grrrreat! More of this.
@fracturedlife1393 · 2 months ago
You always crack me up, must be the way you Telum
@ConsistentlyAwkward · 2 months ago
Thank you for doing this conversation, because how IBM is doing AI is really fascinating to me.
@Lossmars · 2 months ago
Note that there is an echo issue with your microphone and overall background noise each time someone speaks. Otherwise, thank you very much for this super interesting interview.
@EyesOfByes · 2 months ago
IBM Fellow = respect
@freddellmeister · 2 months ago
Interesting to hear about the AI accelerator development and the argument for a shared accelerator as opposed to a distributed accelerator ("Power"). You might also want to probe more into the fact that there was no Power announcement at Hot Chips this year; Power and z are usually in cadence when it comes to the Samsung manufacturing process and should have launched simultaneously. Whatever is portrayed as Power "next gen" might be a simple rebadge of the existing generation from a CPU and systems perspective.
@foobarf8766 · a month ago
IBM has been producing AI accelerators longer than anyone else, with DARPA projects now 10-20 years old. I always found it weird that the "industry" is running tensor math on register-impoverished GPUs, just at high clock speeds. PowerPC is probably better for any given ML task. Clearly energy efficiency is not a requirement in any of these AI/ML shops.
@ChrisJackson-js8rd · 2 months ago
This is one of only a handful of examples of "practical" AI acceleration (i.e. actually useful today, instead of a slow and power-inefficient way to do things better done other ways).
@Razzbow · 2 months ago
Anything about PPC10?
@billlodhia5640 · 2 months ago
What about it? P10 has been around for a couple of years now. P11 is upcoming, and OP10 is still in the weeds due to the blobs.
@freddellmeister · 2 months ago
@@billlodhia5640 Please share how P11 differs from P10; any changes other than the name?
@novantha1 · 2 months ago
I’d love to try out some of these IBM chips as a developer, but unfortunately this category of processor is moderately outside the size of my pocket book 😅
@henrikoldcorn · 2 months ago
18:00 part of the chip, part of the core, part of the chip, part of the core…