iAPX: The first time Intel tried to kill x86

  159,623 views

RetroBytes

A day ago

Comments: 737
@lawrencedoliveiro9104 2 years ago
5:23 Ada was designed for writing highly reliable code that could be used in safety-critical applications. Fun fact: the life-support system on the International Space Station is written in Ada, and runs on ancient Intel 80386 processors. So human lives depend on the reliability of all that.
@RetroBytesUK 2 years ago
Indeed, that was a design goal for it. It's still in use in a lot of military hardware, particularly missile guidance.
@JMiskovsky 2 years ago
@@RetroBytesUK Ada has its roots in a US DoD project. Ada is used in the F-22 too.
@lexacutable 2 years ago
the safety critical design and military applications of Ada always seemed like the whole point of it, to me - odd that it wasn't mentioned in the video; I'd have thought this would be a major factor in Intel's decision to design chips around it.
@JMiskovsky 2 years ago
@@lexacutable Yeah, the flight computer of the F-22 is based on something obscure also. There is the military i960. And then there is the x87 arch. (Yes, x87.)
@RetroBytesUK 2 years ago
@@lexacutable Apparently the critical factor for Intel was the growing popularity of Ada in university comp sci departments. They thought that trend would continue out in the wider world and it would become a major language for application development. They were quite wide of the mark on that one; it turned into more of a niche language for military applications.
@davidfrischknecht8261 2 years ago
Actually, Microsoft didn't write DOS. They bought QDOS from Seattle Computer Products after they had told IBM they had an OS for the 8088.
@RetroBytesUK 2 years ago
It also turns out Seattle Computer Products did not write all of it either. Thanks to a lawsuit many years later, we know the similarities to CP/M were not accidental.
@petenikolic5244 2 years ago
@@RetroBytesUK At last, someone else who knows the truth of the matter: CP/M and DOS are brothers.
@grey5626 2 years ago
@@petenikolic5244 nah, not brothers, unless your idea of brotherhood is Cain and Abel? Gary Kildall's CP/M was the star student, while Tim Patterson was the asshole who copied off of Kildall's answers from a leaked beta of CP/M-86 and then sold them to the college drop out robber baron dip it.sh Bill Gates who in turn sold his stolen goods to the unscrupulous IBM. Gary Kildall didn't just run Digital Research, he didn't just host The Computer Chronicles, he also worked at the Naval Postgraduate School in Monterey, California collaborating with military postdoc researchers on advanced research that was often behind doors which required clearance with retinal scanners. It's even been speculated with some credibility that Gary Kildall's untimely death was a murder made to look like an accident.
@akkudakkupl 2 years ago
Ahh, you mean CP/M-86
@tsclly2377 2 years ago
and .. most likely, a large purchase order and integration with a very big US Agency (of clowns)
@sundhaug92 2 years ago
4:42 Ada didn't program the difference engine, which was built after her death, because that wasn't programmable. The difference engines works by repeatedly calculating using finite differences. Ada wrote a program for the analytical engine, a more advanced Babbage design that would've been the first programmable computing machine.
@RetroBytesUK 2 years ago
I know, but I could not find an image of the analytical engine with an appropriate copyright license that meant I could use it.
@sundhaug92 2 years ago
@@RetroBytesUK I understand, but if I didn't hear wrong you also called it the difference engine
@frankwalder3608 2 years ago
Your comment is as informative and interesting as his video, inspiring me to query my favorite search engine. Has the Analytical Engine been built, and was Ada’s program(s) run on it?
@sundhaug92 2 years ago
@@frankwalder3608 afaik not yet, but there have been talks of doing it (building it would take years, it's huge, Babbage was constantly refining his designs, and iirc even the difference engine built is just the calculator part, not the printer)
@wishusknight3009 2 years ago
@@sundhaug92 The printer of the difference engine was built, if I recall, though in a somewhat simplified form from the original design. And the analytical engine has been constructed in emulation as a proof of concept. That is about it so far. Several attempts at starting a mechanical build have met with money issues. This is not a device a machinist could make in his spare time over a number of years; I imagine it would take a team of at least a dozen people.
@MrPir84free 2 years ago
That programming reminds me very much of a "professional" I used to know; he worked on a Navy base and wrote his own code. Professional, in this case, meant he received a paycheck. The dataset came from COBOL, very structured, each column with a specific set of codes/data meaning something. He wrote some interpreted BASIC code to break down the dataset.

It went like this: he would first convert the ASCII character into a number. There are 26 lowercase characters, 26 uppercase characters, and 10 digits (zero through nine). So first he converted the single ASCII character into a number. Then he used 62 IF...THEN statements to make his comparison. Mind you, it had to go through ALL 62 IF...THEN statements. This was done for several columns of data PER line, so the instructions carried out really stacked up. Then, based upon the output, he'd reconvert the ASCII code back to a character or digit, using another 62 IF...THEN statements. And there were about 160 columns of data for each line of the report he was processing, with a single report consisting of anywhere from a few thousand lines up to a few hundred thousand rows.

Needless to say, with all of the IF...THEN statements (not IF-THEN-ELSE) it was painstakingly slow. A small report might take 2 to 3 hours to process; larger reports, well, we're talking an all-day affair, with ZERO output till it was completed, so most who ran it thought the code had crashed. In comparison, I wrote a separate routine using QuickBASIC, from scratch, not even knowing his code existed, using ON...GOSUB and ON...GOTO statements, with the complete report taking about 1 1/2 minutes, and every tenth line output to screen so one knew it was actually working. Only later did I find out his code already existed, but no one used it due to how long it took to run.

Prior to this, the office in question had 8 to 10 people fully engaged, 5 to 6 days a week, 8 to 10 hours a day, manually processing these reports: reading the computer printouts and transferring the needed rows to another report, by hand, on electric typewriters. Yes, this is what they did... as stupid as that seems...
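The contrast described above can be sketched in Python (a hypothetical stand-in; the originals were interpreted BASIC and QuickBASIC). `classify_if_chain` tests every candidate in sequence, like the 62 IF...THEN lines, while `classify_dispatch` does the one-lookup equivalent of an ON...GOSUB jump table:

```python
import string

# Dispatch table built once: character -> classification.
# The moral equivalent of ON...GOSUB's jump table.
KIND = {}
for c in string.digits:
    KIND[c] = "digit"
for c in string.ascii_uppercase:
    KIND[c] = "upper"
for c in string.ascii_lowercase:
    KIND[c] = "lower"

def classify_if_chain(ch):
    # The slow style: every candidate tested in sequence. The original
    # had 62 separate IF...THEN lines (no ELSE), all executed every time.
    kind = "other"
    for candidate in string.digits:
        if ch == candidate:
            kind = "digit"
    for candidate in string.ascii_uppercase:
        if ch == candidate:
            kind = "upper"
    for candidate in string.ascii_lowercase:
        if ch == candidate:
            kind = "lower"
    return kind

def classify_dispatch(ch):
    # The fast style: one constant-time table lookup per character.
    return KIND.get(ch, "other")
```

Over a few hundred thousand rows of 160 columns each, the difference between 62 comparisons per character and one lookup per character is roughly the difference between an all-day run and a couple of minutes.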
@lucasrem 1 year ago
Navy + COBOL? Are you a Korean War guy, or older?
@HappyBeezerStudios 10 months ago
@@lucasrem Could be COBOL-2023, the most recent release :) Yes, the old stuff like COBOL, Lisp, Fortran, ALGOL etc. is still updated and used.
@dlewis9760 9 months ago
@@HappyBeezerStudios I just retired in Oct 2023, after 40+ years programming, almost all in the banking business. You get stuff from vendors and clients that's obscure and obsolete. It was probably in 2021 that I got a record layout to go with the data; the record layout was written in COBOL. A buddy at work says "I've got that record layout. You won't believe what it's written in." He was correct; I actually do believe it. The amount of stuff still written around 80 characters is ridiculous. NACHA, or ACH, is built around 94-character records. There were 96-column cards that never took off; IBM pushed them for the System/3. I always wondered if the 94-character record layouts were based on that. Stick a carriage return/line feed on the end and you are in business.
@serifini2469 2 years ago
I'm guessing one of the subsequent attempts at Intel shooting itself in the foot would be the 1989 release of the i860 processor. For this one they decided that the 432 was such a disaster that what they needed to do was the exact opposite: implement a RISC that would make any other RISC chip look like a VAX. So we ended up with a processor where the pipelines and delays were all visible to the programmer. If you issued an instruction, the result would only appear in the result register a couple of instructions later, and for a jump you had to remember that the following sequential instructions were still in the pipeline and would get executed anyway. Plus, context-switch state saves were again the responsibility of the programmer. This state also included the pipelines (yes, there were several), and saving and restoring them was your job. All this meant that the code to do a context switch to handle an interrupt ran into several hundred instructions. Not great when one of the use cases touted was real-time systems. Again Intel expected compiler writers to save the day, with the same results as before. On paper, and for some carefully chosen and crafted assembly code examples, the processor performance was blistering. For everyday use, less so, and debugging was a nightmare.
@treelineresearch3387 2 years ago
That kinda explains why I only ever saw i860 end up in things like graphics processors and printer/typesetter engines, things that were really more like DSP and stream processing tasks. If you're transforming geometry in a RealityEngine you can afford to hand-optimize everything, and you have a bounded set of operations you need to do fast rather than general computation.
@roadrash1021 2 years ago
I worked at two of the companies that made extensive use of the i860 as a math engine. Yeah, you could make it scream for the time, especially if you're into scraping your knuckles in the hardware, but the general impression was what a PITA.
@X_Baron 2 years ago
Intel sure has a long history of thinking they can handle compilers and failing miserably at it.
@pizzablender 2 years ago
I remember that from the perspective of exception handling: the handler needed to correct and drain all those instructions. FWIW, that idea was tried later in other RISC processors in a simpler way, for example "the instruction behind a JUMP always gets executed, just swap them". Simple and good, but newer CPU implementations wouldn't have that slot, so again this was not a future-proof idea.
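The delay-slot idea mentioned above ("the instruction behind a JUMP always gets executed, just swap them") can be modelled in a few lines of Python. This is a toy machine invented purely for illustration, not any real ISA:

```python
# Toy model of a branch delay slot: the instruction immediately after a
# branch ALWAYS executes (it is already in the pipeline), so schedulers
# move useful work into that slot instead of a NOP.

def run(program):
    """program: list of (name, branch_target_or_None); returns the
    sequence of instruction names actually executed."""
    executed = []
    pc = 0
    while pc < len(program):
        name, target = program[pc]
        executed.append(name)
        if name == "branch":
            # The delay-slot instruction at pc+1 executes before the jump.
            if pc + 1 < len(program):
                executed.append(program[pc + 1][0])
            pc = target
        else:
            pc += 1
    return executed

# Naive schedule: the useful "add" sits before the branch; the delay
# slot is wasted on a nop.
naive = [("add", None), ("branch", 4), ("nop", None),
         ("skipped", None), ("done", None)]
# Scheduled: swap them; the add fills the delay slot and still executes.
scheduled = [("branch", 4), ("add", None), ("skipped", None),
             ("unused", None), ("done", None)]

print(run(naive))      # ['add', 'branch', 'nop', 'done']
print(run(scheduled))  # ['branch', 'add', 'done']
```

Both orderings do the add and reach "done", but the scheduled version executes one fewer instruction per branch, which is exactly the win, and exactly the thing later, deeper pipelines could no longer promise.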
@heinzk023 2 years ago
Let me guess: Being a compiler developer at Intel must have been a nightmare. CPU design guy: "Ah, well, our pipelines are a little, ehm, "complex", but I'm sure you compiler guys will take care." Compiler guy: "Oh no, not again"
@antonnym214 2 years ago
The 68000 was a nice chip! It had hardware multiply and divide: an assembly programmer's dream. I know, because I coded in Z-80 and 8080 assembly. Good reporting!
@christopheroliver148 2 years ago
Are you me?
@peterbrowne3268 1 year ago
The 68000 CPU powered the very first Apple Macintosh released in 1984.
@4lpha0ne 1 year ago
Yeah, it had so many registers compared to x86, nice addressing modes and useful instructions (as you mentioned, just that mul and div were microcoded until later 68k family versions appeared), and a 32-bit register width.
@vikiai4241 1 year ago
I recall reading somewhere that the IBM engineers designing the PC were quite interested in the 68000, but it wasn't going to be released (beyond engineering samples) in time for their deadline.
@earx23 8 months ago
It was. Intel 8086 had it too, but on 68000 you had it on all 8 data registers. On Intel just on a single register. Everything had to be moved around a lot more. Intel was well behind Motorola.. I consider the 486 the first CPU where they actually had caught up a little, but the 68060 still bested the Pentium. Why? General purpose registers, baby.
@steveunderwood3683 2 years ago
The 8086/8088 was a stopgap stretch of the 8 bit 8080, started as a student project in the middle of iAPX development. iAPX was taking too long, and interesting 16 bit devices were being developed by their competitors. They badly needed that stopgap. A lot of the most successful CPU designs were only intended as stopgap measures, while the grand plan took shape. The video doesn't mention the other, and more interesting, Intel design that went nowhere - the i860. Conceptually, that had good potential for HPC type applications, but the cost structure never really worked.
@alanmoore2197 2 years ago
It also doesn't mention that the i960 embedded processor family was created by defeaturing 432 based designs - and this went on to be a volume product for Intel in computer peripherals (especially printers). At the time the 432 was in development the x86 didn't exist as a product family and intel was best known as a memory company. Hindsight is easy - but at the time Intel was exploring many possible future product lines. The i960 family pioneered superscalar implementation which later appeared in x86 products. The i750 family implemented SIMD processing for media that later evolved into MMX. You have to consider all these products as contributing microarchitecture ideas and proofs of concept that could be applied later across other product lines. Even the dead ends yielded fruit.
@80s_Gamr 1 year ago
He said at the beginning of the video that he was only going to talk about Intel's first attempt at something different than x86.
@fajajara 1 year ago
@@alanmoore2197 …and in F-22 Raptor.
@charleshines2142 1 year ago
iAPX must have taken too long from trying to fix its inherent flaws. It seems they dodged the bullet when they developed the 8086 and 8088. Then they had the terrible Itanium. I guess the best thing it had was a somewhat nice name; it almost sounds like something on the periodic table.
@HappyBeezerStudios 10 months ago
Interestingly, NetBurst and the Pentium 4 were also sort of a stopgap, because the IA-64 design wasn't ready for consumer markets. Yes, we only got the Pentium 4 because Itanium was too expensive to bring to home users, and because with later models they removed the IA-32 hardware.
@Vanders456 2 years ago
Intel building the iAPX: Oh no this massively complicated large-die CPU with lots of high-level language support that relies on the compiler to be good *sucks* Intel 20 years later: What if we built a massively complicated large-die CPU with lots of high-level language support that relies on the compiler to be good?
@scottlarson1548 2 years ago
Didn't Intel also build a RISC processor that... relied on the compiler to be good?
@acorredorv 2 years ago
You could almost just change the audio on the video and replace iAPX with IA64 and it would be the same.
@wishusknight3009 2 years ago
The IA64 was not quite as high level as the iAPX was. And the difference is it could actually perform exceptionally well in a few things, if not the best in market. But outside of a few niche workloads where it shined, it was rather pedestrian in some rather common and ubiquitous server workloads; this and its slow x86 compatibility mode were its biggest problems. In the areas where it was lousy, it couldn't even outperform the best of x86 at the time. And partway through its development Intel seemed to lose interest, relegated it to older process nodes, and didn't give it a large development team to iterate on it over generations. It got a few of the AMD64 features ported to it and whatnot, but it was left languishing after Itanium 2.
@DorperSystems 1 year ago
@@scottlarson1548 all RISC processors rely on the compiler to be good.
@scottlarson1548 1 year ago
@@DorperSystems And CISC processors don't need good compilers?
@jecelassumpcaojr890 2 years ago
One more thing that really hurt the 432's performance was the 16 bit packet bus. This was a 32 bit processor, but used only 16 lines for everything. Reading a word from memory meant sending the command and part of the address, then the rest of the address, then getting half of the data and finally getting the rest of the data. There were no user visible registers so each instruction meant 3 or 4 memory accesses, all over these highly multiplexed lines.
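Putting rough numbers on the description above (all the figures here are assumptions drawn from that description, just to quantify it):

```python
# 432 packet bus, as described above: 16 lines multiplexed for both
# address and data. A 32-bit read costs roughly four transfers:
# command + low address, high address, low data half, high data half.
TRANSFERS_PER_READ = 4

# No user-visible registers, so each instruction meant 3-4 memory
# accesses; take the midpoint for a ballpark figure.
MEM_ACCESSES_PER_INSTR = 3.5

bus_transfers_per_instr = TRANSFERS_PER_READ * MEM_ACCESSES_PER_INSTR
print(bus_transfers_per_instr)  # 14.0
```

Around fourteen bus transfers per instruction, on a shared 16-line bus, goes a long way toward explaining the 432's performance numbers.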
@wishusknight3009 2 years ago
It was also how poorly the IP blocks were arranged on the two packages, requiring up to multiple jumps across both chips to complete an instruction. Someone I knew surmised a single-chip solution with some tweaks could have had IPC higher than a 486! He saw quite a lot of potential in it and remarked it was too bad Intel pulled the plug so soon.
@HappyBeezerStudios 10 months ago
To be fair, that kind of limitation didn't hurt the IBM PC: the chosen 8088 was a 16-bit CPU on an 8-bit bus.
@wishusknight3009 7 months ago
@@HappyBeezerStudios The 8088 had a full register set though, which will make some difference. Note the 8086 only had a few percent performance advantage over the 8088; it really couldn't use the bandwidth up.
@IanSlothieRolfe 2 years ago
Back in 1980 I was off to university to do a Computer Science degree, and was also building a computer at the same time. I'd been mucking about with digital electronics for a number of years and, with the confidence of a teenager who didn't know whether he had the ability to do so, I had designed several processor cards based on the TMS9900, 6809 and others (all on paper, untested!). I had read about the iAPX 432 in several trade magazines and was quite excited by the information I was seeing, although that was mostly marketing. I tried getting more technical data through the university (because Intel wouldn't talk to hobbyists!) but found information thin on the ground, and then six months or so later all the news was basically saying what a lemon the architecture was, so I lost interest, as it appears the rest of the world did. About a decade later my homebrew computer booted up with its 6809 processor; I played with it for a few weeks, then it sat in storage 20 years, because "real" computers didn't require me to write operating systems, compilers etc :D
@johnallen1901 2 years ago
Around 1989 I was building my own home-brew computer around a NEC V30 (8086 clone), an Intel 82786 graphics coprocessor, and three SID chips for sound, because why not? Everything was hand-wired and soldered, and even with tons of discrete logic chips, I managed to get it operating at 10 MHz. Initially equipped with 64K of static RAM, it was my senior project in college. Instead of the iAPX, I was interested in the National Semiconductor 32000 CPUs, and my college roommate and I had big plans to design and build a computer around that platform. A 32032 CPU/MMU/FPU setup was about as VAX-like as you could get, with its full 32-bit architecture and highly orthogonal CISC instruction set. Unfortunately computer engineering was not yet a thing at our college, so we were on our own and simply didn't have the resources to get that project off the ground.
@Mnnvint 2 years ago
@@johnallen1901 That sounds impressive! It's actually not the first time I see people talking about their late 80s homebrew computers in youtube comments... if you (or any of the others) still had it and could show it off in a video, that would be beyond cool. Three SIDs at 10mhz, yes please :)
@javabeanz8549 2 years ago
​@@johnallen1901 I thought that the V30 was the 80186 or 80188 instruction and package compatible CPU?
@johnallen1901 2 years ago
@@javabeanz8549 Yes, the V20 and V30 were pin compatible with the 8088 and 8086, and both CPUs included the new 80186 instructions and faster execution.
@javabeanz8549 2 years ago
@@johnallen1901 Gotcha! 8086 replacement. I never did see many of the 80186/80188 except some specs on industrial equipment. I did buy a V20 chip for my first XT clone, from the Frys in Sunnyvale, back in the day. The performance boost was noticeable.
@brandonm750 2 years ago
Have you ever heard the tragedy of iAPX the Slow? I thought not, it's not a story Intel would tell. Very great video, loved it.
@tschak909 2 years ago
It wasn't just that the processor returned by value. Values were literally wrapped in system objects, each with all sorts of context wrapped around it. You put all this together, and you'd have a subroutine exit take hundreds of nanoseconds. This is when all the system designers who WERE interested in this, tell the Intel Rep, "This is garbage." and leave the room.
@samiraperi467 2 years ago
Holy fuck that's stupid.
@WildBikerBill 2 years ago
Correction: "...subroutine exit take hundreds of microseconds." Clock rates and memory speeds were many orders of magnitude slower than today.
@ChannelSho 2 years ago
Allegedly the "X" stood for arCHItecture, as in the Greek letter chi. Feels like grasping at straws though.
@RetroBytesUK 2 years ago
Oh, that really is grasping, isn't it.
@Autotrope 2 years ago
Ha I was thinking maybe the "ect" in architecture sounded a little like an "x"
@AttilaAsztalos 1 year ago
Well, technically speaking, no more tenuous than the "X" in CHristmas...
@teddy4782 1 year ago
I always assumed the "x" was basically just a wildcard. 386, 486, 586, etc .....just became x86 because the underlying architecture was the same (or at least the lineage).
@AaronOfMpls 1 year ago
@@AttilaAsztalos or in the academic preprint site _arXive._
@tschak909 2 years ago
Intel would use the term iAPX286 to refer to the 80286, along with the 432 in sales literature. Intel had intended the 286 for small to medium time sharing systems (think the ALTOS systems), and did not have personal computer use anywhere on the radar. It was IBM's use in the AT that changed this strategy.
@herrbonk3635 2 years ago
Yes, a short while they did. I have 8086 and 80286 datasheets with and without "iAPX".
@tschak909 2 years ago
IBM ultimately chose the 8088 because it meant that the I/O work that had been done on the Datamaster/23 could be lifted and brought over. (No joke, this was the tie-breaker)
@colejohnson66 1 year ago
I thought the deal breaker was sourcing the M68k? There were multiple reasons.
@tschak909 1 year ago
@@colejohnson66 The 68K was discarded early on in planning, because not only was there no possibility of a second source (Motorola explicitly never did second source arrangements), there was no production silicon for the 68000 yet (even though the 68000 was announced in 1979, the first engineering samples didn't appear until Feb 1980, and production took another half a year to ramp up). (a lot of ideas were discarded very early on in planning, including the idea to buy Atari and use their 800 system as a basis for the IBM PC, although a soft tooling mock-up was made and photographed) :)
@tschak909 1 year ago
@@colejohnson66 Production samples of the M68K weren't available until Feb 1980, with the production channels only reaching capacity in September 1980. This would have been a serious problem with an August 1981 launch. Motorola's second sourcing also took a bit of time, because of the process changes to go high speed NMOS. Hitachi wouldn't start providing a second source to the 68000 until a year after the PC's launch.
@HappyBeezerStudios 10 months ago
And they moved the S-100 bus into the upper segment. Imagine if the S-100 (and some successors) would be used in modern x86 PCs. And there was an addon card for 8086 on S-100
@reaperinsaltbrine5211 9 months ago
@@tschak909 Actually the opposite: Intel was never big on the idea of second sourcing, and it was IBM and the US govt. (who wanted a more stable supply) that made them agree with AMD. Motorola at the time sold the 68k manufacturing rights to everyone who was willing to manufacture it: Hitachi, Signetics, Philips, etc. Even IBM licensed it and used its microarchitecture to build embedded controllers (like for high-end HDDs) and the lower-end 390s (I believe they used 2 CPU dies with separate microcode in them). I think the main reason IBM chose the 8086/8088 is all the preexisting 8080/Z80 code that was easy to binary-translate to it. The second is cost and business policy: IBM actually partially OWNED Intel at the time.
@leeselectronicwidgets 2 years ago
Ha, great video. I love the hilarious background footage... that tape drive at 2 mins in looks like it's creasing the hell out of the tape. And lots of Pet and TRS-80 footage too!
@RetroBytesUK 2 years ago
That tape drive at 2 mins is a prop from a film. While it's spinning, the traffic light control system in an Italian city is going nuts. Never let Benny Hill near your mainframe.
@leeselectronicwidgets 2 years ago
@@RetroBytesUK Haha!
@criggie 2 years ago
Yeah, looks like we can see through the ferrous material and see clear patches in the carrier backing. I think it's just on show, but it's eating that tape slowly.
@wishusknight3009 2 years ago
Someone I knew in my area had a type of iAPX development machine. He remarked its performance was very underrated, and that it had IPC higher than a 386 when coded properly, and estimated a single-chip derivative with tweaks could outperform a 486 or 68040 in IPC. Sadly he developed dementia and passed away about 10 years ago, and I have never been able to track down the hardware or what happened to it. He had one of the most exotic collections I had ever seen, and he was a very brilliant man before mental illness overtook him. I only got to meet him at the tail end of his lucidity, which was such a shame.
@RonCromberge 2 years ago
You forgot the 80186 processor! Not used in a PC. But it was a huge failure: it couldn't jump back out of protected mode, and that needs a reboot!
@orinokonx01 2 years ago
The 80286 couldn't return to real mode from protected mode without a reboot, either. Bill Gates hated that...
@ScottHenion 8 months ago
The 80186 did not have a protected mode. It was an 8086 with interrupt and bus controller chips brought on die. They were intended for embedded systems although some PC's used them (Tandy) but had compatibility issues due to the different interrupt controller. I worked on systems that used up to 16 80186's and '88's and wrote a lot of code for them in C and ASM. Variants of the 80186 still were made after the 286 and 386 were basically retired. They were common in automotive ECU's, printers, automation and control systems.
@k5sss 2 months ago
@@orinokonx01Sure it could, you just had to triple fault the CPU!
@everTriumph 15 days ago
@@orinokonx01 Shouldn't need to. The only reason for real mode on a machine capable of multi-user, multi-tasking, possibly virtual memory, is to set up the machine prior to launching its protected-mode programs. Returning to real mode would introduce a massive security hole.
@Gurdia 2 years ago
I've seen a bunch of videos on itanium. But this is the first time I've ever even heard of iAPX. There's not a lot of videos on these lesser known topics from the early days of home computing so I appreciate the history videos you've done.
@boblangill6209 2 years ago
I attended a presentation in Portland Oregon where Intel engineers were extolling features of iAPX 432. I remember they discussed how its hardware supported garbage collection addressed a key aspect of running programs written in high level languages. During the Q&A afterwards, someone asked about its reported disappointing performance numbers. Their response didn't refute or address that issue.
@fsfs555 2 years ago
An expensive arch with an inordinate dependency on a magic compiler that never worked? It's the Itanium's granddaddy. x86 was never intended to do what it ended up doing, which is why Intel kept trying to replace it. Too bad the bandaids were so effective as to render most attempts futile. The i860 was Intel's next attempt after the iAPX, a pure RISC design this time, but this also failed for a variety of reasons. The iAPX wasn't unique in using multi-chip CPUs (multi-chip and discrete-logic CPU modules were common in mainframes such as the System/360), but in minis and micros they weren't ideal, and it certainly would have needed to consolidate to a single-chip implementation ASAP to keep up, in both performance and price, with chips such as the 68k and especially the RISC designs that were less than a decade off. Also, this wasn't the only arch that tried to run a high-level language directly: AT&T's CRISP-based Hobbit chips ran C (or they did until development was canceled).
@RetroBytesUK 2 years ago
I did not think many people would have heard of Hobbit, so I went for the Lisp machine as an example, as people at least got to see them in comp sci labs.
@Membrane556 2 years ago
The i860 did find use in a lot of raid controllers and other applications. One thing that killed Itanium was AMD making the X86-64 extensions and the Opteron line.
@fsfs555 2 years ago
@@RetroBytesUK I'm a little bit of a Mac nerd. The Hobbit was pursued for potential use in the Newton, but Hobbit was overpriced and also sucked so it was canned in favor of DEC's StrongARM. Then Be was going to use it in the BeBox but when the Hobbit project was cancelled (because Apple wasn't interested in forking over the millions of dollars demanded by AT&T to continue development), they switched to the PPC 603.
@fsfs555 2 years ago
@@Membrane556 The i960 was fairly widely used in high-end RAID controllers because apparently it was really good at XORing. RetroBytes has a video on the channel about the Itanium that's worth a watch if you haven't already. It was a huge factor to be sure but there was more to it than just the introduction of x86-64 (including that ia64 was too expensive, too slow, and had dreadful x86 emulation).
@douggrove4686 2 years ago
@@Membrane556 ah, that was the i960. The i960 had an XOR op code. Raid does a lot of XOR....
@a500 2 years ago
Yet again a super interesting video. Thank you. I had no idea that this processor existed.
@lcarliner 1 year ago
Burroughs was the first to feature a compiler-friendly hardware architecture, in the B5000 model. It was also the first to feature a virtual memory architecture. As a result, huge teams of software developers were not needed to implement Algol-60 and other major compilers with great reliability and efficiency. Eventually, Burroughs was able to implement the architecture in IC chips.
@MrWoohoo 2 years ago
You should do an episode on the Intel 860 and 960. I remember them being interesting processors but never saw anything use them.
@cathyfarcks1242 2 years ago
Yes that would be interesting. I know SGI used them. They got quite a bit of use as coprocessors I think
@asm2750 2 years ago
i960 was used in the slot machines of the mid 90s. I believe it was also used in the F-16 avionics.
@stanfordlightfoot7079 2 years ago
The i860 was used in the ASCI Red system at Sandia National Laboratories, the first machine to break the TFLOPS barrier.
@grantstevens5 2 years ago
"The Second Time Intel Tried to Kill the x86". Slots in nicely between this video and the Itanic one.
@hye181
@hye181 Жыл бұрын
The i860 was the original target platform for Windows NT; Microsoft was going to build an i860 PC
@stephenjacks8196
@stephenjacks8196 2 жыл бұрын
Correction: the IBM 5100 Basic Computer already existed and used an i8085 chip, which uses the same bus as the i8088. It was not a random choice to pick the 8088.
@JonMasters
@JonMasters 2 жыл бұрын
The iAPX was actually one of the first commercial capability architectures. See contemporary efforts such as CHERI. You might consider a future video on that aspect of the architecture. While it failed, it was ahead of its time in other respects.
@paulblundell8033
@paulblundell8033 2 жыл бұрын
Intel did do another O/S called RMX (it stood for Real-time Multi-tasking Executive), which was used extensively with their range of Multibus boards. It was powerful and popular in control systems (like factory automation). There was also ISIS, which was used on the Intel development systems. I used to service these, and they were a major tool for BT, Marconi, IBM, and anyone wanting to develop a system around a microprocessor, as they had In-Circuit Emulator (ICE) units that let developers test their code before they had proper hardware, or test the hardware itself, since you could remove the processor and run and debug your code instruction by instruction.

During the early 80s there was a massive rush to replace legacy analogue systems with digital equipment controlled by a microprocessor, and Intel had development systems, their ICE units and various compilers (machine code level, PL/M, Pascal, C, to name a few), so a hardware engineer could develop and test his code quicker than with most of the competitors. Fond memories of going to factories where they were putting their first microprocessor in vending machines, petrol pumps, cars, etc. One guy developing his coke dispensing machine wanted a way of networking all the machines (no internet then) and having them act as a super-computer calculating Pi, as they would be spending most of their life doing nothing.
@orinokonx01
@orinokonx01 2 жыл бұрын
I've got an Intel System 310 running iRMX 86! It's actually been rebadged by High Integrity Systems, and has three custom multibus boards which are the core iAPX 432 implementation by them. The iRMX 86 side works well enough, but the Interface Processor on the iAPX 432 processor board refuses to initialise, and I suspect it is due to faulty RAM. Very interesting architecture, and very interesting company. They appear to have banked their future on the architecture being successful!
@paulblundell8033
@paulblundell8033 2 жыл бұрын
@@orinokonx01 I would have had all the service manuals for this back in the day; they all got thrown away on a house move. I think (and this is a big guess) that HIS were involved with the railways, but I used to fix 310s in care homes, airports, railways and motorway control rooms, to name a few. iRMX was also available for the 286, 386, etc.

There was a strong rumour that IBM considered it as the O/S for their PS/2 range instead of OS/2, which sort of makes some sense, as it would break the reliance on Microsoft (who were beginning to get a little too powerful in the industry!) and IBM had historical ties with Intel. When I joined Intel in 1984, IBM bought a percentage of the company to keep it afloat under growing pressure from Japanese chip producers. Using iRMX would have also made sense, as it was not limited on memory size like standard DOS and used protected mode as standard, so it was pretty robust (it had to be for mission-critical systems!). I specialised in Xenix running on the 310 and it got me all round Europe supporting the systems back in the 80s. 😊
@ModusPonenz
@ModusPonenz 2 жыл бұрын
I remember RMX and ISIS. They ran on Multibus-based computers (Multibus being Intel's version of the S100 bus). ISIS had a whole set of editors, assemblers and emulators. While working at Intel, I used to write 8085 code, assemble it, and write the machine code into EPROMs. The EPROMs were installed into the main board of 8085-based systems that tested Intel memory products. As bugs were fixed and features added to the code, I'd remove the EPROM from the system, erase it under an ultraviolet light, edit the code, re-assemble, and re-program the EPROM with the new code.

Although I never wrote code myself on the RMX systems, I worked alongside some folks who did. It was all in PL/M. I recall that RMX ran on either the System 310, System 360 or System 380 boxes; again they were Multibus-based and had various options for CPU cards, memory, IO, and later, networking.

Later I put together a lab that used Xenix on the various Multibus boxes. I was trying to acquire additional hardware to build up the lab and noticed some unused hardware in our internal warehouse inventory that might be useful. It was pallet after pallet of scrapped iAPX 432 boxes. They also used Multibus. I was able to repurpose that hardware (without the iAPX 432 CPU boards) and turn them into 80286-based boxes running Xenix.
@paulblundell8033
@paulblundell8033 2 жыл бұрын
@@ModusPonenz I worked on systems up to Multibus II; it was needed for the 32-bit processors. I left Intel in 1991 when they put my group up for sale. The PC architecture killed the need for their own service engineers. Great times though: I could book myself on any of their courses, and did everything from 8085 assembler through to 386 architecture.
@everTriumph
@everTriumph 15 күн бұрын
@@ModusPonenz I used an MDS system. Quite a performance machine: an 8080 at 2 MHz with ISIS II, assembler and PL/M-80.
@goodgoodstuff
@goodgoodstuff 2 жыл бұрын
Having done ASM for a 286 and a 68000 at university, I have to say, at the time I preferred the 68000.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
Me too if I'm being honest.
@lawrencemanning
@lawrencemanning 2 жыл бұрын
@@RetroBytesUK Ah, if only the 68K had been released a few months earlier, IBM would probably have picked it for the PC. Such a shame, as the x86 is pretty horrible from an ISA POV. FWIW the 68K has... 68,000 transistors, as referenced by Wikipedia. Cool video.
@DosGamerMan
@DosGamerMan 2 жыл бұрын
@@RetroBytesUK All those general purpose 32 bit registers. So good.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
As somebody who had done assembly language on the old PDP-11, it was pretty clear where Motorola’s inspiration came from. ;)
@Membrane556
@Membrane556 2 жыл бұрын
@@lawrencemanning The 68K was more equivalent to the 80286 than the 8086.
@complexacious
@complexacious 2 жыл бұрын
The main takeaway I got from this video was that for all the faults of the CPU, the compiler itself was much, much worse, using hugely inefficient CPU instructions to do simple things when the CPU had far better ways of doing it built in. Much like the Itanium: how much more kindly would we be looking back at it had the compiler writers actually done a really good job? Hell, the "compiler has to organise the instructions" idea is in use today and has been since the Pentium. How much different really is that to the Itanium's model?

I just think that Intel has a history of hiring the "right" people but putting them with the "wrong" teams. They have engineers who come up with really interesting concepts, but then when it comes to implementation it seems like they just fall over and pair it with some really stupid management. Meanwhile on the x86 side they add something that sounds suspiciously like a nerfed version of what the other team was doing, then kill the interesting project or allow it to fail, and then put billions of dollars into assuring that the x86 line doesn't die even when faced with arguably superior competition from other CPUs.

I just remember that when RISC came about it was considered slow and dreadfully inefficient, and it was some time before the advantages of simpler CPUs started to make sense. If the money and backing had been behind language-specific CISC instead of assembly-focused CISC, would we be looking back at x86/68k and even ARM and thinking "boy, what a weird idea that compilers would always have had to turn complex statements into many CPU instructions instead of the CPU just doing it itself. CPUs would have to run at 4GHz just to have enough instruction throughput to resolve all these virtual function calls! LOL!"
@IsaacClancy
@IsaacClancy 2 жыл бұрын
C++ virtual function calls are usually only two instructions on x86 (plus whatever is needed to pass arguments and save volatile registers that are needed after the call, if any). This is almost certainly bound by memory latency and mispredicted indirect jumps rather than by instruction decoding.
@jaybrown6350
@jaybrown6350 2 жыл бұрын
Kinda. The 8088 was a specially modified 8086: Intel had the 8086 16-bit CPU but didn't yet have a 16-bit support chipset for it. Intel modified the 8086 into the 8088 so it could work with the 8-bit support chips for the 8080.
@jaybrown6350
@jaybrown6350 2 жыл бұрын
, That's exactly what I said.
@flynnfaust6004
@flynnfaust6004 2 жыл бұрын
The X in "Architecture" is both silent and invisible.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
🤣
@Waccoon
@Waccoon 2 жыл бұрын
It's hard to get across how bonkers iAPX is unless you read the programming documentation, which I did a few years ago. Not only are instructions variable length in true CISC fashion, but they don't have to align to byte boundaries and can be arbitrary bit lengths. That's just nuts. Even the high-level theory of operation is hard to understand. All of that so they could keep the addressing paths limited to 16-bit (because all that 64K segmentation stuff in x86 worked out so well, didn't it?)

I'm not a fan of RISC, as I feel more compromise between the RISC and CISC worlds might be pretty effective, but iAPX really takes the cake in terms of complexity, and it's no wonder why RISC got so much attention in the mid 80s. It's a shame the video didn't mention the i860 and i960, which were Intel's attempts at making RISC designs, a bit quirky and complicated in their own right. Of course, those processors saw some success in military and embedded applications, and I don't believe they were meant to compete with x86, let alone replace it.

One thing worth mentioning: the 68000 processor used about 40K transistors for the logic, but also about 35K more for the microcode. The transistor density of microcode is much higher than logic, but it's misleading to just chop the microcode out of the transistor count. That said, the 68000 was also a very complex processor, and in the long term probably would not have been able to outperform x86. That's a shame, as I liked programming 68K.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
Have you seen the timing of some of those instructions? That's the part that really surprised me. We are all used to variable instruction times, but this is on a very different scale to other CISC processors. The 68060 shows how they could have continued the line, with complex instructions being broken down in a decode phase into simpler internal instructions for execution, just like Intel did with the Pentium and later chips. I think the fact they could not get the same economies of scale as Intel meant they had to narrow down to one CPU architecture, and they chose PowerPC. That was probably the right move back then, and PowerPC had a good life before it ran out of road and space in the market.
@Waccoon
@Waccoon 2 жыл бұрын
@@RetroBytesUK There are two main issues with 68K that make it complicated to extend the architecture. First, 68K can access memory more than once per instruction, and it also has a fixed 16-bit opcode space limited to 32-bit operations. The only way to properly extend 68K to 64-bit is to either make an alternate ISA or resort to opcode pages. It's doable, but I doubt a modern 68K machine would really have a simpler core than x86. If the 68000 had been designed with 64-bit support from the start, then things might have worked out differently. One of these days I should look into Motorola's attempt at RISC, the 88000. I've read that it was pretty good, but for some reason nobody used it.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
@@Waccoon I believe the reason it struggled was that it kept being delayed time and time again, then was available in very low quantities. So those designing machines around it moved on to another cpu.
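The "don't have to align to byte boundaries" point earlier in this thread is worth a concrete illustration: every field of a 432 instruction had to be extracted at an arbitrary bit offset. A toy decoder sketch in Python (the LSB-first bit order and the field widths are my own assumptions for illustration; the real 432 encoding is more involved):

```python
def read_bits(data: bytes, bitpos: int, nbits: int):
    # Pull nbits from a byte stream starting at an arbitrary *bit* offset,
    # the kind of extraction a bit-aligned decoder has to do for every field.
    value = 0
    for i in range(nbits):
        byte = data[(bitpos + i) // 8]
        value |= ((byte >> ((bitpos + i) % 8)) & 1) << i
    return value, bitpos + nbits

stream = bytes([0b10110100, 0b01101101])
opcode, pos = read_bits(stream, 0, 6)     # a 6-bit field
operand, pos = read_bits(stream, pos, 5)  # a 5-bit field crossing a byte boundary
```

Compare that with a byte-aligned ISA, where the decoder can at least fetch whole bytes and index a table; here even finding the next instruction means tracking a bit position.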
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
7:58 I don’t think the splitting into multiple chips was really a performance or reliability problem back then. Remember that the “big” computers used quite large circuit boards with lots of ECL chips, each with quite a low transistor count. And they still had the performance edge on the little micros back then. Of course, chip counts and pin counts would have added to the cost.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
Cost should never be underestimated in the success of a product. Placing that aside, the big-board CPUs at this point were starting to come to an end; propagation and settling time had become the significant factor limiting their performance. DEC had been very clever in how they grouped logic together in single ICs to avoid the worst consequences. Intel had apparently been less clever in how functionality was split over the packages, so more or less every instruction involved crossing packages. You should see the instruction latency on some of the instructions; some of them are massive, and most of that delay is apparently crossing the packages, sometimes many, many times. It also stopped Intel just increasing the clock rate to get round performance issues, due to the increased noise sensitivity on the chip interconnect. Motorola could knock out higher clock rate 68k chips; that was not an option for Intel.
@wishusknight3009
@wishusknight3009 2 жыл бұрын
@@RetroBytesUK Motorola would try this same concept with the 88000. It was several chips to have full functionality and was a bit of a bear to program. Though its downfall was mostly cost and lack of backwards compatibility.
@billymania11
@billymania11 2 жыл бұрын
The big issue was latency going off chip. The government knew this and sponsored the VLSI project. The breakthroughs and insights gained allowed 32 bit (and later 64 bit) microprocessors to really flourish. The next big step will be 3d chips (many logic layers on top of each other.) The big challenge is heat and the unbelievable complexity in circuitry. Assuming these chips get built, the performance will be truly mind-blowing.
@wishusknight3009
@wishusknight3009 2 жыл бұрын
@@billymania11 3d design has been in mainstream use to some extent for about 10-12 years already starting at 22nm. Albeit early designs were pretty primitive to what we may think of as a 3d lattice of circuits. And wafer stacking has been a thing for several years now. Mostly in the flash and memory space.
@PassiveSmoking
@PassiveSmoking 2 жыл бұрын
To be fair to Intel, relying on a single cash cow for your revenue is not a good strategy from a business standpoint. I mean everybody knows that sooner or later x86 must come to an end as a popular architecture, and the rise of ARM has shown that maybe the x86 days are numbered, especially with it starting to make inroads into what has traditionally been Intel's home turf. But iAPX is a classic case of Second System Syndrome, where you have a successful but quirky and anachronistic product on the market that had a lot of design compromises, and you want the opportunity to basically design it again, only do it right this time, throw in all the bells and whistles you couldn't include in the original product, etc. The engineers get to have a field day, but the resulting product is rarely what anybody in the marketplace actually wants and often under-performs compared to the product it was meant to replace.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
You're right, it's not good to be dependent on a single product line. At that point in time Intel did have a whole bunch of product lines, it's just that most of them were not processors. They were things like floppy drive controller chips, and in the 70s to early 80s RAM chips too. I think they did feel that x86 was a sub-par processor architecture, and, as you say, second system syndrome kicked in with iAPX; they also had a good case of third and fourth system syndrome after that.
@framebuffers
@framebuffers 2 жыл бұрын
Seems like they didn’t learn their lessons of “letting the compiler do the thing” when they did Itanium.
@MoultrieGeek
@MoultrieGeek 2 жыл бұрын
I was thinking along similar lines, "been there, done that, failed hard".
@adul00
@adul00 Жыл бұрын
I have a slight feeling, that it may be kind of the opposite approach. iAPX intended to make writing compiler(s) for it easy. Itanium made writing efficient compilers for it impossible.
@reaperinsaltbrine5211
@reaperinsaltbrine5211 9 ай бұрын
To be fair, it is at least as much HP's fault: they got enamored with VLIW when they bought Apollo and got their PRISM architecture with it. HP was already working on a 3-way VLIW replacement for the PA. Which is a shame, because PA was a very nice and powerful architecture :/
@MonochromeWench
@MonochromeWench 2 жыл бұрын
iAPX was the sort of CISC that RISC was created to combat and that makes modern cisc vs risc arguments look kind of silly
@Δημήτρης-θ7θ
@Δημήτρης-θ7θ 2 жыл бұрын
This. People say things like "x86 is CISC garbage!", and I am like "boy, you haven't seen true CISC".
@wishusknight3009
@wishusknight3009 2 жыл бұрын
All modern CPUs are pretty much RISC internally now. x86 is mostly micro-coded, with a simple internal execution core and an unwieldy behemoth of a decode stage.
@Carewolf
@Carewolf 2 жыл бұрын
@@wishusknight3009 Even RISC instruction sets are micro-coded to something smaller these days.
@wishusknight3009
@wishusknight3009 2 жыл бұрын
@@Carewolf I think you're right. There might be some purpose-built applications where the ISA isn't mostly microcoded into the CPU, but from what I am seeing, most SOCs used in cell phones, for example, microcode most of the ISA if not all of it. Ones like Apple that have more of a controlled ecosystem and a good team of designers may be more native, but that's just a guess from me.
@JashankJeremy
@JashankJeremy 2 жыл бұрын
I've been toying with the idea of building an iAPX 432 emulator for a few years - if only to get my head around how weird the architecture was - but I hadn't known that it was broadly untouched in the emulator space! Perhaps it's time to revisit that project with some more motivation.
@kensmith5694
@kensmith5694 2 жыл бұрын
If you do it in verilog using icarus, you can later port it onto some giant FPGA.
@Membrane556
@Membrane556 2 жыл бұрын
One fact that drove the decision on using the 8088 in the 5150 PC was that they had experience with using Intel's 8085 CPU in the Datamaster and the 8086 in the Displaywriter.
@herrbonk3635
@herrbonk3635 2 жыл бұрын
Yes, and also that the 68000 was too new, expensive, and lacked a second source at the time. (But they also contemplated the Z80, as well as their own processor, among others.)
@wcg66
@wcg66 2 жыл бұрын
My first job was programming Ada for Ready Systems' VRTX operating system, targeting Motorola 68000 VME boards. That is some retro tech soup. Killing the x86 would have done us a favour really; the fact we are stuck on it in 2022 is a bit of a tech failure IMO. I'm glad there is real competition now from ARM-based CPUs.
@marksterling8286
@marksterling8286 2 жыл бұрын
Loved the video. I never got my hands on one, but it brought back some fond memories of the NEC V20. I had a Hyundai XT clone, and a friend had an original IBM 5150 and was convinced that, because it originally cost more and was IBM, it was faster than my Hyundai, until the day we got a copy of Norton SI (may have been something similar). The friendship dynamic changed that day.
@CommandLineCowboy
@CommandLineCowboy 2 жыл бұрын
1:23 That tape unit looks wrong. There doesn't seem to be any device to soften the pull of the take-up reel. On full-height units there are suction channels that descend either side of the head and capstans. The capstans can rapidly shuttle the tape across the head, and you'd see the loop in the channels go up and down. When the tape in either channel shortened to the top of the channel, the reels would spin and equalize the length of tape in the channels. So the only tension on the tape was from the suction of air, and not a mechanical pull from the reels that at high speed would stretch the tape. Slower units might have a simple idler, which was on a spring suspension; you'd see the idler bouncing back and forth and the reels intermittently spinning. Looking at the unit shown, either it's really old and slow or it's a movie/TV prop.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
It is indeed from a movie, one of the first crime capers to feature computer hacking. It took the term cliffhanger very literally for its ending too.
@steevf
@steevf 2 жыл бұрын
@@RetroBytesUK Oh thank god. The way that tape looked like it was getting twisted and jerked was stressing me out. HAHAHA. Still I hate it when movie props don't look right either.
@computer_toucher
@computer_toucher 2 жыл бұрын
So completely different from Itanium, which basically went for "well the compilers will sort it out eventually"?
@wishusknight3009
@wishusknight3009 2 жыл бұрын
The i860 shares more issues with Itanium than 432 does i think.
@Conenion
@Conenion 2 жыл бұрын
> went for "well the compilers will sort it out eventually"?

Right. The sweet spot seems to be the middle ground: either superscalar RISC, or CISC-to-RISC-like translation, which is what Intel has done since the Pentium Pro (AMD shortly after).
@bradsmith7219
@bradsmith7219 2 жыл бұрын
Woo hoo! Thanks for these videos, they're great! You're my favorite retro-computing youtuber.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
Thanks Brad, thats nice of you to say.
@douglasdobson8110
@douglasdobson8110 2 жыл бұрын
do a video on the evolution of Cyrix . . . I'm diggin' this techy stuff . . .
@markg735
@markg735 Жыл бұрын
Intel actually did offer up another operating system during that era. It was called iRMX and was a real-time OS that used lots of those x86 hardware multitasking features.
@hansvetter8653
@hansvetter8653 Жыл бұрын
I had read some Intel manuals about the iAPX 432 back in 1985, during my time as a hardware development engineer. I couldn't find any value argument for the management to invest in that Intel product line.
@TheSulross
@TheSulross 2 жыл бұрын
Well, the 8086 was designed with high-level language support: it had a stack register and the BP stack frame register, and it could do a 16-bit multiply in registers. And the 80186 and 80286 added the pusha and popa instructions to make it more convenient and code-compact to preserve register state and the call stack. Anyone who has tried to write a C compiler for the MOS 6502 can testify to what a brilliant CPU the Intel 8088/86 was by comparison.

All the rival 8-bit processors were resorting to the kludge of bank switching to get past the 64K address space limitation, which is really clunky to support in code. Intel had the brilliance of adding segment registers, thereby making it performant and relatively easy to write programs that used more than 64K for code or for data. Effectively, Intel in circa 1976/77 had already designed a CPU that was friendly for high-level language support. It would take until around 1983 before compilers for languages like C were respectable enough to garner professional use and begin to make software more economical to create vs writing entirely in assembly language, as had hitherto been de rigueur for microcomputer software.
@lawrencemanning
@lawrencemanning 2 жыл бұрын
Though it was late to the 8 bit party, the 6809 is a far better 8 bit. Though they all trounce the 6502, which is horrid and was only popular because it was cheap.
@CarlosPerezChavez
@CarlosPerezChavez 2 жыл бұрын
I like your insights, thank you for this.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
Pusha and popa are also very useful for assembly programmers, as is having a stack register. C did a good job of fitting in well with everything that suited asm too. That was of course not by accident, as it was intended as a systems language to begin with. A lot of high-level languages were nowhere near as machine-friendly as C; in comp sci circles at the time, languages like Ada were seen as the future, and many did not predict the popularity of C as it grew with the spread of Unix in that late-70s to early-80s period.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
Intel’s segmentation scheme was really horrible. It led to the need for “memory models” like “small”, “large” and all the rest of it, each with its own odd quirks and limitations and performance issues, and none of them compatible with each other. Meanwhile, Motorola made the right long-term decision, by designing the 68000 as a cut-down 32-bit processor, with a full, flat 32-bit address space. That cost it in performance in the short term, but it did make upward software compatibility almost painless. Consider that Apple’s Macintosh PCs made the transition to 32-bit in the late 1980s, while Microsoft was still struggling with this in the 1990s.
@lawrencemanning
@lawrencemanning 2 жыл бұрын
@@lawrencedoliveiro9104 Yup. Just a shame that Motorola was a few months late with the 68000; IBM did want the 68k but it wasn't ready. x86, in all its forms, but especially up to the 286, is a crummy ISA. If you ever did Windows 3.1 programming in C, well, you have my condolences. Backwards compatibility, i.e. looking backwards, was always Intel's focus. Segmentation was a direct result of this, vs the 68000, which was designed for the future with its flat 32-bit memory model etc.
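Since segmentation and memory models keep coming up in this thread, here is the arithmetic behind it: in real mode a physical address is segment*16 + offset, truncated to 20 bits. A tiny Python sketch (the example values are arbitrary):

```python
# 8086 real mode: physical address = segment * 16 + offset, on 20 address lines.
def real_mode_phys(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF  # mask to 20 bits, so it wraps

# Many segment:offset pairs alias the same physical byte, which is part of
# why compilers needed "memory models" to decide how pointers worked.
assert real_mode_phys(0x1234, 0x0010) == real_mode_phys(0x1235, 0x0000)
# The top of the 1 MB space wraps back to zero (the A20 gate story).
assert real_mode_phys(0xFFFF, 0x0010) == 0x00000
```

The 68000's flat address space needed none of this: a 32-bit pointer was just a number.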
@ByteMeCompletely
@ByteMeCompletely 2 жыл бұрын
Someone at IBM should have tried harder to contact Gary Kildall. The Z80 or the Motorola 68000 would have made a better PC.
@christopheroliver148
@christopheroliver148 2 жыл бұрын
I'm hoping you meant to write Z8000. Having written Z80 stuff under CP/M, I'll state that the Z80 had many of the issues the 8080 had: a paucity of registers (even given the alternate set and IX/IY), with dedicated functions for a few, i.e. instructions which favored register A or the HL pair. Both the 68k and the Z8000 were clearly thought out as minicomputer processors, and had a far better ISA, better addressing, and an architecture-spec'd MMU (actually two varieties of MMU in the case of the Zilog).
@kennethng8346
@kennethng8346 2 жыл бұрын
The Ada language was a safe bet back then because it was being pushed by the US Department of Defense, which was the origin of a lot of software back then. IBM's decision to go with the Intel processor architecture was because Intel let AMD second-source the processor; IBM's standard was that no part could have a single source. One of the rumors around the 432 was that Intel was pushing it to freeze AMD out of the processor market.
@BuckTravis
@BuckTravis Жыл бұрын
Correct. Ada and the i432 were the chosen language and platform for the F-15. Programmers could write a ton of code in Ada, and once it worked, they would be able to clean up the code in assembly language.
@tconiam
@tconiam Жыл бұрын
Unfortunately for Ada, most of the early compilers were buggy and slow. Ada was pushing the compiler (and PC) technology limits of the day. Today, AdaCore's Gnat Ada compiler is just another language on top of the whole GCC compilation system. The only real drawback to modern Ada is the lack of massive standard libraries of code like Java has.
@k5sss
@k5sss 2 ай бұрын
Pushing out AMD was also a goal of Itanic. Intel never learns that competition keeps them sharp.
@DinHamburg
@DinHamburg 2 жыл бұрын
@RetroBytes - when you run out of topics, here is one: MMUs. Early microprocessors had some registers and some physical address width. Then they found out that they might do something like 'virtual memory' and/or memory protection. The Z8000 and 68000 processor families had separate chips which acted as Memory Management Units. They did translation of a logical address to a physical address; when the requested data was not in physical memory, they aborted the instruction and called a trap handler, which loaded the memory page from disk and restarted/continued the trapped instruction. Some architectures could just restart/redo the instruction, but some required flushing and reloading a lot of internal state. Maybe you can make a video about the various variants: which were the easy ones, which were the tricky ones, and how it is done today (nobody has a separate MMU).
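The translate-or-trap step described above can be sketched in a few lines of Python (the page size, table contents and exception name are all made up for illustration):

```python
# Toy version of MMU translation: split the virtual address into
# (page, offset), look the page up, and trap on a miss.
PAGE_SIZE = 4096

class PageFault(Exception):
    pass

def translate(vaddr, page_table):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # An OS trap handler would load the page from disk, update the
        # table, and restart or continue the faulting instruction.
        raise PageFault(page)
    return frame * PAGE_SIZE + offset

table = {0: 7, 1: 3}               # virtual page -> physical frame
paddr = translate(0x1010, table)   # page 1, offset 0x10 -> frame 3
```

The hard part in those early designs was exactly what the comment says: making the faulting instruction safely restartable after the trap handler returns.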
@connclark2154
@connclark2154 2 жыл бұрын
You should do a video on the intel i860 Risc processor. It was a worthy effort for a high end 3D workstation but it kind of fell on its face because the compilers sucked.
@thefenlanddefencesystem5080
@thefenlanddefencesystem5080 2 жыл бұрын
Rekursiv might be worth a look at, too. Though I don't advise dredging the Forth and Clyde canal to try and find it, that's a retro-acquisition too far.
@kevinbarry71
@kevinbarry71 2 жыл бұрын
Thanks for this video. Another thing people often forget these days: back then Intel was heavily involved in the manufacture of commodity DRAM chips
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
You're right, they did RAM chips, and they also did a lot of controller chips. I think most 8-bit micros that had a disk system in the 80s used an Intel disk controller. Intel was also still a general chip fab back then, and you could contract with them to fabricate your own custom ASIC.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
A key point is that the x86 processor family was not their biggest product then.
@herrbonk3635
@herrbonk3635 2 жыл бұрын
@@lawrencedoliveiro9104 The iAPX project started in 1975, three years before the 8086 (the first x86).
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
3:39 Did you say that the VAX had an IF-THEN instruction in hardware? There’s nothing like that I can recall. It did have an elaborate range of conditional branch and add-one-and-branch and subract-one-and-branch looping instructions, which seemed designed to directly represent FORTRAN FOR-loops. And who can forget the polynomial-evaluation instruction?
@jonathanbuzzard1376
@jonathanbuzzard1376 2 жыл бұрын
The polynomial evaluation instruction is basically a maths co-processor and entirely sensible. It enables one to accelerate the calculation of trigonometric and logarithmic functions. Saying it is silly would be like complaining that an x87 maths co-processor had an instruction to calculate a square root. All you have done is shown your complete ignorance of mathematics. I would note that the ARM CPU has a whole slew of do-something-and-branch instructions, so again not a stupid idea. In fact it sticks in my mind that you can't branch without doing an add; you can always add zero. Pretty popular processor architecture, the ARM, last time I checked.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
@@jonathanbuzzard1376 It “accelerates” nothing. That’s the point.
@jonathanbuzzard1376
@jonathanbuzzard1376 2 жыл бұрын
@@lawrencedoliveiro9104 Oh deary deary me. Next you will be telling me that hardware floating point accelerates nothing. However, given that hardware floating point does indeed make a huge difference when you have floating point arithmetic to do, the claim that a polynomial-evaluation instruction does not accelerate anything simply displays a complete and total ignorance of how you calculate a whole bunch of mathematical functions. You can do it with specific instructions for trigonometric, hyperbolic and logarithmic functions, as the x87 maths co-processors did, or, as the VAX did, you can have a polynomial evaluation instruction that can be used with the appropriate Taylor series expansion for whatever you want to calculate, noting that a Taylor series is just a polynomial. I suggest you read the Wikipedia article on Taylor series en.wikipedia.org/wiki/Taylor_series TL;DR a machine code instruction to do a polynomial expansion sounds like the height of CISC. The reality is that it is nothing more than a generic floating point maths accelerator for trig/log/hyperbolic functions, and completely sensible given that most RISC CPUs have hardware floating point instructions too. The problem is that too many people are ignorant of exactly how you calculate a range of mathematical functions. I would further note that the Babbage Difference Engine was nothing more than a mechanical device for calculating polynomial functions.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
@@jonathanbuzzard1376 The VAX *had* hardware floating-point.
@jonathanbuzzard1376
@jonathanbuzzard1376 2 жыл бұрын
@@lawrencedoliveiro9104 Duh, the polynomial-evaluation instruction *WAS* part of the hardware floating point. What is so hard to get about that? All the trig/log/hyperbolic functions are calculated as a Taylor series expansion, which is a polynomial function. Like I said, if you had read and understood, you could either have specific instructions for each mathematical function *OR*, like the VAX did, have a general purpose polynomial instruction that can be used for any of the given mathematical functions calculated using a Taylor series expansion, so sin, cos, tan, ln, arcsin, arccos, arctan at a minimum. In actual fact the polynomial evaluation instruction is probably better than specific instructions because you can use it to do Fourier series as well.
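To make the Taylor-series discussion above concrete: a POLY-style instruction is essentially a hardware Horner's-rule loop over a table of coefficients. A minimal Python sketch of the same idea, approximating sin(x) from its Taylor coefficients (the coefficient ordering and table layout here are illustrative, not the VAX's actual operand format):

```python
import math

def horner(coeffs, x):
    """Evaluate a polynomial with Horner's rule.

    coeffs run from the highest-degree term down to the constant
    term, the order a POLY-style instruction would walk its table.
    """
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Taylor coefficients for sin(x) up to x**9:
# x - x**3/3! + x**5/5! - x**7/7! + x**9/9!
sin_coeffs = [
    1.0 / math.factorial(9),   # x**9
    0.0,                       # x**8
    -1.0 / math.factorial(7),  # x**7
    0.0,
    1.0 / math.factorial(5),
    0.0,
    -1.0 / math.factorial(3),
    0.0,
    1.0,                       # x**1
    0.0,                       # constant term
]

approx = horner(sin_coeffs, 0.5)  # very close to math.sin(0.5)
```

One generic loop like this covers sin, cos, ln, arctan and friends just by swapping the coefficient table, which is the whole argument for a general polynomial instruction over per-function opcodes.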
@Clavichordist
@Clavichordist 2 жыл бұрын
Very interesting stuff. I'm glad I skipped this chapter in Intel's life! When I used to repair Ontel terminals, I came across one CPU board that had 8008s on it! That was their oldest product I came across. The others were based around 8080 and 8085 CPUs, not counting their Amiga CP/M 2.0 computer which had the requisite Z80.
@DryPaperHammerBro
@DryPaperHammerBro 2 жыл бұрын
Wait, 8085 or 8086?
@Clavichordist
@Clavichordist 2 жыл бұрын
@@DryPaperHammerBro Yes. 8085s and yes, 8086s as well, which I forgot to mention. Really old, ca. 1980-81 and earlier, equipment. They had static memory cards with up to 64K of RAM on some models. There were platter drives and Shugart floppy drives available, as well as word-mover-controllers, PIO and other I/O cards. Their customers included Walgreens Drugstores, Lockheed Martin, Standard Register, Control Data, and many others. I learned more working on this stuff than I did in my tech and engineering classes. Amazing and fun to work on actually.
@DryPaperHammerBro
@DryPaperHammerBro 2 жыл бұрын
@@Clavichordist I only knew of the 86
@samiraperi467
@samiraperi467 2 жыл бұрын
Amiga CP/M 2.0 computer? Does that have anything to do with CBM? I mean, there was the C-128 that had a Z80 for running CP/M, but that wasn't an Amiga.
@grey5626
@grey5626 2 жыл бұрын
@@samiraperi467 yeah, as far as I know the Amiga (previously known as Hi-Toro) was ALWAYS an MC68000 design, Jay Miner purportedly had flight simulators in mind when he created it, and using something to take advantage of the recently invented low cost floppy drives (which is why the Amigas have the brilliant Fast vs Chip RAM and RAM disk paradigms, so as to load as much code into memory to speed up application performance while mitigating the slower speeds of floppy drives without having to rely on taping out large ROMs which are much more expensive).
@LabyrinthMike
@LabyrinthMike 2 жыл бұрын
You commented (mocked) Intel for using iAPX on the 286 chip name. Most CISC machines are implemented using microcode. What if they used the APX technology to implement the x286 by simply replacing the microcode with one that implemented the x86 instruction set plus some enhanced instructions. I don't know that this happened, but it wouldn't have surprised me.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
The 432 its such an odd design with its system objects etc that I doubt it would have been possible to implement that via micro code on a 286. They are vastly different architectures.
@jecelassumpcaojr890
@jecelassumpcaojr890 2 жыл бұрын
@@RetroBytesUK back then nobody called it "A P X" but just "four, three, two" instead because Intel did use the iAPX brand for everything between the 186 and the 386, only dropping it when the 486 came along. The 286 was indeed a 432-lite and does include some very complex objects that the x86 have to support to this day. In fact, it was the desire to play around with one of these features (call task) on the 386 that made Linus start his OS project.
@flippert0
@flippert0 11 ай бұрын
Btw, the 'X' from APX stems from the "chi" in "Ar*chi*tecture" interpreted as Greek letter "X" ("chi").
@cal2127
@cal2127 2 жыл бұрын
Love the channel. These deep dives on obscure CPUs are great. Any chance you could do a deep dive on the i860 and the nCUBE hypercube?
@borisgalos6967
@borisgalos6967 Жыл бұрын
The point of the iAPX 432 was to design a clean, safe, reliable architecture. Had it happened a decade later without the need for backward compatibility, it would have been a superb architecture. Its problem was that it was too advanced for the time and its day never came. The same was true for their Ada compiler. It was, again, designed to be clean and safe at the cost of speed.
@minutemanqvs
@minutemanqvs Жыл бұрын
This channel is awesome, so much history I didn't even know about.
@sammoore2242
@sammoore2242 2 жыл бұрын
As a collector of weird old cpus, a 432 would be the jewel of my collection - of course I'm never going to see one. For 'failed intel x86 killers' I have to settle for the i860 (and maybe an i960) which is still plenty weird, but which sold enough to actually turn up on ebay every once in a while.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
You stand a chance of finding one of those. I have a RAID controller or two with an i960 used for handling the RAID calculations.
2 жыл бұрын
Is the i960 that weird or odd? My 486 133 MHz system has a DAC960 RAID controller card (from a PPro system, I was told) that runs the i960 and has 16 MB of RAM (max 32, I think). Also DEC Alpha NT 4.0 drivers for it, so I could put it in my PWS 500AU if I wanted to. It's way too fast for my 486 PCI bus, BUT I do load the DOS games on 486 LANs the fastest, first in on an MP season of Doom 2 or Duke3D, so there is that XD
@DennisPejcha
@DennisPejcha 2 жыл бұрын
The original i960 design was essentially a RISC version of the i432. It was originally intended to be used to make computer systems with parallel processing and/or fault tolerance. The project was eventually abandoned, but the core of the i960, stripped of almost all of the advanced features became popular as an I/O device controller for a while.
@O.Shawabkeh
@O.Shawabkeh 2 жыл бұрын
Don't miss the channel 'CPU Galaxy'; in one video he showed a gigantic collection of CPUs.
@codewizard58
@codewizard58 2 жыл бұрын
At Imperial College we bought an iAPX432 and SBC86-12 with a Matrox 512K multibus system. Managed to run OPL432, the Smalltalk-style language. When I left Imperial, they sold me the system and I eventually put the iAPX432 on ebay.
@rfvtgbzhn
@rfvtgbzhn Жыл бұрын
8:01 propagation delay doesn't become relevant before you reach the order of magnitude of 1 GHz (at 1 GHz a signal at lightspeed travels 30 cm during 1 clock cycle, in a copper wire still at least 20 cm). The iAPX 432 ran at a maximum clockspeed of 8 MHz.
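The back-of-the-envelope numbers in the comment above are easy to check in a few lines (the ~0.66c velocity factor for copper is an assumed round figure):

```python
C_CM_PER_S = 3.0e10  # speed of light in vacuum, cm/s

def distance_per_cycle_cm(clock_hz, velocity_factor=1.0):
    # Distance a signal travels during one clock period.
    return C_CM_PER_S * velocity_factor / clock_hz

d_iapx432 = distance_per_cycle_cm(8e6)        # 8 MHz: 3750 cm per cycle
d_1ghz    = distance_per_cycle_cm(1e9)        # 1 GHz: 30 cm per cycle
d_copper  = distance_per_cycle_cm(1e9, 0.66)  # ~20 cm in copper at 1 GHz
```

At the 432's 8 MHz clock a signal could cross the board dozens of times per cycle, which supports the point that propagation delay was not the 432's problem.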
@tschak909
@tschak909 2 жыл бұрын
The iAPX-432 was literally the Burroughs B5000 system shrunk down into VLSI. Ultimately, process size, the unbelievably slow subroutine call times, and the absolutely horrible system (bring-up) software support killed this thing.
@christopheroliver148
@christopheroliver148 2 жыл бұрын
Interesting. I thought the big part of the B series was HLL specific microcode programmability. Wasn't i432 a fixed ISA?
@jecelassumpcaojr890
@jecelassumpcaojr890 2 жыл бұрын
@@christopheroliver148 The B1700 was the line with the per language reconfigurable microcode. The B5000 series were fixed Algol computers. The 432 was an evolution of that, being a proper capability based machine. It is interesting that while the Ada and 432 were both started in 1975, the language itself was only defined a few years later and was officially released in 1980. The 432 supported object-oriented programming, which Ada only adopted many years later.
@davidvomlehn4495
@davidvomlehn4495 2 жыл бұрын
Not literally. The Burroughs system allowed you to load your own microcode, which was sort of like a RISCy assembly code. Sort of.
@christopheroliver148
@christopheroliver148 2 жыл бұрын
@@jecelassumpcaojr890 Understood. I had some brief time on a B1000 series machine where a friend worked.
@christopheroliver148
@christopheroliver148 2 жыл бұрын
@@davidvomlehn4495 As Jecel rightly points out, those were later generation Burroughs architectures. I had brief time on a B1900. The shop had full source too, but I was just goofing around with FORTRAN. The shop was mainly a COBOL shop though. I wish I had a nice emulation for this. Hercules is somewhat fun, but a late generation Burroughs emu would be funner.
@tschak909
@tschak909 2 жыл бұрын
also dude. Intel wrote several operating systems:
* ISIS - for 8008's and paper tape
* ISIS-II - for 8080/8085's and floppy disk (hard disk added later)
* ISIS-III - for 8086's with floppy and/or hard disk
* ISIS-IV - a networked operating system for iMDX-430 Networked Development Systems
AND
* iRMX - a preemptively multitasking operating system used in all sorts of embedded applications (and was also the kernel for the GRiD Compass)
@stephenjacks8196
@stephenjacks8196 2 жыл бұрын
FYI, I was at Microsoft when they started using Intel's VTune. Its purpose is to "RISCify" CISC code for faster execution. Example: it removed the i286 "BOUND" instruction, a long multibyte instruction that stalled pipelines. Note the rise of buffer overflow exploits from this time (slowly; the i386 still executes that instruction). A 32-bit version of the 286 had been considered. Like most server chips at the time, it was x86 compatible to boot the BIOS, then switched to a non-x86 32-bit protected mode. Unfortunately the i286 protected mode was more difficult to program. MIPS, Alpha, and PowerPC all supported ARC boot.
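For context, the 286's BOUND instruction mentioned above did in hardware what a bounds-checked language does in software: it compared a signed index against a two-entry bounds table and trapped (interrupt 5) if the index fell outside the range. A rough Python model of that semantics, with the trap replaced by an exception:

```python
def bound_check(index, lower, upper):
    # Models x86 BOUND: an out-of-range index raised interrupt 5;
    # here we raise IndexError instead of trapping.
    if index < lower or index > upper:
        raise IndexError(f"index {index} outside [{lower}, {upper}]")
    return index

buf = list(range(8))
value = buf[bound_check(5, 0, len(buf) - 1)]   # in range: access proceeds

try:
    buf[bound_check(9, 0, len(buf) - 1)]       # would overrun the buffer
    overflow_caught = False
except IndexError:
    overflow_caught = True                      # the "trap" fired
```

The comment's point is that once compilers stopped emitting checks like this for speed, out-of-range accesses went unguarded, which is exactly the raw material of a buffer overflow exploit.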
@DigitalViscosity
@DigitalViscosity 2 жыл бұрын
In the US Military, as a systems developer, we still use Ada: new libraries are written in Ada, and some of the best security libraries are in Ada.
@jeffsadowski
@jeffsadowski 2 жыл бұрын
Seems like a lot of similarities between the APX and Itanium from what I heard of the Itanium. Itanium brought about my first look at EFI boot process and that turned out useful for me. I had Windows 2000, HPUX and linux running on the Itanium. The Itanium was faster at some operations. I had an SGI server with Itaniums that I stuck linux on and had it as a camera server running motion on around 2006 or so.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
There are a lot of parallels between iAPX and Itanium, and also the i860; it seems Intel was doomed to keep creating similar problems for itself.
@thatonekevin3919
@thatonekevin3919 2 жыл бұрын
IA64 failed because of the drawbacks of VLIW. The performance you squeeze out of that is dependent on very complex compile-time decisions.
@stefanl5183
@stefanl5183 2 жыл бұрын
@@thatonekevin3919 Yeah, but that's the opposite of what's being described here. From the description here, iAPX was to make writing compilers easier. IA-64 needed a more complex, smarter compiler to get the best performance. So these two were complete opposites. Also, I think AMD's 64-bit extension of x86, and the resistance to abandoning backward compatibility with legacy x86, were part of the reason IA-64 never took off.
@alanmoore2197
@alanmoore2197 2 жыл бұрын
These are not similar at all - quite the opposite...
@RachaelSA
@RachaelSA 2 жыл бұрын
Thank you for making these :)
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
15:24 No it wasn’t. The 8087 chip looks like it came out a year earlier. In fact, the 8087 was a key contributor of concepts to IEEE 754.
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
It was a key contribution to 754 but does not entirely fulfill it; iAPX is apparently the first chip(s) that did. Apparently these differences are small and insignificant in almost all cases, but they are there. If not for those differences you would be spot on about the 8087; it just narrowly misses the crown, but in a way that really does not matter that much. Also, I was really struggling to find something nice to say about it at this point, so I let it have its very technically first win.
@davidvomlehn4495
@davidvomlehn4495 2 жыл бұрын
Yes, and it was an interesting example of marketing jungle warfare. IEEE-754 included a feature whereby bits of precision would be gradually dropped when your exponent reached the minimum value. DEC's VAX architecture didn't implement this, though they implemented the rest of the standard pretty completely. Intel implemented it and thus provided a demonstration that the feature was realistic. It went into the standard, and Intel could claim better conformance than DEC, who were left trying to explain why it wasn't important. That was a hard sell to many in the numerical computing world and provided Intel a marketing edge. Standards work may be kind of dry, but it has real world consequences.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 жыл бұрын
Gradual underflow was an important feature. Go read some of Prof William Kahan’s writings on this and other numerics topics. They don’t call him ”Mr IEEE754” for nothing.
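Gradual underflow is easy to see with ordinary doubles, since every modern FPU implements the IEEE 754 subnormals being discussed above; a quick Python demonstration:

```python
import sys

smallest_normal = sys.float_info.min  # 2**-1022, about 2.2e-308
smallest_denorm = 5e-324              # 2**-1074, the last stop before zero

# Gradual underflow: values below the normal range lose precision
# bit by bit through denormals instead of flushing straight to zero.
below_normal = smallest_normal / 4    # a denormal, still nonzero
gone = smallest_denorm / 2            # finally rounds to zero
```

On a flush-to-zero machine, `below_normal` would already be 0.0, which is exactly the behaviour the standard's gradual-underflow feature was designed to avoid.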
@80s_Gamr
@80s_Gamr Жыл бұрын
Just to note, Intel didn't jump straight to the 80286. For a brief period there was an 80186. I believe it only went into two or three offerings of PCs, but it was a thing at one point.
@digitalarchaeologist5102
@digitalarchaeologist5102 2 жыл бұрын
On Interactive UNIX, "file /bin/ls" reports "/bin/ls: iAPX 386 executable". Thanks for the explanation...
@Da40kOrks
@Da40kOrks 2 жыл бұрын
Having just watched your Itanium video, seems like a similar mindset between the two...pushing the performance onto the compiler instead of silicon
@davidvomlehn4495
@davidvomlehn4495 2 жыл бұрын
No sin there. Compilers are a key part of the RISC architecture's success. The simpler architecture of RISC makes it easier to optimize instruction choices, and the simplicity of the underlying hardware makes it possible to run fast and to add things like caching, instruction pipelining, and speculative instruction execution. Of course, when the CPU runs screaming fast you have to wait for memory to catch up, but *something* has to be the bottleneck.
@jenniferprime
@jenniferprime 2 жыл бұрын
I love your architecture videos
@mglmouser
@mglmouser 2 жыл бұрын
A bit of precision is required concerning the meaning of "RISC". It means "Reduced [Complexity] Instruction Set Chip". I.e., each operation is simpler to decode and execute, more often than not in a single CPU cycle, whereas CISC might require multiple CPU cycles to execute a single "complex" instruction. This simplicity generally meant that RISC programs need more instructions than CISC ones to compensate.
@drigans2065
@drigans2065 2 жыл бұрын
@RetroBytes amusing video. I'd almost forgotten about the i432. The culture of the times was very much high level languages of the Pascal/Algol kind driving design, Ada being the most prominent/newest kid on the block, and as I recall there were *no* efficient implementations of Ada and it took forever to compile. Because the Ada language specification was very complex and its validation highly rigorous, it just took too long to qualify the compilers, and my guess is Intel ran out of *time* to do a decent compiler. Also, Ada was designed to support stringent operating scenarios in the military space. Ultimately the server market was driven by business scenarios which didn't need such stringency, paving the way for C. So: wrong hardware for the wrong language for what the rest of the world wanted...
@johnmckown1267
@johnmckown1267 2 жыл бұрын
Look at the current IBM "z series" mainframe. It has a lot of "conditional" instructions. Equivalent to "load if" instruction.
@davidvomlehn4495
@davidvomlehn4495 2 жыл бұрын
ARMv7 and earlier have lots of conditional instructions but fewer on ARMv8. Interestingly, RISC-V slices this in a different way by packing a conditional jump into 16 bits of a 32-bit word. The other 16 bits hold instructions like move to register, perform arithmetic, etc. The branch doesn't create a bubble in the pipeline when the target is the next word, so it runs just as fast as an unconditional instruction. Very slick.
@Chriski1994
@Chriski1994 2 жыл бұрын
Loving the tape drives from the Italian Job
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
I love putting in little bits like that for people to spot.
@theantipope4354
@theantipope4354 Жыл бұрын
The really big reason why ADA was invented was as a standard language for defence applications, as specified by the American DoD. The theory was that it was supposed to be provable (in a CompSci sense), & thus less buggy / more reliable. Hence, there was a *lot* of money behind it.
@petermuller608
@petermuller608 2 жыл бұрын
Isn't returning values from procedures the norm? Like how would you return a pointer to your stack, since the stack will be cleaned during the return from procedure
@davidvomlehn4495
@davidvomlehn4495 2 жыл бұрын
Actually, for small-sized values, the most efficient thing is to return values in registers. If it's too big you have to return some or all of it in the stack. (Even if it's large you can return the first few elements in registers)
@garymartin9777
@garymartin9777 Жыл бұрын
IBM chose the 8088 for business reasons, not engineering. Firstly, it was second sourced. At the time, lack of a second source was a deal-breaker. Secondly, between the two sources IBM was more confident the expected volumes could be produced and yielded, which would keep the price low.
@JayMoog
@JayMoog Жыл бұрын
Ada is still in regular use, or at least in the form of VHDL, used to describe electronic hardware. Ada is a really cool language that lends some great features to later languages.
@HappyBeezerStudios
@HappyBeezerStudios 10 ай бұрын
Yup, even Intel saw the 8086 and its descendants as something temporary: from the iAPX 432, to the i960, i860 and IA-64.
@DataWaveTaGo
@DataWaveTaGo 2 жыл бұрын
At 1:11 and on - that tape unit clip is from a scene in the 1969 movie "The Italian Job". Is it PD now, or what?
@aegisofhonor
@aegisofhonor Жыл бұрын
The Fiesta reference comparing its alternate naming to Intel's iAPX 286 would be better made by renaming the Ford Fiesta to the Ford Fiesta Edsel, as the Ford Edsel was essentially Ford's version of the iAPX back in the 1960s.
@klocugh12
@klocugh12 2 жыл бұрын
I had Ada class in college circa 2008, it was mainly about teaching parallel computing concepts.
@orinokonx01
@orinokonx01 2 жыл бұрын
I've got an iAPX 432 system - it consists of three custom multibus boards built by a company called High Integrity Systems, and they are installed into an Intel System 310 that has also been rebadged with the HIS logo on the front. Strangely, HIS left the original parts list sticker on the bottom which lists the original System 310 hardware. The iAPX 432 chips in the system are release 3.2, which I have read was their last revision of the architecture. It boots up using an 8086, into an OS called iRMX 86, and then you are meant to run a program called 'deb432' (a debugger) to then load up iAPX 432 binaries and run them. Unfortunately, the Interface Processor refuses to initialise, and I suspect that is due to faulty RAM on one of the HIS boards. There are ~160 4164 RAM ICs soldered on the board, and I don't currently own a desoldering station. Quite frankly, I'm not even sure I want to attempt to figure out which ICs are faulty... I managed to create a disk image of the MFM hard drive using an adapter designed by David Gesswein. So there is that, at least. Honestly, considering Intel wanted the iAPX 432 to be their first 32-bit architecture back in 1977, and to drive their business forward into the 80s, I suspect the fact that IBM and the clones were so successful, and the fact that the 386 was 32-bit and still capable of running previous x86 binaries, were the main reasons the iAPX 432 was killed (speed issues aside...).
@timr8473
@timr8473 2 жыл бұрын
Great video. I worked for a while with the folk who set up "HIS" in 1981. They had worked for a UK R&D lab called STL (best known for optical fibre transmission) on a joint iAPX project with Intel for a few years to develop reliable computer architectures for high reliability systems such as telecomms switches.. When STL pulled the plug they set up on their own to produce iAPX432 boards and develop high integrity systems. I was a late comer to the team and not convinced about the iAPX approach so decided not to join them. The iAPX was actually started quite some time before Ada arrived and intended to provide in hardware the same kind of high integrity that Ada then achieved through its syntax and compiler. It was therefore somewhat redundant putting all that overhead in the architecture, especially when dependent on a compiler that was rushed to market. Despite that, HIS went on to be quite succesful. Ironically I had joined the team after studying at UC Berkeley, though a bit before their work on RISC. I did however return to Berkeley a year or two later for a most interesting conference on computer architectures. Quite heated with UCB and IBM (901 chip?) on one side and iAPX & VAX on the other.
@DouglasCrockfordEsq
@DouglasCrockfordEsq 2 жыл бұрын
The 432 was not intended to kill the 8086. The problems in the 432 caused the development of the 8086. The 432 was intended to kill the 8080 which had inspired competitors like Zilog and Motorola. The 432 was intended to leapfrog way past them. When it became clear that the 432 was going to be very late, Intel made the 8086 in a mad panic. The 8086 was intended to be source code compatible with Z80, while capable of addressing more than 64K by means of clumsy segment registers.
@truckerallikatuk
@truckerallikatuk 2 жыл бұрын
Missed out the 80186, the stepping stone from the 8088/8086 to the 80286. I know a lot of people forget this existed, it wasn't on the market long, and IBM skipped it before coming out with the 80286 based AT.
@chriswareham
@chriswareham 2 жыл бұрын
A colleague claims he owned an IBM PC that shipped with an 80186. It may be that his memory is faulty and it was actually a clone, as I'm certain other manufacturers briefly used that processor.
@truckerallikatuk
@truckerallikatuk 2 жыл бұрын
@@chriswareham Yeah, the one most likely to use it here in the UK was RM, who sold a lot of machines to education. A lot of their machines were 80186 based, probably bought cheap as the 286 was out soon after.
@ruben_balea
@ruben_balea 2 жыл бұрын
The 80186/80188 were microcontrollers for embedded systems; they had many auxiliary circuits built in that were not compatible with the auxiliary chips that IBM had used around the 8088. Of course they use x86 code, but they couldn't run the IBM BIOS or any 100% compatible BIOS. There was at least an MS-DOS version adapted to work on them, but they still couldn't run programs that used direct access to some of the PC hardware. A few years later Intel released new versions with an integrated 8087 FPU.
@truckerallikatuk
@truckerallikatuk 2 жыл бұрын
@@ruben_balea I've used near PC compatibles with the 80186 chip.
@ruben_balea
@ruben_balea 2 жыл бұрын
@@truckerallikatuk Yeah, I know such things existed, but as far as I know they could not be expanded with PC hardware, for example a VGA or a Sound Blaster
@Conenion
@Conenion 2 жыл бұрын
@13:40 The 286 didn't have an MMU. I don't think Unix would make much sense on such a chip; it certainly wouldn't be much fun. The 386 was the real "makes Unix possible" chip. Linus Torvalds once said he started Linux because he was interested in how the 386 could handle it (more specifically, he was interested in how to implement paging with the 386).
@jwmjr59
@jwmjr59 2 жыл бұрын
I'm getting old. I remember all of this. My first computer was an 80286 and everyone thought it was great. A few years later, my dream machine was a 486.
@davidvomlehn4495
@davidvomlehn4495 2 жыл бұрын
Old? Whippersnapper! I had an 8080-based Altair 8800 and not even the 8800B shown in the video. Ah, the good old days, may they never return!
@LG-qz8om
@LG-qz8om Жыл бұрын
Hey, when everyone was ditching their 286s in favor of 386 i picked up 5 of them for $100 each (which was quite a bargain price when a PC cost $1000-2000 each.). With one computer running Novell 2.10 and the others networked as Clients i wrote a Medical Insurance Claims processing system that at 50% capacity on one phone line processed 50,000 claims/week and beating competitors who were running on mainframes). At $0.50/claim i was earning $25,000/week with those old 286's. We had to hide them in a closet with a bookshelf in front and told clients that the computers were off-site (leaving it to their imagination was better than revealing $500 worth of computers).
@Jerrec
@Jerrec Жыл бұрын
1:38 The tape was inserted wrong or wound wrong on the spool. I hope it wasn't damaged. ;-)
@kasimirdenhertog3516
@kasimirdenhertog3516 2 жыл бұрын
The Intel Trinity by Michael S. Malone offers an interesting insight into Intel’s decision making. At first they thought they’d be a memory company, so this CPU thing was a bit of a tangent to begin with. And indeed the IBM deal was pivotal, and wouldn’t have come about if it wasn’t for the great Robert Noyce.
@quantass
@quantass 2 жыл бұрын
Love your content and presentation style.
@colinstu
@colinstu Жыл бұрын
Could you make a vid about the Intel i960?
@hinzster
@hinzster 2 жыл бұрын
They didn't only choose the V20 because it was running a bit faster than the 8088 (which is still slower than comparable processors, in tradition with the 8080 being a bit worse than the Z80 and quite a bit slower than the 6502 at the same frequency), but because it included an 8080 emulation mode you could run the *real* CP/M on - because CP/M-86 was just as much a hack to run CP/M on the 8086 line of processors as QDOS, later renamed to MS-DOS, was. Also, IBM chose the 8086 (well, the 8088, to save on data bus width and thus the memory chips needed) because there were different suppliers, unlike the 68000, which was made by Motorola and only by Motorola. That tactic was something IBM often used: at least when there were external suppliers of hardware, there had to be at least two. The same method was repeated when they introduced their RISC line of workstations based on the PowerPC architecture, which they would never have used if there weren't at least two suppliers of PowerPC processors. Incidentally, for the x86 architecture this demand for two suppliers was how AMD entered that segment.
@5urg3x
@5urg3x 2 жыл бұрын
Great video once again, I enjoyed it. I never knew about this though I had heard of ADA. Never really knew what it was, just heard of it in passing.
@lawrencemanning
@lawrencemanning 2 жыл бұрын
It’s a nice language. Ada was the DoD's attempt to force their suppliers into producing software in one well designed language, vs the dozens their suppliers were using. VHDL was their attempt at the same for describing hardware. Both have very similar syntax. I learned Ada at university (1995) and VHDL much later, and they are both very verbose, rigid languages, since these are actually good attributes. It's a shame that each isn't more popular.
@dustinm2717
@dustinm2717 2 жыл бұрын
@@lawrencemanning Yeah, I wish something more strict had become more popular in the past, but instead we got C and C++ as the base of basically everything, with their questionable design aspects still haunting us to this day. I wonder how many major security scares would never have happened if C/C++ were less popular and something more rigid had been used to implement a lot of important libraries instead (probably a lot, considering how many can be traced back to things like buffer overflows in C).
@RetroBytesUK
@RetroBytesUK 2 жыл бұрын
I think one of the reasons the MoD liked it was exactly what you outlined: a lot less room for certain classes of errors. I remember being interviewed for one job after uni; they were very keen that I'd done development work in Modula-2 and some Ada, as a lot of the code they were working on was written in Ada for missile guidance. I was less keen once I found out about the missile guidance part of things. C was very much a systems language that became very popular. It's very well suited for its original task, far less well suited for more or less everything else. However, given how well C code performed when compiled, since it did things in a way that was closer to how the hardware worked, it seemed to just take over. Once we no longer needed to worry about that sort of thing so much, it was too late: it was the de facto standard for most things.
@jurepecar9092
@jurepecar9092 2 жыл бұрын
What about the i960? Especially interesting is the military variant i960mx with memory tagging.
@hubbsllc
@hubbsllc 2 жыл бұрын
I remember that CPU. The first really big server I ever bought (six-way PPro/200) had a three-channel SCSI card with one of those for each channel.
@jurepecar9092
@jurepecar9092 2 жыл бұрын
@@hubbsllc Nice, I remember it from SCSI and RAID controllers too. I rediscovered it when I read stories about the F-22 CPU shortage and then dug a bit into what that CPU is... quite fascinating tech, but very niche.
@SignumCruxis0
@SignumCruxis0 2 жыл бұрын
You deserve more subs! Great informative video.
@rhoddavies782
@rhoddavies782 2 жыл бұрын
I thought the iAPX86 and iAPX88 (the "normal" bus interfaces for the iAPX series) were quite successful?