I work as a network architect and one of my customers had just one of these in each DC. We were working to add a redundant 7010 to each DC, but that was still weeks away. One day a plastic bag that was floating around in the DC was sucked into the intake of the 7010. It overheated and shut down, and the customer promptly lost their primary DC. 😂 Lesson quickly learnt about plastic bags and redundancy.
@PlaywithJunk · a month ago
That's one of the reasons why datacenters are so picky about cardboard boxes and other waste. Normally you're not allowed to take that inside. We have to unpack smaller items outside.
@kellymoses8566 · a month ago
That was on them for not installing a redundant pair initially. These kinds of core switches are always installed as redundant pairs because of how utterly catastrophic a failure is.
@tomking6006 · 2 months ago
I'm in awe at how deep those module cards are.
@jimfulgham6866 · 2 months ago
I have four of these switches in the 2 Data Centers where I am responsible for the hardware. When fully populated, there is a LOT of fiber to sort through!
@beefchicken · 2 months ago
Ahh, memories. In 2008 I was managing the network for a datacenter that was one of the first in Canada to use Nexus gear. The Nexus line had only just been introduced in January 2008, and NX-OS felt a little half-baked. In one minor-version-increment update they changed the default value of a core config flag; that earned me my worst outage ever. I should have spotted the change in the release notes, but it blew my mind that they made a breaking change to the config syntax in a minor update. NX-OS is based on Linux. It runs an IOS-like command interpreter, but access to a regular shell was possible. You definitely could run Doom on it. The supervisor boards in my 2008 Nexus 7010 had a separate "lights out" management computer that I think also ran Linux. It was used to coordinate software upgrades, and to manage the main supervisor config in the event it got messed up and couldn't boot. I don't see that module installed in your supervisor board; maybe they dropped the option later on.
@cardboardpig · a month ago
A little half-baked? It barely went in the oven haha.
@Vladimir_Varavva · 2 months ago
Classic PWJ format, love it ❤
@RBLevin · 2 months ago
Totally! These are my faves on my fave channel.
@samjones4327 · a month ago
Very cool video! Thanx 4 sharing this beautiful beast with us!!
@ChristopherWoods · 2 months ago
I work in a large org full of networking as critical infrastructure, so you get a bit blasé working around this level of enterprise network hardware. What they can do at the densities they're built at is truly astonishing, but the Cisco stuff is being outclassed in some regards by other brands. We're adopting a lot of Arista for some networks and use cases, and we went through a Juniper phase. But nobody ever got fired for buying Cisco (even if sometimes it isn't quite the right SKU for a specific task, resulting in loads of workarounds or compromises on system design 😅). I'm glad we have Mikrotik and Unifi and other software-based router solutions to play around with at home nowadays. I'd hate to have a Catalyst or Nexus running my power bill up to insane levels 😁
@hariranormal5584 · 2 months ago
Yeah, it seems like many people are jumping off the Cisco boat fast; most seem to end up with Arista nowadays.
@kellymoses8566 · a month ago
@@hariranormal5584 Cisco switches are still pretty decent but they are overpriced
@logikgr · a month ago
Yeah, they lost their edge with the rise of SDN; not to mention their "SMART" licensing can be another pain point.
@justjoe7313 · 2 months ago
The racing circuit could be the old Paul Ricard before the new chicanes were built :)
@leocelente · 2 months ago
Interesting thermal design: instead of laying the large modules flat like a normal server rack, flipping them on their side allows for that up-down air movement instead of the common front-to-back flow.
@kellymoses8566 · a month ago
The new data center networking standard is the VXLAN spine-leaf network. VXLAN lets you run L3 routing everywhere and avoid STP while still being able to tunnel L2 VLANs if needed. It also allows for over 16 million VXLAN virtual networks. Spine-leaf topology combined with 200, 400, or even 800 Gb spine switches and leaf switches with matching uplink ports allows for a lot of cross-sectional bandwidth, which can be increased by just installing a new spine and connecting it to every leaf. It also creates incredibly consistent latency and jitter. If you have the money and the need, you can use giant chassis switches as your spine and connect to a LOT of leaf switches.
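A quick Python sketch of those two numbers (the fabric sizes are made up for illustration; real designs also factor in oversubscription):

```python
# The VNI field in the VXLAN header is 24 bits wide:
print(f"VXLAN virtual networks: {2**24:,}")  # 16,777,216

def leaf_spine_bandwidth_gbps(spines: int, leaves: int, uplink_gbps: int) -> int:
    """Total leaf-to-spine bandwidth: each leaf has one uplink to every spine."""
    return spines * leaves * uplink_gbps

print(leaf_spine_bandwidth_gbps(4, 32, 400), "Gb/s")  # 51200 Gb/s
# Adding a fifth spine (one new uplink per leaf) scales it linearly:
print(leaf_spine_bandwidth_gbps(5, 32, 400), "Gb/s")  # 64000 Gb/s
```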
@GothGuy885 · 2 months ago
Very interesting! You find the coolest stuff to tear down and/or explore! 😃 Thanks for posting!! 👍
@Torkum73 · 2 months ago
The large opening at the front of the fan tray is for the airflow from the fans below, the two vertically mounted ones. They suck air from below and pump it up. If you hide your stuff in there you will block the airflow 🙂
@s8wc3 · 2 months ago
I love the old school Cisco serif font on parts of the machine. It would match the 2600 router that gets used with it!
@brauchmernet · a month ago
Ahh, just fill up the rack with a 12k. I love their VFDs…
@powerspec88 · 2 months ago
Wow, we've got 2 of these still in use at our datacenter! We are finally going to replace them within the next few months!
@chrisridesbicycles · 2 months ago
Seems these things are not used for very long. Do you replace them for power consumption or space savings? I would not expect them to fail after 10 years.
@powerspec88 · a month ago
@@chrisridesbicycles Changing vendors! We are moving away from Cisco.
@GeoffSeeley · 2 months ago
@10:42 Hey! There is a race track and race car on the board! I don't recognize the track layout so it must be an older grand prix track. Man, this thing is a beast!
@GeoffSeeley · 2 months ago
Ah! It's Circuit Paul Ricard in France.
@АндрейАндреевич-з7т · a month ago
Midplane!! That's the thing you really want from these switches. You can handle 1 Tb x 1 Tb non-blocking in a single rack. Good luck trying to do the same with all this new fancy 9500-something gear. Tier-one operators definitely still need these switches.
@brookerobertson2951 · a month ago
The power supply can output 50 V at 120 A, perfect for an ebike. I can get a full charge in less than 4 minutes 😂
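The back-of-envelope math actually checks out, assuming a typical ~400 Wh pack and ignoring charge tapering:

```latex
P = 50\,\mathrm{V} \times 120\,\mathrm{A} = 6\,\mathrm{kW},
\qquad
t = \frac{0.4\,\mathrm{kWh}}{6\,\mathrm{kW}} = \frac{1}{15}\,\mathrm{h} = 4\,\mathrm{min}
```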
@PlaywithJunk · a month ago
And 10 minutes later you can explain to the fire chief what happened... 🙂
@diamaunt2782 · a month ago
After too much poking around on the web, it's the Paul Ricard original Grand Prix circuit (1970-2001), before they added a chicane on the back straight.
@icovada · a month ago
Funny you start the video with a thunderstorm. The first time I saw a 7010 back in 2013 it was delivered in the rain and the pallet wouldn't fit through the door so we had to unpack it outside
@alexdichi · 2 months ago
Incredible!
@timun4493 · a month ago
I want to see a teardown of those power supplies.
@ChipGuy · 2 months ago
4:50: There is an empty 3rd slot that has the size of a power supply module. Is that really an unused slot for yet another power supply?
@Gr33nMamba · 2 months ago
Yes it is.
@steven44799 · 2 months ago
Sometimes these units were used for entire floors, and then you would have uplinks running up the building to a switch unit per floor. The switch cards could do 30 W of PoE per port max, which would be about 11.5 kW, though that's wildly more than using all the ports for IP phones would draw.
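For reference, the arithmetic behind that figure, assuming all 8 payload slots of a 7010 hold 48-port cards:

```latex
8 \text{ slots} \times 48 \text{ ports} \times 30\,\mathrm{W/port} = 11\,520\,\mathrm{W} \approx 11.5\,\mathrm{kW}
```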
@hafo821 · a month ago
"big fan of a big fans" 🤣
@conkerconk3 · a month ago
I'd buy this just knowing an engineer doodled race cars on those cards
@PlaywithJunk · a month ago
Now you know what to look for on Ebay... 🙂
@douro20 · 2 months ago
A series of switches which at one time was a bit notorious for its buggy software. If you don't know what I am talking about you should see Felix "FX" Lindner's series of talks about that.
@MarekKnapek · 2 months ago
When your redundant power supply has redundant power supply in it. You know, just in case you want to supply some redundant power.
@douro20 · 2 months ago
Next time I'd like to see an F5 BIG IP load balancer/software-defined router and its large amount of extremely dense FPGAs.
@allezvenga7617 · a month ago
Thanks for sharing
@jimsvideos7201 · 2 months ago
A switch (in this sense) that surely cost as much as a modest house, and still no redundant CMOS battery?
@ChrisSmith-tc4df · a month ago
The lifecycle of these devices is less than the lifetime of the battery.
@projectartichoke · 2 months ago
Indianapolis Motor Speedway... Cisco is a partner of the NTT Indycar Series and the Indianapolis Motor Speedway. They supply IT equipment for Penske Entertainment.
@diamaunt2782 · a month ago
Indy is an oval.
@RoyHess666 · 2 months ago
ITRIS One AG!
@Rob2 · 2 months ago
It is good that it has those straps to tie it down, otherwise it would fly away 😀
@DrFrank-xj9bc · 2 months ago
Hello, very interesting device, what a monster. What is the status of this switch? Is it defective, or simply outdated, and will it be scrapped? I guess the location is not your private space, so where was this video shot? Recently, after a concert in the "Batschkapp", I passed by one of the many data processing centers here in Frankfurt am Main: a huge, massive building without any windows, a double fence for security reasons, and no sign outside indicating its purpose or the company behind it. I guess that building is full of such switches as well. It's got several big power stations outside (supply and/or back-up, I guess), and now, after watching your video, I can imagine why those are so big. The Batschkapp is a famous concert hall here, and is heated by the waste heat of the data center.
@MicheIIePucca · 2 months ago
The way I look at the light visibility issue: most network administrators work away from the datacentre and manage it remotely, so viewing the lights may not be that important. That said, Cisco occasionally makes some dumb design decisions.
@jackt9411 · a month ago
Interesting video, but not being familiar with these items, I would have liked to see a layman's explanation of what this 'switch' module does when in service.
@PlaywithJunk · a month ago
Well, it's basically just a huge Ethernet switch with hundreds of ports. The ports can be configured into one giant switch or into several independent virtual switches. I hope you know what an Ethernet switch is....
@snowsnoot · a month ago
Be careful touching the terminals on the PSU after removing it from an energized chassis; the capacitors can hold a charge and deliver a shock.
@PlaywithJunk · a month ago
The "shock" from 50V DC would not frighten me much.... except I short it with a screwdriver *flashbang*
@movax20h · 2 months ago
Nice video. Essentially obsolete now (the 10-slot model can be replaced by a 1U switch with 48x4x25G, at a fraction of the cost and power); the 18-slot one probably still has some uses. The newer 9500 series models are of course the new cool stuff. Still, pretty niche: expensive, and it still has scalability limits. Big datacenters usually build a distributed switching fabric from smaller 48- or 96-port switches. But telcos, governments, and some businesses still like them for some reason. I don't, due to the cost, licenses, meh management, etc.
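The comparison in numbers, assuming 48 ports each broken out to 4x25G:

```latex
48 \times 4 \times 25\,\mathrm{Gb/s} = 4800\,\mathrm{Gb/s} = 4.8\,\mathrm{Tb/s}
```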
@wiziek · 2 months ago
Yeah, but this wasn't always the case, and there are still very big devices like these; they just have up to tens or hundreds of 100 Gb/s or 400 Gb/s ports.
@АндрейАндреевич-з7т · a month ago
Welp, if you can scrap your whole downstream, good luck. The N7K is a switch for a fabric of switches below it, and they are not x86_64-DPDK anywhere. They are hard-working, not some jellyfish that can wait for SW_upgrade_NFV_IoT_MarketingBullShit_pricetag0.12
@f.k.b.16 · 2 months ago
Advertising... The false front looks like a Meraki AP 😂
@douro20 · 2 months ago
Cisco bought Meraki in 2012.
@rabidbigdog · a month ago
I always thought Cisco was a software and training scam, but the hardware looks kinda interesting. Maybe.
@logikgr · a month ago
You are thinking of CompTIA A+. Cisco certifications have always been legit in the industry, a great way to make money ($100K+) without a college degree or trade. Or it was; not sure, I retired from IT 10 years ago.
@l3p3 · 2 months ago
Why would a switch have a high-performance CPU on its supervisor board? What is the supervisor board doing? I thought 99% of the work is done by ASIC chips on the other 3 boards and the crossbar.
@justjoe7313 · 2 months ago
ASICs (well, TCAMs) need instructions on what to do too, and something has to calculate BGP routes on top of that; this is a Layer 3 switch :) There is MUCH MUCH MUCH more to a switch than what you have at home built into the WiFi router :)
@TomStorey96 · 2 months ago
Forwarding is done by ASICs, but something has to control those ASICs and program them with forwarding tables and whatnot. I don't know if these supervisors can do it, but there seems to be a bit of a trend towards supervisor cards being able to run various applications as well, so sticking in a high-performance CPU gives you some extra headroom for that kind of thing. Modern routing engine cards for e.g. Juniper MX routers, and I think SRX5800 firewalls, run a Linux hypervisor with JunOS as a guest. I believe I've also seen the same thing on some QFX switches.
@f.k.b.16 · 2 months ago
It's going to have to run DOOM at some point obviously
@ickipoo · 2 months ago
The ASICs are pretty much pattern matching and queuing engines: "if you see this combination of bits, then put the packet in that queue". If an unknown combination of bits is seen, the ASIC passes it to the supervisor, which does a route table lookup and then updates the ASIC pattern matching tables, so future packets can be forwarded without involving the supervisor. The ASICs end up with the patterns for all the most recently seen routes, but the fast pattern tables are limited and these routers are sold for backbone use and are expected to be able to handle the full global BGP route tables, which are up around a million routes for IPv4 alone. Each of these routes represents a set of constantly changing paths and costs, so there's quite a bit of data and processing involved.
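A toy sketch of that punt-and-program flow (all names invented for illustration; real hardware uses TCAMs rather than dicts, and the tables hold on the order of a million routes):

```python
# A dict stands in for the ASIC's pattern table; a function call stands in
# for the punt to the supervisor.

fast_table: dict[str, int] = {}  # "ASIC" table: destination -> egress queue

FULL_RIB = {"10.0.0.0/8": 1, "192.168.0.0/16": 2}  # supervisor's route table

def supervisor_lookup(dst: str) -> int:
    """Slow path: full route-table lookup on the supervisor CPU."""
    return FULL_RIB.get(dst, 0)  # 0 = default route

def forward(dst: str) -> int:
    if dst in fast_table:           # hit: handled entirely in "hardware"
        return fast_table[dst]
    queue = supervisor_lookup(dst)  # miss: punt to the supervisor...
    fast_table[dst] = queue         # ...which programs the ASIC table
    return queue                    # later packets never leave the ASIC

print(forward("10.0.0.0/8"))  # first packet takes the slow path -> 1
print(forward("10.0.0.0/8"))  # second packet hits the fast table -> 1
```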
@movax20h · 2 months ago
Usually monitoring, control, updating routing tables, QoS setup on flows, etc. These switches were designed around 2012, and lesser CPUs probably didn't work too well.
@randyswenson9575 · a month ago
Obsolete before the first one was installed. It was so lacking in CPU and memory....
@sapperlott · 2 months ago
Looks pretty similar to the Juniper EX9214
@LinusJohansson-yu7cy · a month ago
No WIFI in it? 🤷♂️ Maybe that's why it got thrown out. 😉
@PlaywithJunk · a month ago
For a few $10,000 more you can get a Cisco WLAN controller and some access points...
@AdamL-i3q · a month ago
White noise for days!
@АндрейАндреевич-з7т · a month ago
A big ROADM, plus something as big as a CRS-3, something as large as an ASR9K, something as large as a Nexus 7K, and infrastructure capable of fitting it all in. [ BUT BAM VVVRRRRRAAAAAAM THANK YOU MAN ] Here we go: Google/Alphabet uses their own ASIC CMOS chip designs for their needs )))
@PlaywithJunk · a month ago
I'm not sure if I understand what you want to say...
@АндрейАндреевич-з7т · a month ago
@PlaywithJunk Google/Alphabet don't use this Cisco stuff, that's what I mean
@PlaywithJunk · a month ago
@@АндрейАндреевич-з7т Ah, OK! I guess it is too expensive too. I saw a video of a Google datacenter and they even make the servers themselves. Just a bare board with a bracket to hold a disk drive. I guess when you need 100,000 servers, it pays off to be creative... 🙂
@АндрейАндреевич-з7т · a month ago
@@PlaywithJunk Yes, exactly, you got the point just right