Wow. Two takeaways from this video. The first is, as ever, that your content and your commitment to your international viewers is through the roof. Thank you for the English content, Roman. It's one thing to do a de-lidding video in two languages, but this is unreal. The second is this place! Amazing attention to detail in every single thing. Just fascinating. I have also learned that perhaps my hardware is not as top-tier as I thought /s
@wandersons.santos3389 · 1 year ago
Thanks!
@osgrov · 2 years ago
This is fantastic! Wow, I'm blown away by that site. I work in a datacenter myself, or at least I thought I did before I saw this, lol. This is a completely different level than what I'm used to. Really looking forward to part 2. :) Keep it up Roman, this was an amazing video.
@MIK33EY · 2 years ago
It's mind-blowing, isn't it. Things I'd never thought of are in this installation, e.g. four randomly routed fibre links that don't cross, and purified water.
@MrMartinSchou · 2 years ago
It's not too surprising. WalMart was/is one of the largest users of data centers in the world. These types of businesses end up creating an enormous amount of data, and they need to know what to do with it.
@rstidman · 2 years ago
@@MrMartinSchou Der Bauer is a German supremacist.
@aaronhartwig8007 · 2 years ago
Haha, I was exactly the same. I work in a Data Centre and this one was totally next level!
@stang9806 · 2 years ago
I work in a data center as well, but seeing them handle the parts with no gloves made me wince a bit.
@Murphistic · 2 years ago
OK, hands down this is the most awesome data centre tour I've ever found on YouTube. My mind was blown multiple times: the non-crossing infrastructure, the spill gate for battery acid and the reduced oxygen level.
@yocobicus · 2 years ago
Thank you so much to everyone who made this project possible. I've been waiting for the last 5 1/2 years for a data center walkthrough with this much in-depth knowledge.
@izzieb · 2 years ago
Dr Oetker has an IT business?!! They really do absolutely everything, not just pizzas.
@devilboner · 2 years ago
Just imagine: their company canteen is just frozen pizzas and plastic cups of chocolate mousse EVERY DAY!
@izzieb · 2 years ago
@@devilboner I'll take the mousse, but I'll leave their pizzas.
@cnst2657 · 2 years ago
Not just any pizzas, they have fishstick pizzas.
@der8auer-en · 2 years ago
Their canteen was a restaurant where they cooked fresh food for everyone :D Actually pretty impressive, too
@AlexKidd4Fun · 2 years ago
Just think.. In the USA, Amazon used to just be a book store. 😉
@Azeal · 2 years ago
It's so fascinating how many different technologies (and of course, therefore, technological experts) have to work together perfectly to make an operation like this run so flawlessly. Awesome video!
@BlueRice · 2 years ago
Some manufacturers or companies make their own technologies solely for themselves, designing something clever that works well for exactly what they need. I find that awesome.
@SpuriousECG · 2 years ago
That was a great tour, amazing to see into a modern datacenter.
@TimmyXaero · 2 years ago
You've outdone yourself on this video. Thoroughly enjoyed the tour; very interesting to see all of the aspects of making a stable room for the servers to run smoothly. Thank you for the very unique chance to see behind the scenes of a data center. Danke, Roman. ;)
@eldergeektromeo9868 · 2 years ago
Roman: Thank You again for the peek behind the curtain! Fascinating! And Mahalo to the crew at the data center as well! Just excellent in every way!
@JazekFTW · 2 years ago
It's just amazing that you let us watch how this industry works. Thank you, Roman, for releasing this for the international community.
@johngermain5146 · 2 years ago
Thanks for the tour, it reminds me of all the equipment I used to work on in my career.
@hyperionxxxxx · 2 years ago
Congrats on getting in there to see and touch all those things and thank you for sharing it with us Roman. Seeing the datacenter up close was awesome, seeing your excitement as you described it all, even better.
@diegofernandez4789 · 2 years ago
Thanks for the tour. Love the details you took care of. Can't wait for the continuation.
@DJSammy69. · 2 years ago
Most fascinating info about datacenters! Just an amazing video! Mad props to Roman for doing this in both English and German!!
@andrekz9138 · 2 years ago
I like shows like "How It's Made", but this video is even deeper than that! Major datacenters are a huge engineering achievement.
@PitboyHarmony1 · 2 years ago
That's out of this world, even when compared to other data centres that I have seen tours of. Best. Content. Ever.
@Elysiann · 2 years ago
The gas suppression and aspirating systems were great for me to see. I work in that part of the fire industry here in Australia, and was interested in how (and what brand of) systems are used overseas. I have set up similar systems for data centres here.
@Kenia-sn1cg · 2 years ago
Is it complex to do the installation? I am an engineering student and I am hugely interested in working in such fields
@Elysiann · 2 years ago
@@Kenia-sn1cg Depending on the local requirements, standards, and authorising bodies, it's a hard field to get into. Here in Australia, there is a lot of licensing and training to install gas suppression systems like this. Even small systems (a single tank of suppression agent) require a lot of licensing to install, test, commission and maintain. Depending on the agent used, the volumes, and also the age of the system, there is a lot of regulation. Some older systems used ODP (ozone-depleting) gases, so if one is accidentally discharged, or even intentionally discharged (as in an actual fire), there is a lot of paperwork to inform the relevant environmental agencies. Most suppression systems use non-ODP gases these days, usually carbon dioxide, nitrogen, or nitrogen mixed with other inert gases like argon, or other synthetic agents. There is some skill required in designing gas suppression systems. You'll need to take into account factors like room size, temperature, room pressure, gas dispersal rates, the type of equipment in the room, the types of fire likely to occur (electrical, paper, plastics, etc.) and many more. As for the aspirating systems, most of them operate in a similar manner: they constantly sample the air via capillary tubes or sampling points, through a laser particle reader. This reader measures the obscuration percentage of the air. Some systems can measure as low as 0.01%; by contrast, a smoke detector will normally alarm at 6-8%. So having a dust-free environment is very important. The design of an aspirating system is generally easier, but there are still factors to consider. In most cases, the aspirating system will be part of a larger smoke/fire management system used to trigger the release of the suppression system.
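The staged-alarm logic described above can be sketched in a few lines. This is an illustrative model only: the stage names and thresholds are assumptions loosely based on the figures in the comment (aspirating detectors sensitive down to roughly 0.01% obscuration versus 6-8% for a point detector), not any vendor's actual configuration.

```python
# Sketch of the alarm staging an aspirating smoke detector (ASD) might use.
# Thresholds and stage names are illustrative assumptions, not a real config.

ASD_STAGES = [  # (threshold in % obscuration per metre, stage name)
    (0.01, "alert"),   # earliest warning, investigate
    (0.05, "action"),  # pre-alarm, notify staff
    (0.20, "fire 1"),  # alarm, trigger the fire panel
    (2.00, "fire 2"),  # confirmed fire, release the suppression agent
]

def asd_stage(obscuration_pct_per_m: float) -> str:
    """Return the highest stage whose threshold the reading meets."""
    stage = "normal"
    for threshold, name in ASD_STAGES:
        if obscuration_pct_per_m >= threshold:
            stage = name
    return stage

print(asd_stage(0.003))  # normal - clean data-hall air
print(asd_stage(0.07))   # action
print(asd_stage(3.0))    # fire 2 - would trigger gas release
```

A conventional point detector, alarming only around 6-8%, would effectively collapse all of these stages into one, which is why the dust-free environment and the ASD's early stages matter so much in a data hall.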
@markaerovtec · 2 years ago
Stunning video Roman. Totally amazed at the complexity of the facility.
@Ichigo_Kuro_s · 2 years ago
I like how everything there is clean and organized
@larscramer9411 · 2 years ago
Top notch content. Extremely interesting. Thank you for this glimpse into cutting edge enterprise stuff.
@deeb2011 · 2 years ago
I would like to thank you truly and sincerely for this video. I watched it from start to finish with full attention. I am soon starting a new sales job at a DC company. Quite excited about it, because it's going to be my first experience in the DC industry. This video is very educational and helped me understand the nature of this business so much. Again, thank you so much for all the hard effort you have put in to help people like me. I appreciate it and I wish you more success! Looking forward to part 2.
@TheNewTimeNetwork · 2 years ago
Always a pleasure to look inside a datacenter. I had the chance to look inside a very small one in the basement of an IT service company in Cologne with my CompSci class back in school. We had a bit of extra luck, because the operator decided to reschedule the test run of one diesel generator (by one or two days) so we could witness it live in the gen room. Sadly, no blade servers, POWER or z-Series there, just standard 19-inch Intel x86 servers, Ethernet, and fiber access. Eagerly awaiting part 2.
@chimey2010 · 2 years ago
This is one of the best videos I've seen. SO freaking cool they let you have that level of access. What an incredible place!
@Doctor_X · 2 years ago
This was great! We have been using IBM Power for decades. We have both Z and Power. Rotating out POWER8. Right now I am implementing 2x 1080s and will be migrating from 980s to the 1080s. Running IBM i, AIX, and RHEL.
@Vakcoh · 3 months ago
That was great, Roman. Thank you for the time and effort that went into this, and all your videos!
@droknron · 2 years ago
Amazing video. I love these tours you're doing Roman.
@Jules_Diplopia · 2 years ago
I just love the attention to detail. If I had stayed in the industry after 2001, that is the kind of place that I would have wanted to be working in. Thanks so much for the tour. Loved it. Oh, and Dr Oetker make the best pizzas; we have them here in the Netherlands too. As for the last section with the backup tape drives, I would also want a further backup in a separate city. Back in the day I argued for data from Manchester, Cardiff, Glasgow and London to be backed up in each of the other centres. My bosses thought I was being OTT.
@AJ_UK_LIVE · 2 years ago
There is no such thing as OTT in a datacentre. You have to not only prepare for the obvious, but also the unlikely! Everyone always moans at I.T. when things go wrong, but usually it is because they did not allow enough resources in the first place.
@MIK33EY · 2 years ago
I have to disagree with you on the pizza claim. All frozen pizzas are like eating pieces of hot cardboard. Dr Oetker, Chicago Town, Goodfellas, etc.… they're all terrible.
@Jules_Diplopia · 2 years ago
@@AJ_UK_LIVE True. After I left, the company concerned forgot to keep backups up to date. They suffered a major data loss.
@AJ_UK_LIVE · 2 years ago
@@Jules_Diplopia A little schadenfreude there for you I'm sure.
@MIK33EY · 2 years ago
@@leeroyjenkins0 Still doesn’t change the fact that they’re like eating cardboard and don’t even get me started on what comes out of the oven when compared to the packaging imagery. 😂😂
@guiorgy · 2 years ago
5:32 As someone working in the water industry, my guess is that those are for water softening. Hard water causes the formation of scale deposits on anything the water touches (the insides of the whole cooling system in this case), which reduces the effectiveness of heat transfer and ultimately may even lead to damage to the piping. This is especially problematic in this case, since the hotter the water, the more and the faster the scaling. Depending on how hot the water gets, they may also be using a degassing system to remove oxygen from the water, since in hot water oxygen becomes very corrosive.
@MIK33EY · 2 years ago
London water is so hard it kills kettles if you don't descale once every couple of months, unless you're like me and use bottled water, because you've been put off drinking water boiled in London kettles after seeing what happens.
@crisnmaryfam7344 · 2 years ago
Absolutely. We have well water on my property here in the States, and we have to use a (much smaller) similar device to soften our water. Otherwise it clogs up the pipes and other stuff with iron rust build-up and turns everything yellow or brown. I would imagine they need to remove ANY minerals and biological stuff from any water used to cool something this large; otherwise they would be tearing the cooling down frequently trying to clean the crud out of it. The same can be seen in someone's water-cooled desktop PC when they ignorantly used plain tap water inside with no additives: the fins of the CPU and GPU blocks will clog up super fast.
@ZonkedCompanion · 2 years ago
The only thing I would add is that there is no salt in a water softener. The salt in a softener system is used to create a brine which then back-washes the negatively charged resin beads inside the cylinders on a regular cycle. It's the resin beads that do all the work.
@user-up4vd5ov8x · 2 years ago
It's a water softener. Commonly used here in the states for reverse osmosis, boiler feedwater, and city water/industrial water
@guiorgy · 2 years ago
@@ZonkedCompanion "There is no salt in a water softener": yes, but also kinda no. It's true that the resin does the "work", but the way it softens the water is through "ion exchange", i.e. calcium ions (the cause of scaling) from the water are exchanged with salt ions (Na, for example) from the resin, so you could say that there is salt in the softener. When you wash the softener with the salt brine, the calcium ions trapped in the resin are removed and replaced with salt ions, i.e. the resin "regenerates".
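For a rough feel of the numbers behind that regeneration cycle, here is a back-of-the-envelope sketch. The resin capacity and water hardness figures are illustrative assumptions (softening resins are commonly in the 1.8-2.0 eq/L ballpark), not measurements from the facility in the video.

```python
# Back-of-the-envelope ion-exchange sizing. Each Ca2+ ion occupies two
# exchange sites (it displaces two Na+), which is why hardness in mmol/L
# is doubled to get the capacity load in eq/L. All figures are examples.

def litres_between_regens(resin_litres: float,
                          capacity_eq_per_l: float,
                          hardness_mmol_ca_per_l: float) -> float:
    """Water volume (L) treatable before the resin bed needs a brine wash."""
    total_sites_eq = resin_litres * capacity_eq_per_l
    # 1 mmol/L of Ca2+ consumes 2 meq/L of resin capacity
    load_eq_per_l = hardness_mmol_ca_per_l * 2 / 1000
    return total_sites_eq / load_eq_per_l

# e.g. 100 L of resin at 1.8 eq/L treating fairly hard water (3 mmol/L Ca2+)
print(round(litres_between_regens(100, 1.8, 3.0)))  # 30000 L per cycle
```

In other words, a modest resin bed treats tens of cubic metres of hard water before it needs the brine wash, which is why the salt tank empties so much more slowly than the water flows.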
@NostalgiaOC · 2 years ago
What an amazing video! Very very cool! Thanks Roman.
@barney9008 · 2 years ago
Truly fascinating. Would love to see more DC tours.
@ChaJ67 · 2 years ago
Data centers are always fun to go through. Some things I have come across which could make for better, more efficient data centers than the one you went through (I am sure you have thought about some of this in the videos you have made, but some of it may be new to you):

1. The 380V DC standard for power delivery. This is ±190V nominal, with a 300V to 425V range to the servers. What this standard does is, after the transformer step-down to 480V AC, do one conversion to 380V DC nominal and have the batteries hooked up in series and parallel to equal 380V DC nominal. From here the power goes over power rails, more efficiently (DC travels over wires more efficiently than AC), straight to the servers. The server power supplies have basically half of the hardware in them, as they skip the AC/DC conversion, which takes a lot of power electronics to do efficiently and cleanly, and instead go straight from 380V DC to 12V DC. This is far more efficient and uses far less hardware than traditional AC power delivery to the server: something like a 30% space saving in data centers that do this (and I spell out "data center" in full so as not to confuse DC with direct current), while getting a big boost in efficiency. You mention the efficiency of data centers at the end of the video, which is an important metric, and this is a way to raise the actual efficiency and even save on costs when this tech is done at scale.

2. Considering how much Germany relies on renewable energy, something else that I think should be done, especially when doing a 380V DC standard data center build, is to swap out the batteries in the battery room with LFP batteries and build for at least 4 hours of storage.
As you may have noticed, the space used in the battery room is not that efficient, and they built around the notion of lead-acid batteries spilling, which you will also see in old telco buildings where everything was built to run on 48V DC, which takes some crazy big bus bars at that voltage level. The idea of having these hours of storage is that you can balance against the green energy grid and may even stop taking power from the grid for a while when electricity prices and demand are highest. At least when there is variable pricing involved, or you make a deal with the power company to help balance the grid, such a setup can save or make a data center a lot of money, as you will already have the conversion hardware and the big power use case; just add more batteries to your battery room. On top of this, those diesel generators don't need to run as much during a power outage and can have a much longer grace period to warm up, saving on the electricity used to keep them warm, since a battery system with hours of storage gives the generators more time; you just make sure to keep a certain minimum level of storage in reserve for generator start-up. LFP batteries in the data center are also a good thing, as modern LFP batteries will last for decades. Also, by the end of the year (2022), Germany will have a large LFP battery manufacturing facility in operation, run by CATL, one of the biggest names in LFP battery manufacturing, so the batteries used in the data center would likely be made in Germany.

3. Getting to where your expertise comes in: liquid cooling the high-powered components in servers with a negative-pressure liquid cooling loop. A number of data centers do this, especially for supercomputers, and it is extremely efficient and, ironically, uses a lot less water than the system you showed.
The reason for this is that air is a very poor carrier of heat, so you have to cool the air to a certain low temperature, much lower than the temperature of the components you are cooling, or else the server hardware in the racks will get too hot, because the delta-T (change in temperature) with air is high. With liquid direct to the hot-running components using water blocks, you can run the coolant at much higher temperatures, as the delta-T is much lower. These much higher temperatures mean that even a hot summer day, where it is say 37C outside, is cool enough not to need any extra cooling measures such as evaporative cooling or air conditioners. Some data centers get down to a PUE of 1.1, where this one you specify as 1.35 and the bar is 2.0. So yeah, efficiency can be better; granted, this one is pretty good, although they use a lot of water, which becomes a problem in places where there is not enough water to go around. This problem is getting worse with global climate change, so this thinking about evaporative cooling unfortunately has to change.

4. A number of data centers are moving to back-of-rack radiators.

5. Also in your wheelhouse: the use of liquid metal thermal transfer compound and high-W/mK thermal pads. The idea being, the more efficiently you transfer the heat to the heat sinks with a smaller delta-T between the die and heat sink, the less you have to work to keep the die at or below its max target temperature. Data centers are built around keeping the components down to a certain target temperature at max load, and when you throw in all of the inefficiencies of low-end thermal transfer compounds, IHSes (Integrated Heat Spreaders), air cooling, and heat build-up as you go through long, high-powered servers, you end up spending a lot of energy and often water to reach that target inlet temperature.
Also, those super noisy server fans use a tonne of energy to spin that fast and get into significant air-friction heating, so if you carry most of the heat away with high-density water blocks, where you don't have to work as hard to move the more heat-dense liquid coolant around, you can use much slower, more efficient fans for the remaining lower-powered air-cooled components. Anything you can do to allow the target temperature to be higher reduces your PUE and/or water consumption.

6. Shifting gears a little: the use of ZFS RAIDZ in the data center. While I have used ZFS RAIDZ level 2 on Solaris in the data center, primarily on mechanical drives with SSD caching drives, ZFS under Linux and FreeBSD has gotten a lot better in recent years and supports TRIM on SSDs. RAID controllers do not support TRIM. If you have ever done SSD RAID arrays, those SSDs take a beating when used with hardware RAID controllers, especially as hardware RAID does not support TRIM and in general uses the SSDs in a very write-intensive fashion. ZFS is set up a lot more intelligently in terms of how much writing it does, or I should say that is one of its optimizations, at a slight cost elsewhere (space usage), and TRIM support is icing on the cake, greatly reducing write amplification. I would venture to say that ZFS is a more reliable and flexible storage system than hardware RAID, based on my experience with it in the data center and my usage of it under Linux and FreeBSD. The thing is, where a data center may go for super expensive 10 DWPD (Drive Writes Per Day) SSDs when using hardware RAID controllers, they may find they can get the exact same job done with much less costly 1 DWPD drives when using ZFS RAIDZ. I mean, the improvement you will see with ZFS RAIDZ level 2 over RAID 6 is huge. As a bit of a side note on RAID levels: with RAID 1 mirrors, sometimes the mirror fails before you can rebuild it, causing data loss, and RAID 10 just amplifies this problem by adding more mirrors to an array.
In a data center there are enough drives that you will see this happen; it is a guarantee when you are dealing with this many drives. RAID 5 also amplifies this problem as you add more drives to the array. RAID 6 is a lot better at not losing your data to random physical drive failures, as you still have redundancy after a single drive failure, so the occasional second drive failure or mere sector loss doesn't kill you; granted, in a data center, as soon as a drive starts losing sectors you replace it right away (at least any good admin would). RAIDZ level 2 is basically ZFS's version of this, except a lot better in terms of data integrity and recovery capabilities. Standard practice where I worked was to have 8-drive vdevs and just add more vdevs to a zpool when increasing storage. In other words, you have an 8-drive array with 2 of the drives for redundancy, and then you just add more of these arrays into a single logical 'drive' / storage pool to get to the desired storage size. If you have dealt with hardware RAID enough, you start finding there are ways to lose data in a more traditional overall setup where there should be ways to do better and not lose that data. ZFS RAIDZ level 2 is a great answer to these issues. There is a lot to explain here, but this is something you can read about; that way this already long post doesn't need to get a lot longer, and you can see why ZFS RAIDZ with direct access to the drives is just better. It is also cheaper, as you don't have to spend money on all of these fancy RAID controllers; you just need simple HBAs (Host Bus Adapters) to access the drives.

7. I was a bit surprised that, with all of that fiber you saw, there weren't any specialized high-speed fiber connections such as InfiniBand. (Maybe you saw InfiniBand but just didn't know what it was?)
I suppose this is a thing when you go to a bank's data center: they are going to be a bit more conservative in how they set things up than, say, a scientific supercomputing center, and their hardware suppliers are going to be a bit more traditional in their offerings, so you just won't see some of these ideas that make a data center even more efficient than the setup you saw. The most radical stuff tends to happen with hyperscale data centers. It is just that these are even harder to get into, as the operators tend to be a bit more secretive about how they do things.
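The 8-drive RAIDZ2 vdev layout mentioned in point 6 makes for an easy capacity estimate. A minimal sketch, assuming example drive sizes and vdev counts, and ignoring ZFS metadata and slop-space overhead:

```python
# Usable space per RAIDZ vdev is (width - parity) drives, and a zpool's
# capacity sums its vdevs. Drive size and vdev count below are made-up
# examples, not figures from the data center in the video.

def zpool_usable_tb(vdevs: int, width: int, parity: int, drive_tb: float) -> float:
    """Approximate usable capacity of a zpool built from identical RAIDZ vdevs."""
    return vdevs * (width - parity) * drive_tb

# 4 vdevs of 8 x 8 TB drives in RAIDZ2 (2 parity drives per vdev)
print(zpool_usable_tb(4, 8, 2, 8.0))  # 192.0 TB usable from 256 TB raw
```

The nice property of the "add more 8-wide vdevs" practice is that redundancy stays fixed at 2 drives per vdev as the pool grows, instead of being diluted the way a single ever-wider RAID 5/6 array dilutes it.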
@LtdJorge · 1 year ago
As much as I love ZFS, it's not an FS made for scale. If you want to scale ZFS, you have to rely on a clustered FS like Gluster, or Ceph on top of it. The DELL EMC machines you saw are NOT using RAID controllers; they either use their own specific cards that do everything on ASICs, or they use software to control it. Also, those are NVMe SSDs; the amount of memory bandwidth needed to support them is massive. Those machines are probably using some kind of DPU that connects the fiber and the storage directly, without even going through the CPU.
@prashanthb6521 · 4 days ago
380V DC power lines and ZFS/Ceph are my favourite points/expectations for future data centers.
@stanbrow · 2 years ago
Really enjoyed this. Thank you so much, and I hope you gave the owners a huge thank you, getting this level of access is pretty much unheard of.
@pamdemonia · 2 years ago
As an electrician doing a lot of commercial renovations and new construction in San Francisco, I've seen a lot of much smaller server rooms, but this is next level! Very impressed with the electrical work (power in particular) , btw. Very clean and neat. Thanks for the absolutely impressive tour! And thanks to the company for giving you such access.
@vladmihai306 · 2 years ago
German style
@fg-zm2yu · 2 years ago
This video is great. I have been sharing it with my colleagues who need to go to power plant sites; there are "mini data rooms" there with the same cooling, fire detection and energy backup systems. Thank you for the detailed tour!
@kwisin1337 · 2 years ago
Love this kind of content. Please try to find more companies that will showcase their hard work.
@johngermain5146 · 2 years ago
I used to do maintenance testing on data center power systems. We had been using bigger and bigger batteries and had similar issues with their hazardous properties. Now, a 1.2 MW diesel generator (Caterpillar, Cummins, Generac... like the one you showed) provides a second power source which, when used with a transfer switch, allows the use of smaller and smaller batteries that provide power only long enough to start the generator. After the loss of mains power, with the generator started, the transfer switch switches over to the generator. The data center power is derived from UPS systems like the ones you showed, whose inverter is always running off the batteries, which themselves are charged from either the mains or the generator.
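The sizing logic of that arrangement (the batteries only have to bridge the gap until the generator is up and the transfer switch flips) can be sketched as a quick calculation. All figures here are illustrative assumptions, not numbers from the video:

```python
# Rough sizing of a "ride-through" battery bank: it only carries the load
# from mains failure until the generator starts and the transfer switch
# changes over. Load, bridge time and efficiency below are example values.

def bridge_energy_kwh(load_kw: float, bridge_seconds: float,
                      inverter_efficiency: float = 0.95) -> float:
    """Battery energy needed to hold the load during generator start-up."""
    return load_kw * (bridge_seconds / 3600) / inverter_efficiency

# 1.2 MW of load, 60 s allowed for start + transfer, 95% efficient inverter
print(round(bridge_energy_kwh(1200, 60), 1))  # ~21.1 kWh, before design margin
```

Compare that with roughly 4,800 kWh to carry the same load for the 4 hours of storage proposed in an earlier comment; it shows why "start the generator" batteries can be so much smaller than "ride out the outage" batteries.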
@psychosis7325 · 2 years ago
That fan spin up 😮 That SSD storage 😅 LTTs $1m unboxing looks quaint all of a sudden.
@AtanasPaunoff · 2 years ago
Haha, that's what I thought too :) Also, I want to see Linus's reaction... he practically had an orgasm when he saw something like 100GB/s, so what about 1.5TB/s :D
@TylerBrigham · 2 years ago
Wow, what a high-end facility. These guys know what they are doing, and to think this started in the food industry.
@kevinheneghan9259 · 2 years ago
I have 14 years of data center experience working as an operating engineer taking care of equipment. Great video and explanation of everything. I really enjoy seeing different data centers.
@andydataguy · 2 years ago
Amazing video!! So grateful for the gents who allowed you to tour and record the facility 🙌🏾🙏🏾
@metallurgico · 2 years ago
Another amazing video. Nice to get this kind of insider perspective!
@jamesdk5417 · 2 years ago
It amazed me how little the infrastructure in a new data centre has changed from one I worked in over 20 years ago in Australia.
@Marin3r101 · 2 years ago
Hahahahahaha australia.... lmao. You are funny.
@BravoCharleses · 2 years ago
Computer technology has moved along at a blistering pace, but HVAC has not changed all that much.
@jamesdk5417 · 2 years ago
@@Marin3r101 I am so sorry your meds are not working. Best of luck for the future.
@adityadivine9750 · 2 years ago
@@Marin3r101 I probably didn't get the joke. I'm Indian, though. We have some of the biggest data centres in the world for a population of 1.4 billion.
@ohkay8939 · 2 years ago
That was really really cool. Thank you for sharing this. I worked at IBM back in the 90s, and we had a comparatively miniature tape robot connected to the mainframes. Insane to think they're storing terabytes of data on tape now. I'd love to know how they made that data relatively quickly accessible - it wasn't fun with 100MB.
@cederian · 2 years ago
LTO-12 will be up to 144TB! Insanely cost-effective.
@charliestevenson1029 · 2 years ago
I've worked with tape since the 1970s. Tape technology has leapt ahead of disk in terms of storage density. Fuji demonstrated the practicality of a 185TB LTO tape package back in 2015. If you think of the surface area you have to write on in 2000 ft of 1/2" tape, compared to the platters of a disk, you get the idea.
@ohkay8939 · 2 years ago
@@charliestevenson1029 I get the storage density thing, what I'm curious about is the latency accessing the specific data you want. Restores of particular files like I said, were not fun on tapes holding much less data than these do just because of the time it would take the tape to get to where it needed to be.
@mycosys · 2 years ago
@@ohkay8939 It's called cold storage for a reason. It's mainly intended for the worst case. It's even on a different site. Speeds have of course improved with data density, but thousands of feet of tape is still thousands of feet of tape; there's only so fast you can move it. If you buy even fairly small cold storage these days, the restore times are still 4+ hours. Why would it need to be quickly accessible when they have layers of SSD fabric?
@charliestevenson1029 · 2 years ago
@@mycosys It's called HSM (Hierarchical Storage Management), where you have different layers of access by the speed required. Spinning disks are very expensive to buy and run, so you don't keep rarely accessed data 'online', but you might keep a stub: just enough so the end user can start accessing the file quickly. The robot kicks in to get the tape with the rest of the data. Since 1985 most tape drives have recorded in a serpentine fashion, so it's not a sequential access to the bit of data you want; it's a combination of horizontal and vertical movement. LTO8, for example, records 5160 tracks on 1/2" tape. Worst case is a seek to the end of the physical tape. Data is (losslessly) compressed, with throughput in excess of 240MB/sec. Not all tape applications are for HSM; most are for offline backup security, often with remote robot infrastructure linked by fibre. Check out IBM's technical tape publications. The problem is, so many people think 'tape' is old and slow. The fact is, all large-scale data centres use it; they have to. If you see inside any Google data centre, sure, you see racks and racks of servers and disk, but you also see robotic tape libraries. Where I used to work, we had petabytes and petabytes of data on nearline tape (it was seismic processing); it didn't make economic sense to have everything on spinning disk.
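A quick calculation shows why streaming throughput and access latency give such different impressions of tape. The 240MB/s figure comes from the comment above; the 12TB cartridge capacity is an assumed round number in the LTO-8 ballpark, not a quoted spec:

```python
# Tape feels fast when streaming and slow when seeking: a full end-to-end
# read takes hours, and locating one file mid-reel still costs a mechanical
# wind. Capacity below is an assumed example, throughput is the cited figure.

def full_read_hours(capacity_tb: float, mb_per_s: float) -> float:
    """Hours to stream an entire cartridge end to end at the rated speed."""
    return capacity_tb * 1e6 / mb_per_s / 3600

print(round(full_read_hours(12, 240), 1))  # ~13.9 hours for a full cartridge
```

That is why HSM keeps hot data on disk or flash and only sends the robot for the cold bulk: per byte streamed, tape is competitive, but per random access it is orders of magnitude behind.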
@jacobreuter · 2 years ago
This is amazing! Can't believe I'm just now finding this channel. You deserve a few hundred thousand more subscribers IMO
@abdiazizabdihasan5533 · 1 year ago
Just WOW. Thanks for taking us on this great tour. I appreciate the effort you put into this video. Really enjoyed it. Very informative, too.
@ChEd1980 · 2 years ago
Very cool to see inside this DC. The company I work for has equipment in two datacentres, and as part of the DC team I get to visit often. I'm impressed with some of the stuff they have going on here that is not seen in a regular DC, such as the oxygen reduction; I didn't know that was a thing either!
@Salty4eva · 2 years ago
Awesome video. I miss my data center days working for EMC and this brought back lots of memories.
@darkcnight · 2 years ago
This video was very interesting. Even though I know nothing about data centers, you explained everything in a way I can understand. Thank you.
@snarfsnarf5824 · 2 years ago
I always appreciate the effort you put into publishing your videos on both your main and your EN channel, but this video goes above and beyond. Doing the full tour once must have been an ordeal to organize, but convincing them to let you go over everything twice really takes your content that extra mile.
@der8auer-en · 2 years ago
😬 thanks. Yes was indeed a lot of effort 😁
@ioanripea9057 · 2 years ago
Awesome tour, and an impressive amount of information shared! I didn't even imagine such complex and effective redundant cooling existed. German infrastructure tech at its peak, not to speak of the server side! It's at the level of bank/telecom data centers in terms of redundancy/safety. I wonder if they have geographic redundancy as well, e.g. synced across the country. I will definitely look differently at the Dr Oetker pudding in the supermarket after this :)
@ravenfeeder1892 · 2 years ago
Great Video! I could only wish that the DC's I've been in were that well equipped and organised.
@anthonyc417 · 2 years ago
This facility is insane. Very cool content Roman.
@Wimpzilla · 2 years ago
Nice and educative, thank you and thank you to the IT data center dudes.
@nicknorthcutt7680 · 1 year ago
Wow, I am impressed at how professional and advanced everything is inside this data center.
@tarfeef_4268 · 2 years ago
Okay this is awesome. I absolutely love this kind of content, the engineering challenges and super cool tech in high end datacentres is just so cool
@wiebowesterhof · 2 years ago
That's some amazing setup. Appreciate the detailed video; this is the type of stuff you just can't get access to unless you're in high-end hosting or work at a place like that. The redundancy in every aspect is mildly out of my $ range, but it's very, very impressive how they managed to do this with the stated power efficiency.
@truculenttabasco 2 years ago
These videos are amazing Roman, great work. Thanks for taking the time to share all this.
@nickolp-it7bo 2 years ago
Incredible video and, as always, narrated with precision. Mind blown by what goes into these places. I'm wondering what the insurance figure to rebuild would be?
@KenMcAllister_NC 2 years ago
Cool video man, I appreciate that you took the time to run through the infrastructure side before hitting the tech. Most people don't realize how much engineering goes into housing these things. I work in the hyperscale arena, but those smaller enterprise/hosting sites will always have a special place in my heart!
@joonasrissanen4314 2 years ago
Super interesting and detailed tour! Thanks :)
@BaronsDeKalb 2 years ago
That was the cleanest, best-laid-out DCIM/SCIF I have ever seen. Great content :)
@BikerBearMTB 1 year ago
Wow, thank you! It was amazing to see such an immense, complex piece of kit. An absolute privilege to see and understand it through this video.
@YTHandlesWereAMistake 2 years ago
This is amazing for a hardware/engineering nerd. One thing, if possible: a 3D model of how it's laid out (even approximately), or perhaps a layered plan of the building, would be extremely interesting to see as well, so the viewer can picture it as a building and not as separate rooms.
@suntzu1409 2 years ago
It would take this entire datacenter to make that 3D model, JFC
@Thebadbeaver9 2 years ago
@@suntzu1409 I can scan my entire house, have it processed and done in 5 minutes. ON MY IPHONE. What are you talking about
@suntzu1409 2 years ago
@@Thebadbeaver9 It was a joke
@Thebadbeaver9 2 years ago
@@suntzu1409 because people usually end their jokes with "jesus fucking christ" 🤦 JFC
@suntzu1409 2 years ago
@@Thebadbeaver9 "Jesus fucking christ" What the, uhhhh, fuck did you just bring upon this cursed land
@zekefloyd3918 2 years ago
This is amazing content! This DC is very impressive compared to some I have seen in Australia.
@mrbones666 2 years ago
Mind blowing stuff. Thanks guys.
@Sir_Uncle_Ned 2 years ago
I've been to a data centre here in Australia and it's a similar story with their infrastructure; redundancy is the key word. They didn't just heat the generators, though: they run each generator as a grid-driven motor to keep the engine spinning, so if it's needed, all it takes to start is opening the fuel valve, and the motor automatically switches back to being a generator.
@bjornroesbeke 2 years ago
I'd be interested to see how they've implemented that electrically. It would probably shave (just) a couple of seconds off the time the UPSes need to keep the load up, whilst using heaps of power to keep that motor running. There's little to no cost saving on UPSes, because the two nets would still need to be switched in and out, and that doesn't happen in less than a couple of seconds.
@lynxg4641 2 years ago
Should have commented halfway through the video when my brain was still functioning properly :-O Roman, I don't know what to say. First of all, it's amazing how up-front and honest you are, telling everyone how the videos will be done. When I heard you say that the second video would go into more detail on the actual servers, I thought, "Well, that'll be the one to watch," but then I started watching this and my jaw just dropped. Honestly, I think you should re-edit this and split it into 3 or 4 parts, because my brain was already melting from the infrastructure, and then you got into the servers and it completely melted. This is so, so insane.
@der8auer-en 2 years ago
Haha, thanks 😁 I first thought about splitting it up, but when I watched it back it didn't feel like an hour, so I left it as one part 😁
@sirmonkey1985 2 years ago
@@der8auer-en Thank you for not splitting it up, but it was also a smart move transitioning to the part where you talked about the redundancy; it gave us a few minutes to actually decompress all the information before diving back in.
@lynxg4641 2 years ago
Man, I had to go and just lie down, cover my eyes and let my brain try to come to terms with all that, then try to stop visualizing it and comprehending it. That one server uses more power in half a day than I do in a month for my entire small house :-O
@emilantonio007 2 years ago
This is a great video. I was surprised and impressed by all the technology behind running a data center. And the way you explained everything was fantastic. Thank you.
@cannesahs 2 years ago
Thank you for making the video. I didn't expect to see mainframes there.
@HakanCezayirli 2 years ago
Wow. This is the best video on your channel. Very impressed. Best regards from Turkey.
@jackskywalker233 2 years ago
Amazing piece of engineering. There's a single flaw in the entire datacenter: the Emergency Stop sign on the OxyReduct is in English 😄 Great video!
@Marcel1984nl 2 years ago
This is really what a datacenter should be: everything (mirrored) as a complete (backup) installation/system. Good video, btw; you put a lot of effort into explaining almost everything about the datacenter and its installation.
@dr.krunkenstein5466 2 years ago
I'm someone who has lived in enterprise data centers for almost twenty years now, and I've spent nearly all of that time with enterprise storage arrays like the ones in the video. That PowerMax 8000 is a monster capable of pushing 14M I/Os per second (assuming all reads from cache, so a hero number). While it is an active/active array where all ports can service any I/O, it can also be paired active/active with another array at metro distance, so an entire array could fail without host disruption. Isilon is no joke either. This data center is very tidy and well put together. I appreciate the in-depth tour of a place most people don't get to see.
@magicmulder 1 year ago
44:00 I got myself one of those blue LED front panels on eBay and put it on my mid-range server rack - looks gorgeous. :)
@NarekAvetisyan 2 years ago
This was absolutely fascinating! My hat's off to the people who built this.
@rekleif 2 years ago
This is the coolest video you have made to date; nothing else comes close, imho. Wow, just wow. Thank you for this. Seeing you geek out like that was awesome. I bet this was one of the coolest things you have done in a while, Roman? Love from Norway.
@der8auer-en 2 years ago
Yes I also enjoyed this a lot :D
@Nobe_Oddy 2 years ago
I've seen a few data centers in videos before, including the last one that you showed us (which I thought was HUGE!!), but this place is TOP NOTCH!!!! I'm gonna have to say it TOPS the data center that 'Serve the Home' showed us in Arizona.... There were some differences, but OEDIV knocks it OUT OF THE PARK!!!! The only thing I saw in the AZ center that I thought was better than this one was the security systems... BUT I'm thinking that OEDIV didn't want to put theirs on video, which makes complete sense. No matter, this place was AMAZING!!!! I am SO HAPPY that you shared this with us!! I just LOVE your channel and the AMAZING CONTENT you share with us!!! THANK YOU DER8AUER!!!! I CAN'T WAIT for the next part of this journey!!! SEE YOU THERE!!! :D
@Adude1283 2 years ago
Fascinating! In the same lane as the big IBM Power10 system, I would recommend checking out an HPE Superdome Flex system if you ever have the chance. A very neat ccNUMA x86 Intel system that scales to 32 sockets (1792 CPU cores with HT enabled) and 48TB of memory. Its main market is SAP HANA, like the Power10, but it also has decent room for I/O expansion and so sees use in the HPC space as well. Thanks for the tour!
@Moodieblue 2 years ago
I loved watching this vid - so amazing
@borexola 2 years ago
Very Enlightening. Thank you :)
@P34chFuzz 2 years ago
Thank you for producing these in both English and German!
@TheBoltcranck 2 years ago
Nicely efficient building; awesome video.
@scottcol23 2 years ago
WOW, what an amazing video. This place is on the cutting edge of tech, which I love seeing. It's really rare to get to see inside a center like this; thank you for taking the time to show us around. All of those IBM Power E1080 nodes! Each costs about $335K USD, and it looks like there were 4 nodes in just that one rack. A 256GB DDR4 memory card costs $10K, so the 16TB option in those servers costs around $640K per node. So at the very minimum, each rack with 4 nodes costs about $3.9 million USD... I work in a data center; while not quite at this level, we do have the HPE Flex 280s.
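The rack-cost estimate in the comment above can be sanity-checked in a few lines. Note that every price here is that commenter's rough estimate, not official IBM list pricing, and the node count is just what was visible in the video:

```python
# Back-of-envelope rack cost from the figures quoted in the comment above.
# All prices are the commenter's rough estimates, not official IBM pricing.
NODE_BASE_USD = 335_000        # estimated cost per IBM Power E1080 node
CARD_USD = 10_000              # estimated cost per 256 GB DDR4 memory card
CARD_GB = 256
MEM_PER_NODE_GB = 16 * 1024    # the 16 TB memory option
NODES_PER_RACK = 4             # nodes visible in the rack in the video

cards_per_node = MEM_PER_NODE_GB // CARD_GB            # 64 cards per node
memory_per_node_usd = cards_per_node * CARD_USD        # memory cost per node
rack_total_usd = NODES_PER_RACK * (NODE_BASE_USD + memory_per_node_usd)

print(cards_per_node, memory_per_node_usd, rack_total_usd)  # 64 640000 3900000
```

At these assumed prices, memory alone is $640K per node, putting the four-node rack at roughly $3.9M before storage, networking, or software licensing.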
@ryanpaaz 2 years ago
Interesting video. Even more interesting is the timing: Linus just did a tour of IBM in NY and was able to take apart a mainframe which looked similar to the Power10 box. Is IBM on a PR campaign at the moment?
@der8auer-en 2 years ago
That was a coincidence. My video was not coordinated with IBM. I don't even have a direct IBM contact.
@poldelepel 2 years ago
The Internet knows Dr Oetker from their pizza with fishsticks on it!
@yonson_racing 2 years ago
Wow, awesome content, thanks!!!
@RaaynML 2 years ago
Thank you so much for going through this whole datacenter, this is so interesting
@NeonMinnen 2 years ago
I don't usually comment on videos, but this was amazing. Thank you for this great content. Seeing those IBM Power E's got me hyped.
@Gavisama 2 years ago
Thanks a lot for this amazing video. I remember those tape backups from when I worked at an ISP's datacenter back in 2004; pretty sure the capacity 18 years ago was MUCH lower, tho 🤣
@samjones4327 2 years ago
Thank you very much for this video tour! You're right that the scale of computing and storage capacity run in this datacenter is incredible! I hope you keep these videos coming, and cheers to you!
@Morne_Smith 2 years ago
Impressive... It makes the data centers here in South Africa pale in comparison. I used to work in DCs in South Africa... and back then the tech was impressive, but this is just bonkers! Great video!!
@aanset1 2 years ago
So Cool, So Clean, and very organized DC...
@nlhans1990 2 years ago
Amazing. This looks thoroughly well designed, and so clean. Virtually everything has been thought of. Clearly targeting critical customers, not the average Joe who wants to host his WordPress website on a $10/mo VPS.
@hstrinzel 2 years ago
Very impressive indeed! Well done, Dr. Oetker and Roman, thank you for the video!
@o0Hotiron0o 2 years ago
What a tour, great work. Thank you.
@TylerBrigham 2 years ago
This is super cool. I'm a fan of this sort of content 👌👍
@MayaYa 2 years ago
Every time you step out of the datacenter, it suddenly feels so peaceful.
@ethanshenfeld8141 2 years ago
I took a tour of a Big Ten university data center here in America, and while a lot of the things they had were similar, it was nowhere near as robust or impressive as this. This is some serious bleeding-edge tech and it was really cool to see a tour. I would like to buy a shirt or baseball cap or something to support more of this content. Maybe try to work with LTT to produce merch?
@jairunet 2 years ago
Awesome work, definitely put together with a high availability mentality. Thank you and keep it up.