Put your VHDX on a ReFS array and format that VHDX using NTFS; then you can do all the Data Deduplication you want (within the VHD). Your VM isn't going to have access to the physical drive anyway, so why sweat the ReFS restriction? This way you get the best of both worlds.
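A minimal sketch of the setup this comment describes, driven from Python via `subprocess`. It assumes a Windows Server host with the Hyper-V and Data Deduplication features installed; the cmdlets (New-VHD, Mount-VHD, Enable-DedupVolume) are the standard Windows Server ones, but the paths, size, and drive letters are hypothetical:

```python
# Sketch: carve an NTFS-formatted VHDX out of a ReFS volume so that Windows
# Data Deduplication (which is NTFS-only) can run against the virtual disk.
# Assumes a Windows Server host with the Hyper-V and FS-Data-Deduplication
# features installed; 'R:' (the ReFS array) and 'V:' are hypothetical.
import subprocess

def ps(command: str) -> None:
    """Run one PowerShell command on the host, raising on failure."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# 1. Create a dynamic VHDX on the ReFS volume.
ps(r"New-VHD -Path 'R:\VMs\data.vhdx' -SizeBytes 500GB -Dynamic")

# 2. Mount it on the host just long enough to initialize and format it NTFS.
ps(r"Mount-VHD -Path 'R:\VMs\data.vhdx' -Passthru | Get-Disk | "
   r"Initialize-Disk -Passthru | "
   r"New-Partition -DriveLetter V -UseMaximumSize | "
   r"Format-Volume -FileSystem NTFS -Confirm:$false")

# 3. Enable dedup on the NTFS volume. Inside a guest you'd run the same
#    cmdlet against the guest's drive letter instead of doing it host-side.
ps("Enable-DedupVolume -Volume 'V:' -UsageType Default")
ps(r"Dismount-VHD -Path 'R:\VMs\data.vhdx'")
```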
@hdtvkeith1604 (4 years ago)
While this was Veeam-focused, it was one of the best ReFS explanations I have seen. Great video.
@realkingofantarctica (4 years ago)
Senjougahara best girl
@JasonTurner (5 years ago)
Wow! I'm not sure how I've missed this functionality of ReFS! Can't wait to test this out in the lab!
@WoziBeatz (5 years ago)
This is amazing
@ImNotADeeJay (5 years ago)
I am a Storage Admin, and I am still alive four years later.
@matingarastudios (5 years ago)
Brilliant stuff
@allabakashshaik1409 (5 years ago)
Are there any issues when migrating VMs from a Hyper-V server cluster to a Nutanix cluster within the same failover cluster? Are there any hardware specs that become an obstacle during migration?
@gautamssinha260 (6 years ago)
Very nice presentation.
@lateralival (6 years ago)
This is like a comedy night for nerds. Fantastic speaking skills!
@lateralival (6 years ago)
"A proper back-breaker, if you ever had to move it!" haha, that's brilliant!
@saravanan-subramanian (6 years ago)
Love this guy's passion for teaching and sharing knowledge with others. His Pluralsight courses are awesome as well.
@raghub5856 (6 years ago)
Nutanix does have a storage-only node now.
@venkatreddykummita5251 (6 years ago)
Nice presentation
@farooquem100 (7 years ago)
The video focuses more on Nikita than on the presentation... :)
@JeevanBManoj (7 years ago)
Very nice presentation. I was looking for some insight into how the Nutanix HCI systems work.
@billcouper1289 (7 years ago)
15:36 'cloud' DING!
@RealAgriTech (7 years ago)
Awesome presentation of the Nutanix product. A nice product for a data center solution.
@anujprivate (7 years ago)
A well-designed SAN can beat Nutanix any day...
@patelkrunal311 (6 years ago)
You obviously don't get it. You wrote your comment a year ago; what's your answer now?
@Solidfire (7 years ago)
NDVP for the win!
@SPeri6 (7 years ago)
Nice talk, impressive!
@wysefavor (7 years ago)
The storage admin is resurrected as a cloud admin!
@adela.al-khateeb7209 (8 years ago)
Nigel is a brilliant evangelist.
@fear_less_2020 (8 years ago)
Brilliant presentation, so young yet so professional! All the best, Nikita, I look up to you.
@windocks-com (8 years ago)
For those interested in Windows and SQL Server containers, WinDocks supports SnapCenter, SnapManager, and ONTAP snapshots and clones: www.windocks.com
@TheSchwiz (8 years ago)
Where can we get the POC Cookbook mentioned in this preso?
@MrReidster (8 years ago)
I love the WhatMatrix idea, but what about new-generation storage solutions, like VM-aware storage?
@norgietriestovlog4692 (8 years ago)
This is really helpful. Nice.
@thecloudtherapist (8 years ago)
It's a bit of a shame that (based on the architecture diagram in this presentation), if the CVM on a node fails, the entire node (which is otherwise still working perfectly fine) is knocked out of the cluster. It would've been nice if there were a way to have two CVMs per node. Furthermore, are the VMs replicated to more than one node (as well as their data), or is it just the reads/writes of data that are replicated?
@randomnickname721 (6 years ago)
If the CVM fails, all I/O from the hypervisor is automatically redirected to the CVM on another node. This mechanism is used for HA as well as during CVM upgrades.
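A toy model of the redirect logic this reply describes: the hypervisor sends storage I/O to its local CVM and fails over to a healthy peer when the local one is down. The class, names, and health check are illustrative only, not Nutanix APIs:

```python
# Toy model of CVM failover: prefer the local CVM (data locality), fall
# back to any healthy peer CVM when the local one is unavailable.
class Node:
    def __init__(self, name: str, cvm_healthy: bool = True):
        self.name = name
        self.cvm_healthy = cvm_healthy

def route_io(local: Node, peers: list[Node]) -> Node:
    """Pick the CVM that should service this node's storage I/O."""
    if local.cvm_healthy:
        return local              # normal path: local CVM serves the I/O
    for peer in peers:
        if peer.cvm_healthy:
            return peer           # failover: redirect to a healthy peer CVM
    raise RuntimeError("no healthy CVM in the cluster")

nodes = [Node("node-a", cvm_healthy=False), Node("node-b"), Node("node-c")]
print(route_io(nodes[0], nodes[1:]).name)  # -> node-b
```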
@kristofdespiegeleer8168 (8 years ago)
Fantastic talk, I agree with almost all of it (-: , but some personal sentiments:

I do not believe the future is all-flash (at least not with current technology). I believe in a combination of flash (low-latency, random-access-capable media) and larger-capacity media that are cheaper but also slower. Smarter software is required, though.

I believe the Kinetic disk idea is not necessarily awful. For large scale-out storage systems, a block-device interface to a disk is potentially too low-level an access method; filesystems are used on top, and those don't scale well for certain workloads (e.g., try to put 100 million small files on a disk and the performance will be awful). Having the disk take care of this complexity and write the data in a more organized way (like a filesystem embedded in the disk) is not a bad idea. The current implementation is not good enough, though: it's way too slow, the REST methods are too limited, and, more importantly, there is no good software to use it well. The counterargument to my statement above is to put the disks in large disk arrays and do all of this in software, which indeed is possible, but a lot of CPU power is required, and much better software than what is available today needs to be created. Pushing some of that work to the disk using embedded CPUs, why not, but I do agree we are not there yet today. Lots of disks in storage chassis is the better option today.

Frankly, object storage is great but in general way too limited and SLOW. It's a very hard problem to map a filesystem (POSIX) onto an object store; it's very hard to maintain consistency while providing scalability and performance. Many have tried; I haven't seen the right implementation yet.

Imagine a new type of storage system where the software takes care of performance (caching), scale-out, and redundancy (no more snapshots or replication please, there are better ways to get to the same result more effectively) using different storage media types: the first access layer is an NVMe-interconnected fabric (low latency, super fast); the second layer is thousands of slow capacity disks that are reliable and capable of storing millions of objects per disk. This would improve performance and lower cost. The software has to be smart enough to do this properly, though, and that very few companies do. Forward-looking error-correcting codes could be used on both layers to get to the required reliability.

What I believe is wrong:
- QoS to guarantee good-enough performance: if there are enough IOPS in a system, then QoS is only relevant for very specific workloads; a storage system should simply be fast enough
- tiering (HSM) of storage over multiple systems

I've seen a petabyte-size storage system that can do 2,000,000 real IOPS (not fake, caching-only workloads) in 12U using the approach described above, with reliability better than 3 copies, using nothing but standard off-the-shelf components (not all-flash, though, because that would be too expensive and less dense).

My 5 cents.
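A minimal sketch, purely illustrative, of the two-tier write path the comment above argues for: land writes in a small, fast NVMe layer and destage the coldest data to a large, cheap capacity layer. The tier size and destage policy are made up:

```python
# Toy two-tier store: a small, fast NVMe layer in front of a large, slow
# capacity layer. Writes land in the fast tier; the oldest entries are
# destaged once the fast tier exceeds its (made-up) capacity.
from collections import OrderedDict

class TwoTierStore:
    def __init__(self, fast_capacity: int = 4):
        self.fast = OrderedDict()        # NVMe layer: low latency, small
        self.slow = {}                   # capacity layer: cheap, large, slow
        self.fast_capacity = fast_capacity

    def write(self, key: str, value: bytes) -> None:
        self.fast[key] = value
        self.fast.move_to_end(key)       # mark as most recently used
        if len(self.fast) > self.fast_capacity:
            old_key, old_val = self.fast.popitem(last=False)  # destage oldest
            self.slow[old_key] = old_val

    def read(self, key: str) -> bytes:
        if key in self.fast:             # hot hit: served from the NVMe layer
            return self.fast[key]
        value = self.slow[key]           # cold miss: fetch from capacity layer
        self.write(key, value)           # ...and promote it to the fast tier
        return value
```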
@jayaujay (8 years ago)
Is there a copy of the PowerPoint somewhere?
@triendinh8141 (8 years ago)
His presentation is completely baloney and not realistic. He should go back to school to be retrained and take a course in public speaking. It's a piece of scrap.
@UberVike (3 years ago)
Actually, most of what he said is accurate. Sounds to me like you probably put critical backups on a Buffalo NAS.
@larrysmith4486 (8 years ago)
Great job, man.
@rayonstorage9548 (8 years ago)
Thanks for the comment. 3DX is still too new to see it in any storage product over the next 12 months. At Micron's storage event this month, the main person in charge of the technology held up a wafer of 3DX, but no parts are available yet as far as I know. So it's late next year at the earliest before we see anything from this, and likely to be 2 years or more. However, non-volatile memory is starting to make some inroads, with Intel and Microsoft providing hardware and software support for it in their latest boards and software. So we can see the ecosystem starting to change to support technology like 3DX; it's just that getting it from the lab, to vendors, and into products is a long road for a radical new technology... Ray
@itsmaname3384 (8 years ago)
When will we see 3D XPoint in our local stores this year? I mean, they have been talking about it since last year, and I have been craving this technology, as I have been thinking of buying a high-end computer or laptop for myself. But I don't want to purchase something, then see something kickass get released and realize my money got wasted.
@ryereelfishn (9 years ago)
Nice work CW
@JohnMartinIT (9 years ago)
**NetApp employee, opinions are my own**

Nice work, I agree with pretty much everything you've said, especially around the misinformation that is spread around fairly liberally in the name of making a sale. Some of the proof points and data you suggested asking for would be annoying to arrange, but if all the vendors were held to the same standard, it would improve the industry immensely. I know I've done some death-by-PPT over the last couple of decades, partly, or perhaps primarily, because that's what's expected. It would be great if customers started pushing back, vocally, as it would make a big difference, and frankly I'd LOVE it if a customer had a solid agenda for a vendor meeting that was focused on discussing their needs and the best ways to solve them rather than "what's the roadmap for your widgets".

One minor point of correction, though: while I agree with your overall point about vendor-to-vendor FUD being a waste of time, when NetApp did the comparison of FAS vs. CLARiiON, it was to the then-current model of shipping EMC midrange storage*, and the matchup was remarkably even in terms of hardware specs**. I'm not sure who told you the comparison was to an EMC array that was two generations old; more misinformation, perhaps.

Personally, I primarily used that benchmark result during customer meetings when I'd get the whole "EMC just told me that your performance degrades over time, and they have given me an internal benchmark white paper they've done which proves it". That "benchmark" involved a misconfiguration of the FAS so bad that it bordered on malicious rather than just misunderstanding. Debunking that kind of stuff takes time and expertise, and requires that the customer actually wants to spend the time to understand how to structure a representative storage performance benchmark. The easiest way of addressing it was to show the comparative results of a well-respected benchmark, and, despite its flaws, SPC-1 is still one of the best SAN storage benchmarks out there.

As to why NetApp did it, dare I say it... "they started it first", which kind of shows how childish some of the conversations we allowed ourselves to be dragged into were. It's well past time that all the vendors (and, dare I say it, some customers) behaved with a little more maturity.

Thanks
John

* When the test was submitted in January 2008, the CX3-40 was their current midrange model (about 20 months into its life); the CX4 didn't come out for another 8 months, until August 2008.

** EMC CX3-40: 2x 2.8GHz Intel, 4.0GB RAM. NetApp FAS3040: 2x 2.6GHz AMD, 4.0GB RAM (plus 512K of NVRAM, which isn't part of the cache). Only the back-end interconnects to the shelves were significantly better on the FAS, and that had more to do with NetApp's recommended field deployment (back-end MPHA) to address some interconnect-failure paranoia than with performance.
@AntalVee (9 years ago)
I was there that day, and I'm still wondering what Howard was trying to say or sell. Either I totally missed his message or the whole preso was just a bunch of statements. Please enlighten me!