
Search Results for Xpoint:

Keynote Analysis | Micron Insight 2019


 

>> Announcer: Live from San Francisco, it's theCUBE. Covering Micron Insight 2019. (upbeat music) Brought to you by Micron. >> Hi, everybody, welcome to Pier 27 in San Francisco. My name is Dave Vellante and I'm with my co-host, David Floyer. And you're watching theCUBE, the leader in live tech coverage. This is our coverage of Micron Insight 2019, #microninsight. David, I love this show because, well, of course we're going to talk about Micron and memories and DRAMs and NANDs and all that techy stuff. We're also going to sort of set the tone on this day. It's a really thought leadership day and we talk a lot about AI and Edge and the big mega trends and superpowers, the cloud, mobile, that are really affecting demand and it all starts with data. So, Micron is a company that we're going to talk about and talk about in detail. But what are you seeing, David, as the big trends that are driving demand for bits? >> For bits. Well, let's start with the Edge that you were talking about. The Edge is growing and it's going to grow very, very strongly indeed. It's going to grow with smaller processors: it's the ARM processors at the Edge doing inference processing, capturing the data, and wanting to do that capturing of the data and the processing of that data as close to the origin of that data as possible. So memory and all of the, the NAND is moving out to the Edge itself. And it's going to be lots of smaller processors as opposed to lots of big processors. >> Let me ask you a question. We've been following these markets for many, many years and, of course, when we started in the business it was all mainframe, and that was really what drove the consumption of data, and then the PC changed that. >> David: Took over, yep. >> And then that, you used to count markets. 
We used to do that all the time, and there was much more data going to the laptops and desktops, the Internet began to change that and of course, cloud sort of re-centralized a lot of the spending, and a lot of the buying power. Do you see, is it a pendulum swing again, is it that dramatic? Or do you see it as different? >> Like all big trends, the center still remains. So, the center now is cloud. Still mainframes is part of that cloud. That has to remain, and that is just much more economical for large-scale processing. That's the most economical. However, also the economics of it is that moving data is very expensive. It's very expensive in terms of the effort and it also, when you move data, you lose context. So, if you want the best context, and if you want to do things in real time, you want to process that data in real time as close to where it was produced as possible. So, yes, there will be a very big swing in the amount of processing and the amount of important processing that happens at the Edge. >> So, from the standpoint of things like NAND and flash, Steve Jobs changed everything when they decided to put flash inside of the iPhone. >> Actually not the iPhone. >> In the iPod, actually. >> iPod, yes. >> That drove massive massive, that was the beginning, the dam breaking, and what happened is that volumes went through the roof, cost went down, and that's really when you first predicted way back in the early part of this decade that NAND and flash would affect spinning disc, and it clearly has. Pricing maybe hasn't come down as fast as we thought because of supply constraints. But, nonetheless, it's happening. And now the prices are coming down more. You've seen somewhat of an oversupply in NAND. Prices have come down pretty substantially. And there's elasticity. Ever since we've been following this market, you've seen when prices drop, people buy more. 
At the same time, you saw like Pure Storage last quarter said, well, the prices dropped faster than we thought, it actually hurt our revenue. Because it just happened so fast in the middle of the quarter, that it hurt pricing overall for the subsystems, but nonetheless, that's the trend that we see happening. It feels like there's a new wave or a new step function of consumption going on with regard to flash. What are you seeing? >> Yes, flash was always about performance before, and there were two constraints to flash, in terms of its impact on the whole industry. The first was that the protocols that were used in flash were the old fashioned protocols that were used for HDD. Now, those have improved enormously with NVMe, et cetera, and those have got much, much better. So, that increases the demand for that flash. The usefulness of flash is now much better. And the second is, in terms of, that's high performance, there's high-capacity flash, and now flash is growing in two dimensions. It's growing in the number of layers, but it's growing from SLC to MLC to TLC to QLC in terms of the number of bits that it can pack into it. >> So, those all have cost implications on the cost per bit, obviously? >> Sure. Both of those are reducing the cost per bit, and making it available for different markets. So the capacity market, now as the prices come down, mean that it's going to take a bigger bite into the HDDs. In data center, it's going to become the norm just to have flash only. >> Micron's a little bit late to NVMe, but they're now hopping on board. Actually, you've made the comment to me in previous discussions, that they've actually timed things pretty well. >> Yeah. >> You kind of didn't want to over-rotate to NVMe. I know Pure was first, but Pure's a relatively small part of the marketplace. It seems like now everybody's going to NVMe. 
And basically what this does, as you pointed out, it eliminates a lot of the sort of older, slow, overhead, chatty protocols, and now it's like a bat phone right to the data. What are you seeing in terms of NVMe adoption? Is it now mainstream? >> Yes, we're predicting that in 2019, 50% of the drives will be NVMe drives. That's a very rapid change. >> Let's up-level a little bit. We're talking about all of this geeky stuff down here, but what I'm interested in is why we need this. And the obvious question is there's so much more data now but it's also, AI. We talk a lot about the new innovation sandwich of being data plus AI plus cloud, combine those things together and that's really what's driving innovation. How real is AI? I presume we need all this stuff to be able to support these data-driven workloads, but how real is AI? It feels like it's pretty substantive. When we go to a lot of these shows, you hear about digital transformation and all these buzzwords and the Edge and IoT. 'Course, AI's one of the big buzzwords, but it does really actually feel like a superpower to invoke one of Pat Gelsinger's words. >> Yeah, it is. And AI could only operate if there was all that data available, so it's the availability of that data, because the algorithms and AI go back a long way. There's nothing new in that. But AI has now the availability of processing that data, large amounts of data, which makes it much more powerful. And now you're getting AI in things like a cellphone, the amount of AI that goes into recognizing your face is enormous. And it's now practical, everyday things are being done in AI, and it's going from being a niche to being just everyday use. And its impact long term is profound. It'll do all the jobs that humans do, many of the jobs that humans do, much more efficiently. Driving a car. It'll be better at driving a car than human beings are. >> Yeah, you see AI everywhere, you're right. Ad serving still stinks, but it's getting better. 
Fraud detection's getting much, much better. Email is now finishing my sentences for me. Right, you've noticed that in the last year or so. Basically say, oh, I like that choice, boom, I'll take it. And so as much as we hate autocorrect... And so those are some small examples, but what the industry likes to talk about is how it's changing lives, what it's going to do for healthcare, autonomous vehicles. Those are some of the big-picture items. >> David: Really big things. >> Which really haven't kicked in yet, just in terms of, or have they? In terms of consuming demand, for things like DRAM and NAND? >> It's relatively small at the moment but it has the potential to be very large, obviously. >> Dave: Go ahead, finish your thought. >> Because in the next 10 years we're going to see automated cars, it's going to be in pieces. You're going to have the trucks going first, and then other cars later. >> I know you're fairly sanguine and optimistic about autonomous vehicles, I know there are a lot of skeptics out there that talk about, we don't have enough data and we'll see, but we'll talk more about that. But I want to talk about Micron a little bit. Micron's a company, last year they were a $30 billion company, they got $23 billion in revenue this year so dramatic drop in revenues. And that was really due to the change in the supply/demand dynamic. Now, historically, when these things happen the stocks of these companies would just, you could predict it, you'd say, okay, time to sell, 'cause here comes the over-supply. And then when they hit the bottom, time to buy. Micron's done an amazing job of sort of steadying that. Managing its demand and supply balance. Also, obviously doing share buybacks that help the stock price, but the stock price has held up pretty well. So Micron's now a $23 billion company, last year they threw off $17 billion in free cash flow, this year, 13 billion. 
But still, well over 50% of their revenue's going back to free cash flow, which is quite large. Their market cap's 51 billion, so they're trading at a 2.2X revenue multiple, which is very strong. And they've got a 30% gross margin, right? The PC business, think about that. The DRAM, this is a good business, right? That's a nice business, because they don't have a giant direct sales force, so they don't have that cost, it's all through OEM. It's a fairly efficient business, and they've managed it pretty well. Your thoughts on Micron as a company. >> Yes, they have. They've managed the timing of every new release very well indeed. If you go too early, you over-rotate, then you are struggling to get that out. The costs are higher, and the people who are selling the previous generation are going to do better. But they've always timed it perfectly. >> Yeah, now they're facing some challenges. I talked about the supply/demand imbalance, but they're managing that. China, the tariffs hurt them. Huawei was a big customer. They can't sell to Huawei anymore. China coming after companies like Micron, really going after consumer flash, building fab capacity to begin with, and then eventually China is going to aim at the higher value enterprise. What are you seeing there? >> I agree with you. They've had to rotate because of this problem with the tariffs that have been put on China. So, what's the reaction? They're going to have to invest. And that, long term, is good news for consumers and good news for everybody else, but it's going to be bad news for other people in the business. >> So, a bunch of announcements today. We can't talk about it, 'cause they're not public yet, but you're going to see some SSD stuff coming out. Maybe some acquisitions announced, you might see some other things around 3D XPoint, which is something that we really haven't talked much about but we will, I know your thoughts on that are it's still kind of niche. Remember the HP Memristor, right? 
Which is, nobody talks about that anymore. But now Micron's in a different situation. They'll figure out, okay, where that fits, but it's still a niche in your view because it doesn't have the volume. But we're going to be talking about that stuff. But, again, up-leveling the conversation to some of those big mega trends, those superpower drivers, data, AI, IOT, and the Edge, and some of the things that are really driving change, in not only industry but also our lives. So, David, appreciate the insight. David and I will be here all day today. You're watching theCUBE from Micron Insight from San Francisco. We'll be back with our next guest right after this short break. (upbeat music)
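Floyer's point that flash now grows along two dimensions, stacking more layers and packing more bits per cell (SLC through QLC), can be sketched with a toy model. The cell types are real, but the 64-layer baseline and example layer counts below are illustrative assumptions, not figures from the conversation:

```python
# Toy model of NAND density scaling along the two dimensions discussed
# above: bits per cell and layer count. Baseline numbers are illustrative.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def relative_density(cell_type: str, layers: int, base_layers: int = 64) -> float:
    """Density relative to a 64-layer SLC die with the same footprint."""
    return BITS_PER_CELL[cell_type] * layers / base_layers

# A 96-layer QLC die packs 6x the bits of the 64-layer SLC baseline.
print(relative_density("QLC", 96))  # → 6.0
```

The two factors multiply, which is why cost per bit falls on both axes at once, as discussed in the interview.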

Published Date : Oct 24 2019



Derek Dicker, Micron | Micron Insight'18


 

>> Live from San Francisco, it's theCUBE, covering Micron Insight 2018. Brought to you by Micron. >> Welcome back to the Embarcadero everybody here in the heart of San Francisco. Actually at the bay of San Francisco. Golden Gate Bridge is that way, financial district over there, Nob Hill right up the street. You're watching theCUBE, the leader in live tech coverage. I'm Dave Vellante, this is David Floyer, and we're covering the Micron Insight 2018 event. People are starting to filter in. Any minute now we're going to start the keynotes from the executives. A lot of buzz going on, Derek Dicker is here. He's the corporate vice-president and general manager of the storage business unit emerging activity within Micron, great to see you again. >> Thank you very much for having me. It's a pleasure to be here. >> You're very welcome, yeah, so Micron used to be just a straight memory company. We're hearing, we heard at the investor day in May how you guys are diversifying, finding new use cases, new applications, you run the storage business, and of course David Floyer was one of the first, the first, in my opinion, to predict the demise of the hard disk, spinning disk, and it's a tailwind for you guys, but Derek, take us through your business unit, your role, and let's get into it. >> Sure, that sounds great. I appreciate the opportunity again to be here. The storage business unit within Micron is actually comprised across a couple of product areas. Primarily NAND and NAND components, and then also SSDs, solid state drives. As we like to say, and we've talked a bit more about it since Sanjay's arrival, we have a pretty material focus on accelerating what we call high value solutions. It's a big focus of ours, so not only are we developing the core technology in memory and storage, but we're attempting to build more and more products that add value to our customers in the S-System space. But that's generally the storage business focus. 
Within the company, we have three other business units that focus on compute and networking memory as well as the embedded business unit and then the mobile business unit. >> Talk about some of the big trends that you see, I mean, we've talked about for years, the all-flash data center. We clearly see that in the customers that we work with. Some of the spinning disk guys don't necessarily fully buy into that, but even they have been investing in flash technologies. What are you seeing? >> I tell you, there is no better time, in my opinion, than to be in the memory and storage industry. When you look at what the trends are that are coming out in time. If you go and you stare at how memory and storage has evolved just going back into the 80s or the PC era, a $35 billion average size of the total market. You get into the mobile space, when mobile era started with smart phones, we were looking at a $62 billion-ish, and then in '17 we cleared $120 billion in size of the market, and we actually see a lot of secular trends that are going to continue to take us forward. A couple of things that are particularly noteworthy for us. The first one is the emergence of artificial intelligence, and machine learning, and deep learning. We're going to hear quite a bit about it here at the event. But in terms of a value driver for the consumption of both memory, DRAM, as well as storage, we see it going phenomenally up in content in every server that's purchased out in time. That's one, I think with the evolution of 5G out in time, we're also going to see that smart phone devices are going to end up having more memory to add features like facial recognition we see today, becoming mainstream, multiple cameras, that drives more DRAM content, but then also on top of that, storage is increasing. We're seeing, even today, a terabyte being put into some of the high-end phones, and we know that that's going to waterfall out in time. 
So I think if you look at this combination of what's happening both in the devices, you look at what's happening in the infrastructure, then you couple that with the processing that needs to happen, it's just an awesome time to be affiliated with memory and storage. >> Yeah, well, I've been following this LAN marketplace for the last, almost 10 years isn't it? More than that. And it's just broken through completely in the last two or three years. What are your thoughts about pushing compute closer and closer to that memory, adding to, for example, the SSDs, the capability of doing smart work? It's very very close to where the data is originally going to be placed? >> It's a great area of quite a bit of R&D work that's going on right now, and I actually think I view this as kind of two stages. One is there's the proliferation of solid state, as you suggested, it's been coming over time. I actually see it increasing dramatically as we look forward, and one of the key technologies that I think is going to enable that is QLC. The fact that we're now at a point where we're putting four bits per cell into devices, SSDs are starting to show up, I think that just creates even more opportunity. And I'll talk a little bit about that in just a minute, but I want to answer your direct question as to how that's changing with AnIML. But I think the ability, once solid state is prolific, to be able to architect systems where you can actually have processing take place closer to the media is a very interesting area. It's right with a ton of research going on right now. People are just starting to implement it. I think there's quite a bit of potential sitting behind it. You know, our focus, of course, is we're deploying, and as quickly as we can, on two vectors. One is, how do we proliferate more solid state into the market as an industry, and the second is how do we add value when we build those solid state drives, so I think it's definitely very viable. 
>> Let's talk about the significance of QLC. David, your forecasts early on were very aggressive in terms of pricing declines for flash. We kind of, maybe got caught off, a little bit surprised by the-- >> I think we were caught off by the demand. >> Well the demand, but also the supply constraints kept prices up. >> Yeah. >> Okay so, it didn't actually happen as fast. How does QLC change that, Derek, and what's the significance of it? >> Well, the thing that I think is most exciting for us as Micron is we actually ended up delivering the world's first QLC device. It put a terabit of data on a single die, which was unprecedented, but then in addition to that, what we did was we actually built a solid state drive called the 5210 ION. This is a standard drive. It's the worlds first SSD built on the technology, and by being able to develop a solution early on, it allowed us to go engage with customers and find where the right workloads were where we could add the most value. QLC technology actually is perfectly aligned for super read intensive, very read intensive environments, and if you look at what's happening in the data center, we're actually seeing more and more workloads move into more read intensive workloads, and a good chunk of that is just because there's analytics going on. The data's being collected. It's being housed in on place, but as we've talked about quite a bit here at the event, we want to be able to deliver insight out of that data, which means we're going to be reading it quite a bit, and massaging it, and performing analytics on it. And what we're now seeing is what, in the days of the past, was a four to one read to write ratio, we're seeing as high as 5,000 to one and in some cases a million to one. So we get these heavily read intensive workloads coupled with the technology that's optimized for it. 
It's more power efficient than what rotating media solutions offer in certain workloads, we're starting to see these tremendous values coming out of these early engagements that we're having with customers. >> And does that have implications for longevity, or do you just make an assumption that the read/write ratio is still going to be more write intensive in terms of wear leveling and things like that? How does it change the reliability, if you will, of the technology? >> Actually the beauty is, we're able to deliver an enterprise class SSD with these read/write capabilities that are affiliated with these read intensive solutions, and we can fit within the workloads and the needs that people are talking about. So the drive writes per day that are required in a machine learning infrastructure, we believe we can address with QLC. Same thing with Hadoop style clusters and Ceph clusters. We've actually, as we've gone out and engaged each of our earlier customers, we're able to crank out reference architecture documents that we're now posting to our websites, and we're describing how we can actually leverage this technology to allow us to, in some cases, we'll better optimize where an SSD was used before. But in many, many cases we're actually in the process of displacing hard disk drives. >> So what are the limits of this QLC? How many more bits can we add? How many more layers can we add? >> So, it's actually a great question, David. In terms of what does a roadmap look like. I've been asked in the recent several hours, what the longevity for NAND looks like. And what I'll tell you is this, QLC NAND is just getting its start. What comes after that in terms of bits per cell, I don't think anybody's made any broad claims on. But from a layer stacking perspective, which is kind of the dimension upon which the industry is growing, for the foreseeable future, we see nothing that encumbers us from going substantially higher and higher layer count. 
Which I think is going to be great for our industry because it's going to allow us to deliver more bits in a given device, and hopefully, that'll allow us to get into markets that, historically, we haven't been able to approach. If you think about the demand elasticity dynamics that occur when we start to bring more and more costs down, the number of applications opens up, not unlike the machine learning workloads I just mentioned or Hadoop workloads. We're starting to see more and more thirst and interest for replacing with solid state, just because it's more power efficient, allows for a cost structure that's better, and gives better performance too. 
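The read-to-write ratios Dicker cites, roughly four to one historically and as high as 5,000 to one (or a million to one) for analytics-style workloads, suggest a simple screen for where a read-optimized QLC drive fits. The sketch below is a hedged illustration: the 100:1 threshold is a hypothetical placeholder, not Micron guidance.

```python
# Hedged sketch: screen a workload's read/write mix for suitability on a
# read-optimized QLC drive. The 100:1 threshold is an assumed example.
def reads_per_write(reads: int, writes: int) -> float:
    """Reads issued per write; treats an all-read workload as writes=1."""
    return reads / max(writes, 1)

def suits_qlc(reads: int, writes: int, threshold: float = 100.0) -> bool:
    """True when the mix is read-intensive enough for a QLC tier."""
    return reads_per_write(reads, writes) >= threshold

print(suits_qlc(4, 1))     # classic 4:1 mix → False
print(suits_qlc(5000, 1))  # analytics-style 5000:1 mix → True
```

In practice, endurance screening would also consider drive writes per day and wear leveling, as the longevity exchange above notes, rather than the read ratio alone.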
But after we're able to come to a solution, we put together a reference architecture, and we deploy it broadly. >> We've been trying to squint through 3D Xpoint and understand the right fit. It seems to us that one of the big advantages of flash was it had the, had this behind it. (laughs) It had the consumer volumes, thank you Steve Jobs. It's unclear whether or not 3D Xpoint will have that, maybe have the same, sort of, cost advantages, but the same time, it sounds like there's new and emerging applications. Like I said, we're trying to figure out. Have you guys figured out yet? You're obviously betting big on the technology. Help us understand where the fit is. >> Sure, I think, you know, if I look back in time, just at the storage hierarchy alone, I don't think the memory hierarchy's any different. You have these portions of the market where you build out hard disk drives, and we had DRAM before, and SSDs came along, and people started asking, not unlike several years back when we talked about the early parts. Hey, how big is this going to get? Cost structures may be prohibitive. But as innovation unfurled, the more time and investment got placed into it, we found new workloads, new use cases we were able to drive costs out, and we ended up slotting in solid state drives squarely. I think this is another tier of memory and storage. That's the beauty of the 3D XP technology. There's both memory semantics and storage semantics that are available for use. I think we're still scratching the surface on the early days, but I love what we're seeing from the customer base that we're engaging and targeting in this space. >> And people will pay up for that performance capability relative to flash. They'll pay down relative to DRAM. Is it, are you seeing a gradience for like the hyperscalers, for example, or is it, maybe the industrial internet? Where are you seeing the. 
>> It's fair, actually I think, you know, it's probably reasonable to say that, you know, the challenges of inserting a new memory tier into a system requires new programming algorithms, new APIs and interface. There's a lot of ecosystem that needs to be there, as well as, not to mention, you've got to have an ecosystem to go put memory products into a server, for instance, or any other platform. I think we're still early days of enabling all of this. And I also believe we're going to learn more and more where the value of this sits as we put it out there in a cost effective fashion. So I would say that people who control software environments are very, very well suited for this because they can take advantage of some of those challenges without having to have a whole ecosystem in place. I think there's going to be a continued ramp in acceleration as an industry we go build out that ecosystem. >> Well it's been amazing to watch Micron the last several years, I mean, the last several decades. When you were just a pure memory manufacturer which was diversified, you know, gorilla in this space. (laughs) You guys are really an extremely well run company. I mean, your financials have born that out. You're really transparent to the street providing great guidance and congratulations on all of the success. I'm looking forward to watching in the future. >> Oh thank you so much. It's a privilege to be part of the company, and I really appreciate your time today. >> Our pleasure, thanks for coming on theCUBE. All right, keep it right there everybody. We'll be back with our next guest right after this short break. You're watching theCUBE from Micron Insight 2018. (upbeat techno music)

Published Date : Oct 10 2018



David Hatfield, Pure Storage | Pure Storage Accelerate 2018


 

>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE, covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. >> Welcome back to theCUBE, we are live at Pure Storage Accelerate 2018 in San Francisco. I'm Lisa Prince Martin with Dave The Who Vellante, and we're with David Hatfield, or Hat, the president of Pure Storage. Hat, welcome back to theCUBE. >> Thank you Lisa, great to be here. Thanks for being here. How fun is this? >> The orange is awesome. >> David: This is great. >> Super fun. >> Got to represent, we love the orange here. >> Always a good venue. >> Yeah. >> There's not enough orange. I'm not blind yet. >> Well it's the Bill Graham, I mean it's a great venue. But not generally one for technology conferences. >> No it's not. You guys are not conventional. >> So far so good. >> But then-- >> Thanks for keeping us out of Las Vegas for a change. >> Over my dead body I think I've said once or twice before. >> Speaking of-- Love our customers in Vegas. Unconventional, you've said recently this is not your father's storage company. What do you mean by that? >> Well we just always want to do things a little bit less conventional. We want to be modern. We want to do things differently. We want to create an environment where it's community so our customers and our partners, prospective customers can get a feel for what we mean by doing things a little bit more modern. And so the whole orange thing is something that we all opt in for. But it's more about really helping transform customers' organizations to think differently, think out of the box, and so we wanted to create a venue that forced people to think differently, and so the last three years, one was on Pier 48, we transformed that. Last year was in a big steelworkers, you know, 100 year old steel manufacturing, ship building yard which is now long since gone. 
But we thought the juxtaposition of that, big iron rust relative to what we're doing from a modern solid state perspective, was a good metaphor. And here it's about making music, and how can we together as an industry, develop new things and develop new songs and really help transform organizations. >> For those of you who don't know, spinning disk is known as spinning rust, right? Eventually, so very clever sort of marketing. >> The more data you put on it the slower it gets and it gets really old and we wanted to get rid of that. We wanted to have everything be online in the data center, so that was the point. >> So Hat, as you go around and talk to customers, they're going through a digital transformation, you hear all this stuff about machine intelligence, artificial intelligence, whatever you want to call it, what are the questions that you're getting? CEO's, they want to get digital right. IT professionals are wondering what's next for them. What kind of questions and conversations are you having? >> Yeah, I think it's interesting, I was just in one of the largest financial services companies in New York, and we met with the Chief Data Officer. The Chief Data Officer reports into the CEO. And he had right next to him the CIO. And so they have this development of a recognition that moving into a digital world and starting to harness the power of data requires a business context. It requires people that are trying to figure out how to extract value from the data, where does our data live? But that's created the different organization. It drives devops. I mean, if you're going to go through a digital transformation, you're going to try and get access to your data, you have to be a software development house. And that means you're going to use devops. 
And so what's happened from our point of view over the last 10 years is that those folks have gone to the public cloud because IT wasn't really meeting the needs of what devops needed and what the data scientists were looking for, and so what we wanted to create not only was a platform and a tool set that allowed them to bridge the gap, make things better today dramatically, but have a platform that gets you into the future, but also create a community and an ecosystem where people are aware of what's happening on the devops side, and connect the dots between IT and the data scientists. And so we see this exploding as companies digitize, and somebody needs to be there to help kind of bridge the gap. >> So what's your point of view and advice to that IT ops person who may be really good at provisioning LUNs, should they become more dev like? Maybe ops dev? >> Totally, I mean I think there's a huge opportunity to kind of advance your career. And a lot of what Charlie talked about and a lot of what we've been doing for nine years now, coming up on nine years, is trying to make our customers heroes. And if data is a strategic asset, so much so they're actually going to think about putting it on your balance sheet, and you're hiring Chief Data Officers, who knows more about the data than the storage and infrastructure team. They understand the limitations that we had to go through over the past. They've recognized they had to make trade offs between performance and cost. And in a shared accelerated storage platform where you have tons of IO and you can put all of your applications (mumbles) at the same time, you don't have to make those trade offs. But the people that really know that are the storage leads. And so what we want to do is give them a path for their career to become strategic in their organization. Storage should be self driving, infrastructure should be self driving. 
These are not things that in a boardroom people care about, gigabytes and petabytes and petaflops, and whatever metric. What they care about is how they can change their business and have a competitive advantage. How they can deliver better customer experiences, how they can put more money on the bottom line through better insights, etc. And we want to teach and work with and celebrate data heroes. You know, they're coming from the infrastructure side and connecting the dots. So the value of that data is obviously something that's new in terms of it being front and center. So who determines the value of that data? You would think it's the business line. And so there's got to be a relationship between that IT ops person and the business line. Which maybe heretofore was somewhat adversarial. Business guys are calling, the clients are calling again. And the business guys are saying, oh IT, they're slow, they say no. So how are you seeing that relationship changing? >> It has to come together because, you know, it does come down to what are the insights that we can extract from our data? How much more data can we get online to be able to get those insights? And that's a combination of improving the infrastructure and making it easy and removing those trade offs that I talked about. But also being able to ask the right questions. And so a lot has to happen. You know, we have one of the leaders in devops speaking tomorrow to go through, here's what's happening on the software development and devops side. Here's what the data scientists are trying to get at. So our IT professionals understand the language, understand the problem set. But they have to come together. We have Dr. Kate Harding as well from MIT, who's brilliant and thinking about AI. Well, only .5% of all the data has actually been analyzed. You know, it's all in these piggy banks as Burt talked about onstage. 
And so we want to get rid of the piggy banks and actually create it and make it more accessible, and get more than .5% of the data to be usable. You know, bring as much of that online as possible, because it's going to provide richer insights. But up until this point storage has been a bottleneck to making that happen. It was either too costly or too complex, or it wasn't performing enough. And with what we've been able to bring through solid state natively into sort of this platform is an ability to have all of that without the trade offs. >> That number of half a percent, or less than half a percent of all data in the world is actually able to be analyzed, is really really small. I mean we talk about, often you'll hear people say data's the lifeblood of an organization. Well, it's really a business catalyst. >> David: Oil. >> Right, but catalysts need to be applied to multiple reactions simultaneously. And that's what a company needs to be able to do to maximize the value. Because if you can't do that there's no value in that. >> Right. >> How are you guys helping to kind of maybe abstract storage? We hear a lot, we heard the word simplicity a lot today from Mercedes Formula One, for example. How are you partnering with customers to help them identify, where do we start narrowing down to find those needles in the haystack that are going to open up new business opportunities, new services for our business? >> Well I think, first of all, we recognize at Pure that we want to be the innovators. We want to be the folks that are, again, making things dramatically better today, but really future-proofing people for what applications and insights they want to get in the future. Charlie talked about the three-legged stool, right? There's innovations that's been happening in compute, there's innovations that have been happening over the years in networking, but storage hasn't really kept up. 
It literally was sort of the bottleneck that was holding people back from being able to feed the GPUs in the compute that's out there to be able to extract the insights. So we wanted to partner with the ecosystem, but we recognize an opportunity to remove the primary bottleneck, right? And if we can remove the bottleneck and we can partner with firms like NVIDIA and firms like Cisco, where you integrate the solution and make it self driving so customers don't have to worry about it. They don't have to make the trade offs in performance and cost on the backend, but it just is easy to stamp out, and so it was really great to hear Service Now and Keith walk through his story where he was able to get a 3x level improvement and something that was simple to scale as their business grew without having an impact on the customer. So we need to be part of an ecosystem. We need to partner well. We need to recognize that we're a key component of it because we think data's at the core, but we're only a component of it. The one analogy somebody shared with me when I first started at Pure was you can date your compute and networking partner but you actually get married to your storage partner. And we think that's true because data's at the core of every organization, but it's making it available and accessible and affordable so you can leverage the compute and networking stacks to make it happen. >> You've used the word platform, and I want to unpack that a little bit. Platform versus product, right? We hear platform a lot today. I think it's pretty clear that platforms beat products and that allows you to grow and penetrate the market further. It also has an implication in terms of the ecosystem and how you partner. So I wonder if you could talk about platform, what it means to you, the API economy, however you want to take that. 
>> Yeah, so, I mean a platform, first of all I think if you're starting a disruptive technology company, being hyper-focused on delivering something that's better and faster in every dimension, it had to be 10x in every dimension. So when we started, we said let's start with tier one block, mission critical data workloads with a product, you know our Flash Array product. It was the fastest growing product in storage I think of all time, and it still continues to be a great contributor, and it should be a multi-billion dollar business by itself. But what customers are looking for is that same consumer like or cloud like experience, all of the benefits of that simplicity and performance across their entire data set. And so as we think about providing value to customers, we want to make sure we capture as much of that 99.5% of the data and make it online and make it affordable, regardless of whether it's block, file, or object, or regardless if it's tier one, tier two, and tier three. We talk about this notion of a shared accelerated storage platform because we want to have all the applications hit it without any compromise. And in an architecture that we've provided today you can do that. So as we think about partnering, we want to go, in our strategy, we want to go get as much of the data as we possibly can and make it usable and affordable to bring online and then partner with an API first open approach. There's a ton of orchestration tools that are out there. There's great automation. We have a deep integration with ACI at Cisco. Whatever management and orchestration tools that our customer wants to use, we want to make those available. And so, as you look at our Flash Array, Flash Stack, AIRI, and Flash Blade technologies, all of them have an API open first approach. 
And so a lot of what we're talking about with our cloud integrations is how do we actually leverage orchestration, and how do we now allow and make it easy for customers to move data in and out of whatever clouds they may want to run from. You know, one of the key premises to the business was with this exploding data growth and whether it's 30, 40, 50 zettabytes of data over the next you know, five years, there's only two and a half or three zettabytes of internet connectivity in that same period of time. Which means that companies, and there's not enough data platform or data resources to actually handle all of it, so the temporal nature of the data, where it's created, what a data center looks like, is going to be highly distributed, and it's going to be multi cloud. And so we wanted to provide an architecture and a platform that removed the trade offs and the bottlenecks while also being open and allowing customers to take advantage of Red Shift and Red Hat and all the container technologies and platform as a service technologies that exist that are completely changing the way we can access the data. And so we're part of an ecosystem and it needs to be API and open first. >> So you had Service Now on stage today, and obviously a platform company. I mean any time they do M and A they bring that company into their platform, their applications that they build are all part of that platform. So should we think about Pure? If we think about Pure as a platform company, does that mean, I mean one of your major competitors is consolidating its portfolio. Should we think of you going forward as a platform company? In other words, you're not going to have a stovepipe set of products, or is that asking too much as you get to your next level of milestone? >> Well we think we're largely there in many respects. 
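The data-growth arithmetic Hatfield quotes above (tens of zettabytes of new data in five years against only two and a half or three zettabytes of network transfer capacity) can be sanity-checked in a few lines. This is a back-of-envelope sketch using the interview's own round numbers, not a forecast:

```python
# Back-of-envelope check of the data-gravity argument, using only the
# figures quoted in the interview (zettabytes over a five-year window).
# The exact inputs are illustrative round numbers, not measurements.

data_created_zb = 40.0   # "30, 40, 50 zettabytes" of new data
transferable_zb = 3.0    # "two and a half or three zettabytes" of connectivity

stranded_zb = data_created_zb - transferable_zb
stranded_pct = 100.0 * stranded_zb / data_created_zb

print(f"Data that cannot be moved centrally: {stranded_zb:.0f} ZB, "
      f"over {stranded_pct:.0f}% of everything created")
# With these inputs, the overwhelming majority of data must be stored
# and processed at or near where it is generated.
```

Even at the low end of the range (30 ZB created), roughly 90% of the data could never be shipped to a central cloud, which is the case being made here for distributed, multi-cloud data centers.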
You know, if you look at any of the competitive technologies that are out there, you know, they have a different operating system and a different customer experience for their block products, their file products, and their object products, etc. So we wanted to have a shared system that had these similar attributes from a storage perspective and then provide a very consistent customer experience with our cloud-based Pure One platform. And so the combination of our systems, you hear Bill Cerreta talk about, you have to do different things for different protocols to be able to get the efficiencies in the data servers as people want. But ultimately you need to abstract that into a customer experience that's seamless. And so our Pure One cloud-based software allows for a consistent experience. The fact that you'll have a, one application that's leveraging block and one application that's leveraging unstructured tool sets, you want to be able to have that be in a shared accelerated storage platform. That's why Gartner's talking about that, right? Now you can do it with a solid state world. So it's super key to say, hey look, we want consistent customer experience, regardless of what data tier it used to be on or what protocol it is and we do that through our Pure One cloud-based platform. >> You guys have been pretty bullish for a long time now where competition is concerned. When we talk about AWS, you know Andy Jassy always talks about, they look forward, they're not looking at Oracle and things like that. What's that like at Pure? Are you guys really kind of, you've been also very bullish recently about NVME. Are you looking forward together with your partners and listening to the voice of the customer versus looking at what's blue over the corner? >> Yes, so first of all we have a lot of respect for companies that get big. One of my mentors told me one time that they got big because they did something well. 
And so we have a lot of respect for the ecosystem and companies that build at scale. And we actually want to be one of those and are already doing that. But I think it's also important to listen and be part of the community. And so we've always wanted to be the pioneers. We always wanted to be the innovators. We always wanted to challenge conventions. And one of the reasons why we founded the company, why Cos and Hayes founded the company originally was because they saw that there was a bottleneck and it was a media level bottleneck. In order to remove that you need to provide a file system that was purpose built for the new media, whatever it was going to be. We chose solid state because it was a $40 billion industry thanks to our consumer products and devices. So it was a cost curve where R&D was going to happen by Samsung and Toshiba and Micron and all those guys that we could ride that curve down, allowing us to be able to get more and more of the data that's out there. And so we founded the company with the premise that you need to remove that bottleneck and you can drive innovation that was 10x better in every dimension. But we also recognize in doing so that putting an evergreen ownership model in place, you can fundamentally change the business model that customers were really frustrated by over the last 25 years. It was fair because disk has lots of moving parts, it gets slower with the more data you put on, etc., and so you pass those maintenance expenses and software onto customers. But in a solid state world you didn't need that. So what we wanted to do was actually, in addition to provide innovation that was 10x better, we wanted to provide a business model that was evergreen and cloud like in every dimension. Well, those two forces were very disruptive to the competitors. And so it's very, very hard to take a file system that's 25 years old and retrofit it to be able to really get the full value of what the stack can provide. 
So we focus on innovation. We focus on what the markets are doing, and we focus on our customer requirements and where we anticipate the use cases to be. And then we like to compete, too. We're a company of folks that love to win, but ultimately the real focus here is on enabling our customers to be successful, innovating forward. And so less about looking sidewise, who's blue and who's green, etc. >> But you said it before, when you were a startup, you had to be 10x better because those incumbents, even though it was an older operating system, people's processes were wired to that, so you had to give them an incentive to do that. But you have been first in a number of things. Flash itself, the sort of All-Flash, at a spinning disk price. Evergreen, you guys set the mark on that. NVME you're doing it again with no premium. I mean, everybody's going to follow. You can look back and say, look we were first, we led, we're the innovator. You're doing some things in cloud which are similar. Obviously you're doing this on purpose. But it's not just getting close to your customers. There's got to be a technology and architectural enabler for you guys. Is that? >> Well yeah, it's software, and at the end of the day if you write a file system that's purpose built for a new media, you think about the inefficiencies of that media and the benefits of that media, and so we knew it was going to be memory, we knew it was going to be silicon. It behaves differently. Reads are effectively free. Writes are expensive, right? And so that means you need to write something that's different, and so you know, it's NVME that we've been plumbing and working on for three years that provides 44,000 parallel access points. Massive parallelism, which enables these next generation of applications. So yeah we have been talking about that and inventing ways to be able to take full advantage of that. 
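The read/write asymmetry Hatfield describes (reads effectively free, writes expensive) is why a file system written for flash behaves differently from one retrofitted from disk. A toy cost model, with illustrative unit costs and page sizes rather than real device numbers, shows the payoff of coalescing small writes into larger sequential ones, a core idea in flash-native designs:

```python
# Toy cost model of the read/write asymmetry described for solid-state
# media: reads are cheap, writes (program/erase cycles) are expensive.
# A media-aware design coalesces small writes into page-sized sequential
# ones. Costs and page size are illustrative units, not real specs.

READ_COST = 1
WRITE_COST = 10          # program/erase is an order of magnitude pricier
PAGE_SIZE = 8            # small updates that fit in one flash page (hypothetical)

def in_place_cost(n_updates: int) -> int:
    # Naive design: every small update triggers its own expensive write.
    return n_updates * WRITE_COST

def log_structured_cost(n_updates: int) -> int:
    # Flash-aware design: buffer updates and flush one full page at a time.
    pages_written = -(-n_updates // PAGE_SIZE)   # ceiling division
    return pages_written * WRITE_COST

print(in_place_cost(1000))       # 10000 cost units
print(log_structured_cost(1000)) # 1250 cost units, 8x fewer writes issued
```

The same buffering logic that saves cost here also reduces wear on the media, which is one reason flash-native file systems are log-structured rather than update-in-place.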
There's 3D XPoint and SCM and all kinds of really interesting technologies that are coming down the line that we want to be able to take advantage of and future proof for our customers, but in order to do that you have to have a software platform that allows for it. And that's where our competitive advantage really resides, is in the software. >> Well there are lots more software companies in Silicon Valley and outside Silicon Valley. And you guys, like I say, have achieved that escape velocity. And so that's pretty impressive, congratulations. >> Well thank you, we're just getting started, and we really appreciate all the work you guys do. So thanks for being here. >> Yeah, and just a couple days ago you announced Q1FY19: 40% year over year growth, and you added 300 more customers. Now what, 4800 customers globally. So momentum. >> Thank you, thank you. Well we only do it if we're helping our customers one day at a time. You know, I'll tell you that this whole customer first philosophy, a lot of customers, a lot of companies talk about it, but it truly has to be integrated into the DNA of the business from the founders, and you know, Cos's whole pitch at the very beginning of this was we're going to change the media which is going to be able to transform the business model. But ultimately we want to make this as intuitive as an iPhone. You know, infrastructure should just work, and so we have this focus on delivering simplicity and delivering ownership that's future proofed from the very beginning. And you know that sort of permeates, and so you think about our growth, our growth has happened because our customers are buying more stuff from us, right? If you look underneath the covers on our growth, 70 plus percent of our growth every single quarter comes from customers buying more stuff, and so, as we think about how we partner and we think about how we innovate, you know, we're going to continue to build and innovate in new areas. 
We're going to keep partnering. You know, the data protection stuff, we've got great partners like Veeam and Cohesity and Rubrik that are out there. And we're going to acquire. We do have a billion dollars of cash in the bank to be able to go do that. So we're going to listen to our customers on where they want us to do that, and that's going to guide us to the future. >> And expansion overseas. I mean, North America's 70% of your business? Is that right? >> Rough and tough. Yeah, we had 28%-- >> So it's some upside. >> Yeah, yeah, no any mature B2B systems company should line up to be 55, 45, 55 North America, 45, in line with GDP and in line with IT spend, so we made investments from the beginning knowing we wanted to be an independent company, knowing we wanted to support global 200 companies, you have to have operations across multiple countries. And so globalization is always going to be key for us. We're going to continue our march on doing that. >> Delivering evergreen from an orange center. Thanks so much for joining Dave and I on the show this morning. >> Thanks Lisa, thanks Dave, nice to see you guys. >> We are theCUBE Live from Pure Accelerate 2018 from San Francisco. I'm Lisa Martin for Dave Vellante, stick around, we'll be right back with our next guests.

Published Date : May 23 2018



Nutanix .Next | NOLA | Day 1 | AM Keynote


 

>> PA Announcer: Off the plastic tab, and we'll turn on the colors. Welcome to New Orleans. ♪ This is it ♪ ♪ The part when I say I don't want ya ♪ ♪ I'm stronger than I've been before ♪ ♪ This is the part when I set you free ♪ (New Orleans jazz music) ("When the Saints Go Marching In") (rock music) >> PA Announcer: Ladies and gentlemen, would you please welcome state of Louisiana chief design officer Matthew Vince and Choice Hotels director of infrastructure services Stacy Nigh. (rock music) >> Well good morning New Orleans, and welcome to my home state. My name is Matt Vince. I'm the chief design officer for the state of Louisiana. And it's my pleasure to welcome you all to .Next 2018. The state of Louisiana is currently re-architecting our cloud infrastructure and Nutanix is the first domino to fall in our strategy to deliver better services to our citizens. >> And I'd like to second that warm welcome. I'm Stacy Nigh, director of infrastructure services for Choice Hotels International. Now you may think you know Choice, but we don't own hotels. We're a technology company. And Nutanix is helping us innovate the way we operate to support our franchisees. This is my first visit to New Orleans and my first .Next. >> Well Stacy, you're in for a treat. New Orleans is known for its fabulous food and its marvelous music, but most importantly the free spirit. >> Well I can't wait, and speaking of free, it's my pleasure to introduce the Nutanix Freedom video, enjoy. 
♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ Ah, ah, ♪ ♪ Ah, ah, ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I'm free, I'm free, I'm free, I'm free ♪ ♪ Gritting your teeth, you hold onto me ♪ ♪ It's never enough, I'm never complete ♪ ♪ Tell me to prove, expect me to lose ♪ ♪ I push it away, I'm trying to move ♪ ♪ I'm desperate to run, I'm desperate to leave ♪ ♪ If I lose it all, at least I'll be free ♪ ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> PA Announcer: Ladies and gentlemen, please welcome chief marketing officer Ben Gibson ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> Welcome, good morning. >> Audience: Good morning. >> And welcome to .Next 2018. There's no better way to open up a .Next conference than by hearing from two of our great customers. And Matthew, thank you for welcoming us to this beautiful, your beautiful state and city. And Stacy, this is your first .Next, and I know she's not alone because guess what It's my first .Next too. And I come properly attired. In the front row, you can see my Nutanix socks, and I think my Nutanix blue suit. And I know I'm not alone. I think over 5,000 people in attendance here today are also first timers at .Next. And if you are here for the first time, it's in the morning, let's get moving. I want you to stand up, so we can officially welcome you into the fold. Everyone stand up, first time. All right, welcome. (audience clapping) So you are all joining not just a conference here. This is truly a community. This is a community of the best and brightest in our industry I will humbly say that are coming together to share best ideas, to learn what's happening next, and in particular it's about forwarding not only your projects and your priorities but your careers. There's so much change happening in this industry. 
It's an opportunity to learn what's coming down the road and learn how you can best position yourself for this whole new world that's happening around cloud computing and modernizing data center environments. And this is not just a community, this is a movement. And it's a movement that started quite awhile ago, but the first .Next conference was in the quiet little town of Miami, and there was about 800 of you in attendance or so. So who in this hall here were at that first .Next conference in Miami? Let me hear from you. (audience members cheering) Yep, well to all of you grizzled veterans of the .Next experience, welcome back. You have started a movement that has grown and this year across many different .Next conferences all over the world, over 20,000 of your community members have come together. And we like to do it in distributed architecture fashion just like here in Nutanix. And so we've spread this movement all over the world with .Next conferences. And this is surging. We're also seeing just today the current count 61,000 certifications and climbing. Our Next community, close to 70,000 active members of our online community because .Next is about this big moment, and it's about every other day and every other week of the year, how we come together and explore. And my favorite stat of all. Here today in this hall amongst the record 5,500 registrations to .Next 2018 representing 71 countries in whole. So it's a global movement. Everyone, welcome. And you know when I got in Sunday night, I was looking at the tweets and the excitement was starting to build and started to see people like Adile coming from Casablanca. Adile wherever you are, welcome buddy. That's a long trip. Thank you so much for coming and being here with us today. I saw other folks coming from Geneva, from Denmark, from Japan, all over the world coming together for this moment. And we are accomplishing phenomenal things together. 
Because of your trust in us, and because of some early risk candidly that we have all taken together, we've created a movement in the market around modernizing data center environments, radically simplifying how we operate in the services we deliver to our businesses everyday. And this is a movement that we don't just know about ourselves; the industry is really taking notice. I love this chart. This is Gartner's inaugural hyperconverged infrastructure Magic Quadrant chart. And I think if you see where Nutanix is positioned on there, I think you can agree that's a rout, that's a home run, that's a mic drop so to speak. What do you guys think? (audience clapping) But here's the thing. It says Nutanix up there. We can honestly say this is a win for this hall here. Because, again, without your trust in us and what we've accomplished together and your partnership with us, we're not there. But we are there, and it is thanks to everyone in this hall. Together we have created, expanded, and truly made this market. Congratulations. And you know what, I think we're just getting started. The same innovation, the same catalyst that we drove into the market to converge storage, network, and compute, the next horizon is around multi-cloud. The next horizon is around, whether by accident or on purpose, the strong move of different workloads into public cloud, some into private cloud, moving back and forth, the promise of application mobility, the right workload on the right cloud platform with the right economics. Economics is key here. If any of you have a teenager out there, and they have a hold of your credit card, and they're doing something online or the like. You get some surprises at the end of the month. And that surprise comes in the form of spiraling public cloud costs.
And this isn't to say we're not going to see a lot of workloads born and running in public cloud, but the opportunity is for us to take a path that regains control over infrastructure, regains control over workloads and where they're run. And the way I look at it for everyone in this hall, it's a journey we're on. It starts with modernizing those data center environments, continues with embracing the full cloud stack and the compelling opportunity to deliver that consumer experience to rapidly offer up enterprise compute services to your internal clients, lines of business and then out into the market. It's then about how you standardize across an enterprise cloud environment, covering not just the infrastructure but the management, the automation, the control, and running any tier one application. I hear this everyday, and I've heard this a lot already this week about customers who are all in with this approach and running those tier one applications on Nutanix. And then it's the promise of not only hyperconverging infrastructure but hyperconverging multiple clouds. And if we do that, this journey the way we see it what we are doing is building your enterprise cloud. And your enterprise cloud is about the private cloud. It's about expanding and managing and taking back control of how you determine what workload to run where, and to make sure there's strong governance and control. And you're radically simplifying what could be an awfully complicated scenario if you don't reclaim and put your arms around that opportunity. Now how do we do this different than anyone else? And this is going to be a big theme that you're going to see from my good friend Sunil and his good friends on the product team. What are we doing together? We're taking all of that legacy complexity, that friction, that inability to be able to move fast because you're chained to old legacy environments.
I'm talking to folks that have applications that are 40 years old, and they are concerned to touch them because they're not sure if they can react, if their infrastructure can meet the demands of a new, modernized workload. We're making all that complexity invisible. And if all of that is invisible, it allows you to focus on what's next. And that indeed is the spirit of this conference. So if the what is enterprise cloud, and the how we do it different is by making infrastructure invisible, data centers, clouds, then why are we all here today? What is the binding principle that spiritually, that emotionally brings us all together? And we think it's a very simple, powerful word, and that word is freedom. And when we think about freedom, we think about as we work together the freedom to build the data center that you've always wanted to build. It's about freedom to run the applications where you choose based on the information and the context that wasn't available before. It's about the freedom of choice to choose the right cloud platform for the right application, and again to avoid a lot of these spiraling costs in unanticipated surprises whether it be around security, whether it be around economics or governance that come to the forefront. It's about the freedom to invent. It's why we got into this industry in the first place. We want to create. We want to build things not keep the lights on, not be chained to mundane tasks day by day. And it's about the freedom to play. And I hear this time and time again. My favorite tweet from a Nutanix customer to this day is "just updated a lot of nodes at 38,000 feet on United Wifi, on my way to spend vacation with my family." Freedom to play.
This to me is emotionally what brings us all together and what you saw with the Freedom video earlier, and what you see here is this new story because we want to go out and spread the word and not only talk about the enterprise cloud, not only talk about how we do it better, but talk about why it's so compelling to be a part of this hall here today. Now just one note of housekeeping for everyone out there, because I don't want anyone to take a wrong turn as they come to this beautiful convention center here today. A lot of freedom going on in this convention center. As luck may have it, there's another conference going on a little bit down that way based on another high growth, disruptive industry. Now, MJBizCon Next, and by coincidence it's also called Next. And I have to admire the creativity. I have to admire that we do share a, hey, high growth business model here. And in case you're not quite sure what this conference is about. I'm the head of marketing here. I have to show the tagline of this. And I read the tagline: from license to launch and beyond, the future of the, now if I can replace that blank with our industry, I don't know, to me it sounds like a new, cool Sunil product launch. Maybe launching a new subscription service or the like. Stay tuned, you never know. I think they're going to have a good time over there. I know we're going to have a wonderful week here both to learn as well as have a lot of fun particularly in our customer appreciation event tonight. I want to spend a very few important moments on .Heart. .Heart is Nutanix's initiative to promote diversity in the technology arena. In particular, we have a focus on advancing the careers of women and young girls that we want to encourage to move into STEM and high tech careers. You have the opportunity to engage this week with this important initiative. Please roll the video, and let's learn more about how you can do so.
>> Video Plays (electronic music) >> So all of you have received these .Heart tokens. You have the freedom to go and choose which of the four deserving charities can receive donations to really advance our cause. So I thank you for your engagement there. And this community is behind .Heart. And it's a very important one. So thank you for that. .Next is not the community, or the moment, that it is without our wonderful partners. These are our amazing sponsors. Yes, it's about sponsorship. It's also about how we integrate together, how we innovate together, and we're about an open community. And so I want to thank all of these names up here for your wonderful sponsorship of this event. I encourage everyone here in this room to spend time, get acquainted, get reacquainted, learn how we can make wonderful music happen together, wonderful music here in New Orleans happen together. .Next isn't .Next without a few cool surprises. Surprise number one, we have a contest. This is a still shot from the Freedom video you saw right before I came on. We have strategically placed a lucky seven Nutanix Easter eggs in this video. And if you go to Nutanix.com/freedom, watch the video. You may have to use the little scrubbing feature to slow down 'cause some of these happen quickly. You're going to find some fun, clever Easter eggs. List all seven, tweet that out, or as many as you can, tweet that out with hashtag nextconf, C, O, N, F, and we'll have a random drawing for an all-expenses-paid trip to .Next 2019. And just to make sure everyone understands the Easter egg concept. There's an eighth one here that's actually someone that's quite famous in our circles. If you see on this still shot, there's someone in the back there with a red jacket on. That's not just anyone. We're zooming in here. That is our very own Julie O'Brien, our senior vice president of corporate marketing. And you're going to hear from Julie later on here at .Next.
But Julie and her team are the engine and the creativity behind not only our new Freedom campaign but more importantly everything that you experience here this week. Julie and her team are amazing, and we can't wait for you to experience what they've pulled together for you. Another surprise, if you go and visit our Freedom booths and share your stories. So they're like video booths, you share your success stories, your partnerships, your journey that I talked about, you will be entered to win a beautiful Nutanix brand compliant, look at those beautiful colors, bicycle. And it's not just any bicycle. It's a beautiful bicycle made by our beautiful customer Trek. I actually have a Trek bike. I love cycling. Unfortunately, I'm not eligible, but all of you are. So please share your stories in the Freedom Nutanix's booths and put yourself in the running, or in the cycling to get this prize. One more thing I wanted to share here. Yesterday we had a great time. We had our inaugural Nutanix hackathon. This hackathon brought together folks that were in devops practices, many of you that are in this room. We sold out. We thought maybe we'd get four or five teams. We had to shut down at 14 teams that were paired together with a Nutanix mentor, and you coded. You used our REST APIs. You built new apps that integrated in with Prism and Calm. And it was wonderful to see this. Everyone I talked to had a great time on this. We had three winners. In third place, we had team Copper, or team bronze, but team Copper. Silver, Not That Special, they're very humble, kind of like one of our key mission statements. And the grand prize winner was We Did It All for the Cookies. And you saw them coming in on our Mardi Gras float here. We Did It All for the Cookies, they did this very creative job. They leveraged an Apple Watch. They were lighting up VMs at a moment's notice utilizing a lot of their coding skills. Congratulations to all three, first, second, and third all receive $2,500.
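The hackathon projects Ben mentions were built against Prism's public REST interface. As a rough illustration of what those teams were coding against, here is a minimal Python sketch that only builds a Prism-style "list VMs" request (it does not call a live cluster); the hostname, credentials, and the v3-style endpoint path and payload shape are assumptions for illustration, not details taken from the talk.

```python
import base64
import json

# Hypothetical sketch: assemble the URL, headers, and body for a
# Prism-style "list VMs" call. The endpoint path follows the v3 "list"
# convention (POST with a JSON body naming the entity kind); treat the
# exact path and fields as assumptions.

def build_list_vms_request(host, user, password, limit=20):
    """Return (url, headers, body) for a list-VMs API call."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    url = f"https://{host}:9440/api/nutanix/v3/vms/list"
    headers = {
        "Authorization": f"Basic {token}",   # Prism uses basic auth here
        "Content-Type": "application/json",
    }
    body = json.dumps({"kind": "vm", "length": limit})
    return url, headers, body

# Hypothetical host and credentials, for illustration only.
url, headers, body = build_list_vms_request("prism.example.com", "admin", "secret")
print(url)
```

The returned triple could then be handed to any HTTP client; the point is simply that the surface the hackathon teams scripted against is plain authenticated JSON over HTTPS.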
And then each of them, then were able to choose a charity to deliver another $2,500 including Ronald McDonald House for the winner, we did it all for the McDonald Land cookies, I suppose, to move forward. So look for us to do more of these kinds of events because we want to bring together infrastructure and application development, and this is a great, I think, start for us in this community to be able to do so. With that, who's ready to hear from Dheeraj? You ready to hear from Dheeraj? (audience clapping) I'm ready to hear from Dheeraj, and not just 'cause I work for him. It is my distinct pleasure to welcome on the stage our CEO, cofounder and chairman Dheeraj Pandey. ("Free" by Broods) ♪ Hallelujah, I'm free ♪ >> Thank you Ben and good morning everyone. >> Audience: Good morning. >> Thank you so much for being here. It's just such an elation when I'm thinking about the Mardi Gras crowd that came here, the partners, the customers, the NTCs. I mean there's some great NTCs up there I could relate to because they're on Slack as well. How many of you are in Slack Nutanix internal Slack channel? Probably 5%, would love to actually see this community grow from here 'cause this is not the only event where we would love to meet you. We would love to actually do this in real-time, bite-size communication on our own internal Slack channel itself. Now today, we're going to talk about a lot of things, but a lot of hard things, a lot of things that take time to build and have evolved as the industry itself has evolved. And one of the hard things that I want to talk about is multi-cloud. Multi-cloud is a really hard problem 'cause it's full of paradoxes. It's really about doing things that you believe are opposites of each other. It's about frictionless, but it's also about governance. It's about being simple, and it's also about being secure at the same time. It's about delight, it's about reducing waste, it's about owning and renting, and finally it's also about core and edge.
How do you really make this big at a core data center whether it's public or private? Or how do you really shrink it down to one or two nodes at the edge because that's where your machines are, that's where your people are? So this is a really hard problem. And as you hear from Sunil and the gang there, you'll realize how we've actually evolved our solutions to really cater to some of these. One of the approaches that we have used to really solve some of these hard problems is to have machines do more, and I said a lot of things in those four words, have machines do more. Because if you double-click on that sentence, it really means we're letting design be at the core of this. And how do you really design data centers, how do you really design products for the data center that hush all the escalations, the details, the complexities, use machine learning and AI and, you know, figure out anomaly detection and correlations and pattern matching? There's a ton of things that you need to do to really have machines do more. But along the way, the important lesson is to make machines invisible because when machines become invisible, it actually makes something else visible. It makes you visible. It makes governance visible. It makes applications visible, and it makes services visible. A lot of things, it makes teams visible, careers visible. So while we're really talking about invisibility of machines, we're talking about visibility of people. And that's how we really brought all of you together in this conference as well because it makes all of us shine including our products, and your careers, and your teams as well. And I try to define the word customer success. You know it's one of the favorite words that I'm actually using. We've just hired a great leader in customer success recently who's really going to focus on this relatively hard problem, yet another hard problem of customer success.
We think that customer success, true customer success is possible when we have machines tend towards invisibility. But along the way when we do that, make humans tend towards freedom. So that's the real connection, the yin-yang of machines and humans that Nutanix is really all about. And that's why design is at the core of this company. And when I say design, I mean reducing friction. And it's really about reducing friction. And everything we do, the most mundane of things which could be about migrating applications, spinning up VMs, self-service portals, automatic upgrades, and automatic scale out, and all the things we do is about reducing friction which really makes machines become invisible and humans gain freedom. Now one of the other convictions we have is how all of us are really tied at the hip. You know our success is tied to your success. If we make you successful, and when I say you, I really mean Main Street. Main Street being customers, and partners, and employees. If we make all of you successful, then we automatically become successful. And very coincidentally, Main Street and Wall Street are also tied in that very same relation as well. If we do a great job at Main Street, I think the Wall Street customer, i.e. the investor, will take care of itself. You'll have you know taken care of their success if we took care of Main Street success itself. And that's the narrative that our CFO Dustin Williams actually went and painted to our Wall Street investors two months ago at our investor day conference. We talked about a $3 billion number. We said look as a company, as a software company, we can go and achieve $3 billion in billings three years from now. And it was a telling moment for the company. It was really about talking about where we could be three years from now. But it was not based on a hunch. It was based on what we thought was customer success. Now realize that $3 billion in pure software. 
There are only 10 to 15 companies in the world that actually have that kind of software billings number itself. But at the core of this confidence was customer success, was the fact that we were doing a really good job of not overpromising and underdelivering but underpromising, starting with small systems and growing the trust of the customers over time. And one of the statistics we actually talk about is repeat business. Take the first dollar that a Global 2000 customer spends on Nutanix: we go and increase their trust 15 times by year six, and we hope to actually get 17.5 and 19 times more trust in years seven and eight. It's very similar numbers for non-Global 2000 as well. Again, we go and really hustle for customer success, start small, have you not worry about paying millions of dollars upfront. You know start with systems that pay as they grow, you pay as they grow, and that's the way we gain trust. We have the same non-Global 2000 customers pay $6.50 for the first dollar they've actually spent on us. And with this, I think the most telling moment was when Dustin concluded. And this is key to this audience here as well. It's how the current cohorts, which is this audience here, and many of them were not here, will actually carry the weight of $3 billion, more than 50% of it, if we did a great job of customer success. If we were humble and honest and we really figured out what it meant to take care of you, and if we really understood what starting small was and having to gain the trust with you over time, we think that more than 50% of those billings will actually come from this audience here without even looking at new logos outside. So that's the trust of customer success for us, and it takes care of pretty much every customer not just the Main Street customer. It takes care of Wall Street customer. It takes care of employees. It takes care of partners as well.
Now before I talk about technology and products, I want to take a step back 'cause many of you are new in this audience. And I think that it behooves us to really talk about the history of this company. Like we've done a lot of things that started out as science projects. In fact, I see some tweets out there and people actually laugh at Nutanix cloud. And this is where we were in 2012. So if you take a step back and think about where the company was almost seven, eight years ago, we were up against giants. There was a $30 billion industry around network attached storage, and storage area networks and blade servers, and hypervisors, and systems management software and so on. So what did we start out with? Very simple premise that we will collapse the architecture of the data center because three tier is wasteful and three tier is not delightful. It was a very simple hunch, we said we'll take rack mount servers, we'll put a layer of software on top of it, and that layer of software back then only did storage. It didn't do networks and security, and it ran on top of a well known hypervisor from VMware. And we said there's one non negotiable thing. The fact that the design must change. The control plane for this data center cannot be the old control plane. It has to be rethought through, and that's why Prism came about. Now we went and hustled hard to add more things to it. We said we need to make this diverse because it can't just be for one application. We need to make it CPU heavy, and memory heavy, and storage heavy, and flash heavy and so on. And we built a highly configurable HCI. Now all of them are actually configurable as you know of today. And this was not just innovation in technologies, it was innovation in business and sizing, capacity planning, quote to cash business processes. A lot of stuff that we had to do to make this highly configurable, so you can really scale capacity and performance independent of each other. 
Then in 2014, we did something that was very counterintuitive, but we've done this on, and on, and on again. People said why are you disrupting yourself? You know you've been doing a good job of shipping appliances, but we also had the conviction that HCI was not about hardware. It was about a form factor, but it was really about an operating system. And we started to compete with ourselves when we said you know what we'll do arm's length distribution, we'll do arm's length delivery of products when we give our software to our Dell partner, to Dell as a partner, a loyal partner. But at the same time, it was actually seen with a lot of skepticism. You know these guys are wondering how to really make themselves vanish because they're competing with themselves. But we also knew that if we didn't compete with ourselves someone else will. Now one of the most controversial decisions was really going and doing yet another hypervisor. In the year 2015, it was really preposterous to build yet another hypervisor. It was a very mature market. This was coming probably 15 years too late to the market, or at least 10 years too late to market. And most people said it shouldn't be done because hypervisor is a commodity. And that's the word we latched on to. That this commodity should not have to be paid for. It shouldn't have a team of people managing it. It should actually be part of your overall stack, but it should be invisible. Just like storage needs to be invisible, virtualization needs to be invisible. But it was a bold step, and I think you know at least when we look at our current numbers, a third of our customers are actually using AHV. At least every quarter that we look at it, of our new deployments, at least 35% are actually being done on AHV itself. And again, a very preposterous thing to have said five years ago, four years ago, compared to where we've actually come.
Thank you so much for all of you who've believed in the fact that virtualization software must be invisible and therefore we should actually try out something that is called AHV today. Now we went and added Lenovo to our OEM mix, started to become even more of a software company in the year 2016. Went and added HP and Cisco in some very large deals that we talk about on earnings calls, our HP deals and Cisco deals. And some very large customers who have procured ELAs from us, enterprise license agreements from us where they want to mix and match hardware. They want to mix Dell hardware with HP hardware but have common standard Nutanix entitlements. And finally, I think this was another one of those moments where we say why should HCI be only limited to x86. You know this operating system deserves to run on a non-x86 architecture as well. And that gave birth to this idea of HCI and Power Systems from IBM. And we've done a great job of really innovating with them in the last three, four quarters. Some amazing innovation that has come out where you can now run AIX 7.x on Nutanix. And for the first time in the history of the data center, you can actually have a single software not just a data plane but a control plane where you can manage an IBM Power farm, an OpenPOWER farm, and an x86 farm from the same control plane and have, you know, the IBM farm feed storage to an Intel compute farm and vice versa. So really good things that we've actually done. Now along the way, something else was going on while we were really busy building the private cloud, we knew there was a new consumption model on computing itself. People were renting computing using credit cards. This is the era of the millennials. They were like really want to bypass people because at the end of the day, you know why can't computing be consumed the way eCommerce is? And that devops movement made us realize that we need to add to our stack.
That stack will now have other computing clouds, that is, AWS and Azure and GCP now. So similar to the way we did Prism. You know Prism was really about going and making hypervisors invisible. You know we went ahead and said we'll add Calm to our portfolio because Calm is now going to be what Prism was to us back when we were really dealing with a multi-hypervisor world. Now it's going to be a multi-cloud world. You know, it's one of those things we had a gut feel around, and we've really come to see a lot of feedback and real innovation. I mean yesterday when we had the hackathon. The center, the epicenter of the discussion was Calm, was how do you automate on multiple clouds without having to write a single line of code? So we've come a long way since the acquisition of Calm two years ago. I think it's going to be a strong pillar in our overall product portfolio itself. Now the word multi-cloud is going to be used and overused. In fact, it's going to be blurring its lines with the idea of hyperconvergence of clouds, you know what does it mean. We just hope that hyperconvergence, the way it's called today will morph to become hyperconverged clouds not just hyperconverged boxes which is a software defined infrastructure definition itself. But let's focus on the why of multi-cloud. Why do we think it can't all go into a public cloud itself? The one big reason is just laws of the land. There's data sovereignty and computing sovereignty, regulations and compliance, because of which you need to be where the government, the regulations, and the compliance rules want you to be. And by the way, that's just one reason why the cloud will have to disperse itself. It can't just be 10, 20 large data centers around the world itself because you have 200-plus countries and half of computing actually gets done outside the US itself. So it's a really important, very relevant point about the why of multi-cloud. The second one is just simple laws of physics.
You know if there're machines at the edge, and they're producing so much data, you can't bring all the data to the compute. You have to take the compute which is stateless, it's an app. You take the app to where the data is because the network is the enemy. The network has always been the enemy. And when we thought we'd made fatter networks, we just produced more data as well. So this just goes without saying that you take something that's stateless, that's without gravity, that's lightweight, which is compute and the application, and push it close to where the data itself is. And the third one, which is related, is just latency reasons, you know? And it's not just about machine latency and electrons traveling at the speed of light, and you can't defy the speed of light. It's also about human latency. It's also about multiple teams saying we need to federate and delegate, and we need to push things down to where the teams are as opposed to having to expect everybody to come to a very large computing power itself. So all in all, there will be at least three different ways of looking at multi-cloud itself. There's a centralized core cloud. We all go and relate to this because we've seen large data centers and so on. And that's the back office workhorse. It will crunch numbers. It will do processing. It will do a ton of things that will go and produce results for, you know, how we run our businesses, but there's also the dispersal of the cloud, so ROBO cloud. And this is the front office server that's really serving. It's a cloud that's going to serve people. It's going to be closer to people, and that's what a ROBO cloud is. We have a ton of customers out here who actually use Nutanix in the ROBO environments themselves as one-node, two-node, three-node, five-node servers, and it just collapses the entire server closet room in these ROBOs into something really, really small and minuscule.
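Dheeraj's "network is the enemy" point is easy to make concrete with back-of-envelope arithmetic: shipping bulk edge data over a WAN dwarfs the cost of shipping a small, stateless app to the data. A minimal Python sketch, with all sizes and link speeds invented for illustration:

```python
# Back-of-envelope sketch of the data-gravity argument: compare moving a
# day's worth of edge data to the cloud vs. moving a small stateless app
# (say, a container image) out to the edge. All figures are assumptions.

def transfer_seconds(size_bytes, link_bits_per_sec):
    """Ideal transfer time, ignoring protocol overhead and congestion."""
    return size_bytes * 8 / link_bits_per_sec

TB = 1e12
MB = 1e6
wan = 100e6  # assumed 100 Mbit/s WAN uplink

data_to_cloud = transfer_seconds(1 * TB, wan)   # ship 1 TB of sensor data up
app_to_edge = transfer_seconds(200 * MB, wan)   # ship a 200 MB app image down

print(f"data to cloud: {data_to_cloud / 3600:.1f} h")
print(f"app to edge:   {app_to_edge:.0f} s")
```

With these assumed numbers the data haul takes on the order of a day while the app moves in seconds, which is the asymmetry behind pushing compute to where the data is.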
And finally, there's going to be another dispersed edge cloud because that's where the machines are, that's where the data is. And there's going to be an IOT machine fog because we need to miniaturize computing to something even smaller, maybe something that can really land in the palm, in a mini server, which is a PC-like server, but you need to run everything that's enterprise grade. You should be able to go and upgrade them and monitor them and analyze them. You know do enough computing up there, maybe event-based processing that can actually happen. In fact, there's some great innovation that we've done at the edge with IOTs that I'd love for all of you to actually attend some sessions around as well. So with that being said, we have a hole in the stack. And that hole is probably one of the hardest problems that we've been trying to solve for the last two years. And Sunil will talk a lot about that. This idea of hybrid. The hybrid of multi-cloud is one of the hardest problems. Why? Because we're talking about really blurring the lines with owning and renting where you have a single-tenant environment which is your data center, and a multi-tenant environment which is the service providers data center, and the two must look the same. And making the two look the same is that hard a problem not just for burst out capacity, not just for security, not just for identity but also for networks. Like how do you blur the lines between networks? How do you blur the lines for storage? How do you really blur the lines for a single pane of glass where you can think of availability zones that look highly symmetric even though they're not because one of 'em is owned by you, and it's single-tenant. The other one is not owned by you, that's multi-tenant itself. So there's some really hard problems in hybrid that you'll hear Sunil talk about and the team. And some great strides that we've actually made in the last 12 months of really working on Xi itself.
And that completes the picture now in terms of how we believe the state of computing will be going forward. So what are the must haves of a multi-cloud operating system? We talked about marketplace which is catalogs and automation. There's a ton of orchestration that needs to be done for multi-cloud to come together because now you have a self-service portal which is providing an eCommerce view. It's really about you know getting to do a lot of requests and workflows without having people come in the way, without even having tickets. There's no need for tickets if you can really start to think like a self-service portal as if you're just transacting eCommerce with machines and portals themselves. Obviously the next one is networking security. You need to blur the lines between on-prem and off-prem itself. These two play a huge role. And there's going to be a ton of details that you'll see Sunil talk about. But finally, what I want to focus on for the rest of the talk here is governance and compliance. This is a hard problem, and it's a hard problem because things have evolved. So I'm going to take a step back. Last 30 years of computing, how have consumption models changed? So think about it. 30 years ago, we were making decisions for 10-plus years, you know? Mainframe, at least 10 years, probably 20-plus years' worth of decisions. These were decisions that were extremely waterfall-ish. Make tens of millions of dollars worth of investment for a device that we'd buy for at least 10 to 20 years. Now as we moved to client-server, that thing actually shrunk. Now you're talking about five years worth of decisions, and these things were smaller. So there's a little bit more velocity in our decisions. We were not making as waterfall-ish decisions as we used to with mainframes. But still five years, talk about virtualized, three-tier, maybe three-to-five-year decisions.
You know, they're still relatively big decisions that we were making with compute and storage and SAN fabrics and virtualization software and systems management software and so on. And here comes Nutanix, and we said no, no. We need to make it smaller. It has to become smaller because, you know, we need to make more agile decisions. We need to add machines every week, every month as opposed to adding, you know, machines every three to five years. And we need to be able to upgrade them at, you know, any point in time. You can do the upgrades every month if you had to, every week if you had to, and so on. So really about more agility. And yet, we were not complete because there's another evolution going on, off-prem in the public cloud where people are going and doing reserved instances. But more than that, they were doing on-demand stuff where now the decision was days to weeks. Some of these units of compute were being rented for days to weeks, not years. And if you needed something more, you'd shift a little to the left and use reserved instances. And then spot pricing, you could do spot pricing for hours, and finally lambda functions. Now you could do function-as-a-service where things could actually be running only for minutes, not even hours. So as you can see, there's a wide spectrum where when you move to the right, you get more elasticity, and when you move to the left, you're talking about predictable decision making. And in fact, it goes from minutes on one side to tens of years on the other itself. And we hope to actually go and blur the lines between where Nutanix is today, where you see Nutanix right now, to where we really want to be with reserved instances and on demand. And that's the real ask of Nutanix. How do you take care of this discontinuity? Because when you're owning things, you actually end up here, and when you're renting things, you end up here.
What does it mean to really blur the lines between these two? Because people do want to make decisions that are better than reserved instances in the public cloud. We'll talk about why reserved instances, which look like a proxy for Nutanix, are still very, very wasteful even though you might think they're delightful. So what does it mean for on-prem and off-prem? You know, you talk about cost governance, there's security compliance. These high-velocity decisions we're actually making, you know, where sometimes you could be right on cost but wrong on security, but sometimes you could be right on security but wrong on cost. We need to really figure out how machines make some of these decisions for us, how software helps us decide: do we have the right balance between cost governance and security compliance itself? And to get it right, we have introduced our first SaaS service called Beam. And to talk more about Beam, I want to introduce Vijay Rayapati, who's the general manager of Beam engineering, to come up on stage and talk about Beam itself. Thank you Vijay. (rock music) So you've been here a couple of months now? >> Yes. >> At the same time, you spent the last seven, eight years really handling AWS. Tell us more about it. >> Yeah, so we spent a lot of time over the last five years at Minjar trying to understand, you know, how customers are really consuming in this new world for their workloads. So essentially what we tried to do is understand the consumption models, workload patterns, and also build algorithms and apply intelligence to say how can we lower this cost and, you know, improve compliance of their workloads? And now with Nutanix what we're trying to do is, how can we converge this consumption, right?
Because what happens here is most customers start with on-demand kind of consumption thinking it's really easy, but the total cost of ownership is so high. As the workload elasticity increases, people go towards spot or autoscaling, but then you need a lot more automation, which something like Calm can help them with. But as predictability of the workload increases, then you need to move towards reserved instances, right, to lower costs. >> And those are some of the things that you go and advise with some of the software that you folks have actually written. >> But there's a lot of waste even in the reserved instances because what happens is, while customers make these commitments for a year or three years, what we see across, like, we track a billion dollars in public cloud consumption, you know, as Beam, and customers use 20%, 25% of utilization of their commitments, right? So how can you really take the data of consumption and, you know, apply intelligence to essentially reduce their, you know, overall cost of ownership? >> You said something that's very telling. You said reserved instances, even though they're supposed to save, are still only 20%, 25% utilized. >> Yes, because the workloads are very dynamic. And the next thing is you can't do hot add CPU or hot add memory because you're buying them for peak capacity. There is no convergence of scaling apart from scaling out as another node. >> So you actually sized it for peak, but then using 20%, 30%, you're still paying for the peak. >> That's right. >> Dheeraj: That can actually add up. >> That's what we're trying to say. How can we deliver visibility across clouds? You know, how can we deliver optimization across clouds and consumption models and bring the control while retaining that agility and demand elasticity? >> That's great. So you want to show us something? >> Yeah absolutely. So this is Beam, as Dheeraj just outlined, our first SaaS service. And this is my first .Next. And, you know, glad to be here.
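The reserved-instance waste being described here is easy to see with a little arithmetic: if you commit to paying for every hour of a term but only use a quarter of those hours, the cost per useful hour quadruples. A minimal sketch, with made-up prices that are purely illustrative (not actual AWS, Azure, or Nutanix figures):

```python
# Illustrative sketch of reserved-instance waste at low utilization.
# All rates below are invented for the example, not real cloud prices.

def effective_hourly_cost(committed_hourly_rate: float, utilization: float) -> float:
    """Cost per *useful* hour when you pay for every hour of a commitment
    but actually run workloads for only a fraction of them."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return committed_hourly_rate / utilization

on_demand_rate = 0.10   # $/hr, billed only while the instance runs
reserved_rate = 0.06    # $/hr, billed for every hour of the term

# At the ~25% utilization mentioned in the talk, the "cheaper" reserved
# instance actually costs more per useful hour than plain on-demand.
ri_effective = effective_hourly_cost(reserved_rate, 0.25)
print(f"on-demand:                 ${on_demand_rate:.2f} per useful hour")
print(f"reserved @25% utilization: ${ri_effective:.2f} per useful hour")
```

At full utilization the reserved rate wins; the point Beam is making is that real-world utilization of those commitments rarely gets there.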
So what you see here is a global consumption, you know, for a business across different clouds. Whether that's in a public cloud like Amazon, or Azure, or Nutanix. We kind of bring the consumption together for the month, the recent month, across your accounts and services and apply intelligence to say, you know, what is your spend efficiency across these clouds? Essentially there's a lot of intelligence that goes in to detect your workloads and consumption model to say, if you're spending $100, how efficiently are you spending? How can you increase that? >> So you have a centralized view where you're looking at multiple clouds, and, you know, you talk about, maybe you can take an example of an account and start looking at it? >> Yes, let's go into a cloud provider like, you know, for this business, let's go and take a look at what's happening inside an Amazon cloud. Here we get into the deeper details of what's happening with the consumption of specific services as well as the utilization of both on demand and RI. You know, what can you do to lower your cost and detect your spend efficiency of a dollar, to see, you know, are there resources that are provisioned by teams for applications that are not being used, or are there resources that we should go and rightsize? Because, you know, we have all this monitoring data, configuration data that we crunch through to basically detect this. >> You think there's billions of events that you look at everyday. You're already looking at a billion dollars worth of AWS spend. >> Right, right. >> So billions of events, billing, metering events every year to really figure out and optimize for them. >> So what we have here is a very popular international government organization. >> Dheeraj: Wow, so it looks like Russians are everywhere, the cloud is everywhere actually. >> Yes, it's quite popular. So when you bring your master account into Beam, we kind of detect all the linked accounts, you know, under that.
Then you can go and take a look not just at the organization level but, within it, at an account level. >> So these are child objects, you know. >> That's right. >> You can think of them as ephemeral accounts that you create because you don't want to be on the record when you're doing spams on Facebook for example. >> Right, let's go and take a look at what's happening inside a Facebook ad spend account. So we have, you know, consumption of the services. Let's go deeper into compute consumption, and you kind of see a trendline. You can do a lot of computing. As you see, looks like one campaign has ended. They started another campaign. >> Dheeraj: It looks like they're not stopping yet, man. There's a lot of money being made in Facebook right now. (Vijay laughing) >> So not only do you get visibility at, you know, compute as a service inside a cloud provider, you can go deeper inside compute and say, you know, what is a service that I'm really consuming inside compute along with the CPUs and stuff, right? What is my data transfer? You know, what is my network? What is my load balancers? So essentially you get a very deep visibility, you know, as a service, right. Because we have three goals for Beam. How can we deliver visibility across clouds? How can we deliver visibility across services? And how can we then deliver optimization? >> Well I think one thing that I just want to point out is how this SaaS application was an extremely teachable moment for me to learn about the different resources that people could use in the public cloud. So for all of you who actually have not gone deep enough into the idea of public cloud, this could be a great app for you to learn about things, the resources, you know, things that you could do to save, and security, and things of that nature. >> Yeah. And we really believe in creating the single-pane view, you know, to manage your optimization of a public cloud. You know, as Ben spoke about, as a business, you need to have freedom to use any cloud.
And that's what Beam delivers. How can you make the right decision for the right workload to use any of the cloud of your choice? >> Dheeraj: How about databases? You talked about compute as well, but are there other things we could look at? >> Vijay: Yes, let's go and take a look at database consumption. What you see here is, inside Facebook ad spending, they're using all databases except Oracle. >> Dheeraj: Wow, looks like Oracle sales folks have been active in Russia as well. (Vijay laughing) >> So what we're seeing here is a global view of, you know, what is your spend efficiency, which is kind of a scorecard for your business for the dollars that you're spending. And the great thing is Beam kind of brings it together, you know, through its intelligence and algorithms to detect, you know, how can you rightsize resources and how can you eliminate things that you're not using? And we deliver a one-click fix, right? Let's go and take a look at resources that are maybe provisioned for storage and not being used. We deliver the seamless one-click philosophy that Nutanix has to eliminate it. >> So one click, you can actually just pick some of these wasteful things that might be looking delightful because, using public cloud, using credit cards, you can go in and just say click fix, and it takes care of things. >> Yeah, and not only remove the resources that are unused, but it can go and rightsize resources across your compute, databases, load balancers, even PaaS services, right? And this is where the power of it kind of comes for a business, whether you're using on-prem and off-prem. You know, how can you really converge that consumption across both? >> Dheeraj: So do you have something for Nutanix too? >> Vijay: Yes, so we have basically been working on Nutanix with something that we're going to deliver, you know, later this year.
As you can see here, we're bringing together the consumption for the Nutanix, you know, the services that you're using, the licensing and capacity that is available. And how can you also go and optimize within Nutanix environments >> That's great. >> for the next workload. Now let me quickly show you what we have on the compliance side. This is an extremely powerful thing that we've been working on for many years. What we deliver here, just like in cost governance, is a global view of your compliance across cloud providers. And the most powerful thing is you can go into a cloud provider, get the next level of visibility across cloud regimes for hundreds of policies. Not just policies but those policies across different regulatory compliances like HIPAA, PCI, CIS. And that's very powerful because-- >> So you're saying a lot of what you folks have done is codified these compliance checks in software to make sure that people can sleep better at night knowing that it's PCI, and HIPAA, and all that compliance actually comes together? >> And you can build this not just by cloud accounts, you can build them across cloud accounts, which is what we call security centers. Essentially you can go and take a deeper look at, you know, the things. We do a whole full-body scan for your cloud infrastructure, whether it's Amazon AWS or Azure, and you can go and now, again, click to fix things, you know, that had probably been provisioned, that are violating the security compliance rules that should be there. Again, we have the same one-click philosophy to say how can you really remove things. >> So again, similar to save, you're saying you can go and fix some of these security issues by just doing one click. >> Absolutely. So the idea is how can we give our people the freedom to get visibility and use the right cloud and take the decisions instantly through one click. That's what Beam delivers, you know, today. And, you know, get really excited, and it's available at beam.nutanix.com.
>> Our first SaaS service, ladies and gentlemen. Thank you so much for doing this, Vijay. It looks like there's going to be a talk here at 10:30. You'll talk more about the midterm elections there probably? >> Yes, so you can go and write your own security compliances as well, you know, within Beam, and a lot of powerful things you can do. >> Awesome, thank you so much, Vijay. I really appreciate it. (audience clapping) So as you see, there's a lot of work that we're doing to really make multi-cloud work, which is a hard problem. You know, think about the whole body of it: what about cost governance? What about security compliance? Obviously, what about hybrid networks, and security, and storage, you know, compute, many of the things that you've actually heard from us, but we're taking it to a level where the business users can now understand the implications. A CFO's office can understand the implications of waste and delight. So what does customer success mean to us? You know, again, my favorite word in a long, long time is really go and figure out how do you make you, the customer, become operationally efficient. You know, there's a lot of stuff that we deliver through software that's completely uncovered. It's so latent, you don't even know you have it, but you've paid for it. So you've got to figure out what does it mean for you to really become operationally efficient, organizationally proficient. And it's really important for training, education, stuff that, you know, your people might think is so awkward to do in Nutanix, but it could've been way simpler if we just told you a place where you can go and read about it. Of course, I can just use one click here as opposed to doing things the old way. But most importantly, to make it financially accountable.
So the end in all this is, again, one of the things that I think about all the time in building this company, because obviously there's a lot of stuff that we want to do to create orphans, you know, things above the line and top line and everything else. There's also a bottom line. Delight and waste are two sides of the same coin. You know, when we're talking about developers who seek delight with public cloud, at the same time you're looking at IT folks who're trying to figure out governance. They're like, look, you know, the CFO's office, the CIO's office, they're trying to figure out how to curb waste. These two things have to go hand in hand in this era of multi-cloud where we're talking about frictionless consumption but also governance that looks invisible. So I think, at the end of the day, this company will do a lot of stuff around one-click delight but also go and figure out how do you reduce waste, because there's so much waste, including folks there who actually own Nutanix. There's so much software entitlement. There's so much waste in the public cloud itself that if we don't go and put our arms around it, it will not lead to customer success. So to talk more about this, the idea of delight and the idea of waste, I'd like to bring on board a person who, I think you know, many of you have actually talked about as having delightful hair but probably wasted jokes. But I think he has wasted hair and delightful jokes. So ladies and gentlemen, you make the call. You're the jury. Sunil R.M.J. Potti. ("Free" by Broods) >> So that was the first time I came out from the bottom of a screen on a stage. I actually now know what it feels like to be a gopher. Who's that laughing loudly at the back? Okay, do we have the... Let's see. Okay, great. We're about 15 minutes late, so that means we're running right on time. That's normally how we roll at this conference. And we have about three customers and four demos. Like I think there's about three plus six, about nine folks coming onstage.
So we'll have our own version of the parade as well on the main stage for the next 70 minutes. So let's just jump right into it. I think we've been pretty consistent in terms of our longterm plans since we started the company. And it's become a lot clearer over the last few years about our plans to essentially make computing invisible, as Dheeraj mentioned. We're doing this across multiple acts. We started with HCI. We call it making infrastructure invisible. We extended that to making data centers invisible. And then now we're in this mode of essentially extending it to converging clouds so that you can actually converge your consumption models. And so today's conference, and essentially the theme that you're going to be seeing throughout the breakout sessions, is about a journey towards invisible clouds, but make sure that you internalize the fact that we're investing heavily in each of the three phases. It's just not about the hybrid cloud with Nutanix, it's about actually finishing the job of making infrastructure invisible, expanding that to kind of go after the full data center, and then of course embark on some real meaningful things around invisible clouds, okay? And to start the session, I think, you know, the part that I want to make sure that we are all on the same page on, because most of us in the room are still probably in this phase of the journey, is about invisible infrastructure. And there, the three key products, and especially the two of them that most of you guys know, are Acropolis and Prism. And they're sort of like the bedrock of our company. You know, especially Acropolis, which is about the web scale architecture. Prism is about consumer grade design. And with Acropolis now being really mature, it's in its seventh year of innovation. We still have more than half of our company in terms of R and D spend still on Acropolis and Prism. So our core product is still sort of where we think we have significant differentiation.
We're not going to let our foot off the pedal there. You know, every time somebody comes to me and says look there's a new HCI vendor popping out or an existing HCI vendor out there, I ask a simple question to our customers saying show me 100 customers with 100-node deployments, and it will be very hard to find any other vendor out there that does the same thing. And that's the power of Acropolis, the core platform. And then it's, you know, the fact that the velocity associated with Acropolis continues to be on a fast pace. We came out with various new capabilities in 5.5 and 5.6, and one of the most complicated things to get right was actually shrinking our three-node cluster to a one-node, two-node deployment. Most of you actually had requirements on remote office, branch office, or the edge that kind of gave us, you know, sort of like the impetus to go design some new capabilities into our core OS to get this out. And associated with Acropolis and expanding into Prism, as you will see, the first couple of years of Prism was all about refactoring the user interface, doing a good job with automation. But more and more of the investments around Prism are going to be based on machine learning. And you've seen some variants of that over the last 12 months, and I can tell you that in the next 12 to 24 months, most of our investments around infrastructure operations are going to be driven by AI techniques, starting with most of our R and D spend also going into machine-learning algorithms. So when you talk about all the enhancements that have come on with Prism, whether it be, you know, the management console changing to become much more automated, whether now we give you automatic rightsizing, anomaly detection, or a series of functionality that has gone into it, the real core sort of capabilities that we're putting into Prism and Acropolis are probably best served by looking at the quality of the product.
You probably have seen this slide before. We started showing the number of nodes shipped by Nutanix two years ago at this conference. It was about 35,000-plus nodes at that time. And since then, obviously we've, you know, continued to grow. And we would draw this line, which was about enterprise-class quality. For the number of bugs found as a percentage of nodes shipped, there's a certain line that's drawn. World-class companies do probably about 2% to 3% in terms of the number of CFDs (customer-found defects) per node shipped. And we had just broken that number two years ago. And to give you guys an idea of how that curve has shown up, it's now currently at 0.95%. And so along with velocity, you know, this focus on being true to our roots of reliability and stability continues to be, you know, it's an internal challenge, but it's also some of the things that we keep a real focus on. And so between Acropolis and Prism, that's sort of like our core focus areas, to sort of give us the confidence that look, we have this really high bar that we're sort of keeping ourselves accountable to, which is about being the most advanced enterprise cloud OS on the planet. And we will keep it this way for the next 10 years. And to complement that, over a period of time of course, we've added a series of services. So these are services not just for VMs but also for files, blocks, containers, but all being delivered in that single one-click operations fashion. And to really talk more about it, and actually probably to show you the real deal there, it's my great pleasure to call our own version of Moses inside the company, most of you guys know him as Steve Poitras. Come on up, Steve. (audience clapping) (rock music) >> Thanks Sunil. >> You barely fit in that door, man. Okay, so what are we going to talk about today, Steve? >> Absolutely. So when we think about when Nutanix first got started, it was really focused around VDI deployments, smaller workloads.
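The quality bar just described is a simple ratio: customer-found defects as a percentage of nodes shipped. A quick sketch of that metric, with made-up input numbers chosen only to reproduce the sort of figures mentioned on stage:

```python
# Sketch of the quality metric from the keynote: CFDs (customer-found
# defects) as a percentage of nodes shipped. The specific counts below
# are invented for illustration, not actual Nutanix figures.

def cfd_rate_percent(cfds_found: int, nodes_shipped: int) -> float:
    """Customer-found defects per node shipped, as a percentage."""
    if nodes_shipped <= 0:
        raise ValueError("nodes_shipped must be positive")
    return 100.0 * cfds_found / nodes_shipped

# e.g. 380 customer-found defects against 40,000 nodes shipped
rate = cfd_rate_percent(380, 40_000)
print(f"CFD rate: {rate:.2f}% (world class is roughly 2% to 3%)")
```

The point of normalizing by nodes shipped is that raw bug counts naturally grow with the install base; the ratio is what lets a growing company compare quality year over year.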
However over time, as we've evolved the product, added additional capabilities and features, that's grown from VDI to business-critical applications as well as cloud native apps. So let's go ahead and take a look. >> Sunil: And we'll start with like Oracle? >> Yeah, that's one of the key ones. So here we can see our Prism Central user interface, and we can see our Thor cluster, obviously speaking to the Avengers theme here. We can see this is doing right around 400,000 IOPS at around 360 microseconds latency. Now obviously Prism Central allows you to manage all of your Nutanix deployments, but this is just running on one single Nutanix cluster. So if we hop over here to our explore tab, we can see we have a few categories. We have some Kubernetes, some AFS, some XenDesktop as well as Oracle RAC. Now if we hop over to Oracle RAC, we're running a SLOB workload here. So obviously with Oracle enterprise applications, performance, consistency, and extremely low latency are very critical. So with this SLOB workload, we're running right around 300 microseconds of latency. >> Sunil: So this is what, how many node Oracle RAC cluster is this? >> Steve: This is a six-node Oracle RAC deployment. >> Sunil: Got it. And so what has gone into the product in recent releases to kind of make this happen? >> Yeah, so obviously on the hardware front, there's been a lot of evolution in storage mediums. So with the introduction of NVMe, persistent memory technologies like 3D XPoint, that's meant storage media has become a lot faster. Now to allow you to fully take advantage of that, that's where we've had to do a lot of optimizations within the storage stack. So with AHV, we have what we call AHV turbo mode, which allows you to fully take advantage of those faster storage mediums at that much lower latency. And then obviously on the networking front, technologies such as RDMA can be leveraged to optimize that network stack. >> Got it.
So that was Oracle RAC running on a, you know, Nutanix cluster. It used to be a big deal a couple of years ago. Now we've got many customers doing that. On the same environment though, what we're going to show you is the advent of actually putting file services in the same scale-out environment. And, you know, many of you in the audience probably know about AFS. We released it about 12 to 14 months ago. It's been one of our most popular new products of all time within Nutanix's history. And we had SMB support for user file shares, VDI deployments, and it took a while to bake, to get to scale and reliability. And then in the recent release that we just shipped, we now added NFS support so that we can now go after the full-scale file server consolidation. So let's take a look at some of that stuff. >> Yep, let's do it. So hopping back over to Prism, we can see our Thor cluster here. Overall cluster-wide latency right around 360 microseconds. Now we'll hop down to our file server section. So here we can see we have our AFS file server hosting right about 16.2 million files. Now if you look at our shares and exports, we can see we have a mix of different shares. So one of the shares that you see there is home directories. This is an SMB share which is actually mapped and being leveraged by our VDI desktops for home folders, user profiles, things of that nature. We can also see this Oracle backup share here which is exposed to our RAC host via NFS. So RMAN is actually leveraging this to provide native database backups. >> Got it. So Oracle VMs, backup using files, or for any other file share requirements with AFS. Do we have the cluster also showing, I know, so I saw some Kubernetes as well on it. Let's talk about what we're thinking of doing there. >> Yep, let's do it. So if we think about cloud, cloud's obviously a big buzzword, so is containers and Kubernetes. So with ACS 1.0 what we did is we introduced native support for Docker integration.
>> And pause there. And we screwed up. (laughing) So just like the market took a left turn on Kubernetes, obviously we realized that, and now we're working on ACS 2.0, which is what we're going to talk about, right? >> Exactly. So with ACS 2.0, we've introduced native Kubernetes support. Now when I think about Kubernetes, there's really two core areas that come to mind. The first one is around native integration. So with that, we have our Kubernetes volume integration, we're obviously doing a lot of work on the networking front, and we'll continue to push there from an integration point of view. Now the other piece is around the actual deployment of Kubernetes. When we think about a lot of Nutanix administrators or IT admins, they may have never deployed Kubernetes before, so this could be a very daunting task. And true to the Nutanix nature, we not only want to make our platform simple and intuitive, we also want to do this for any ecosystem products. So with ACS 2.0, we've simplified the full Kubernetes deployment, and switching over to our ACS 2.0 interface, we can see this create cluster button. Now this actually pops up a full wizard. This wizard will actually walk you through the full deployment process, gather the necessary inputs for you, and in a matter of a few clicks and a few minutes, we have a full Kubernetes deployment fully provisioned, the masters, the workers, all the networking fully done for you, very simple and intuitive. Now if we hop back over to Prism, we can see we have this ACS2 Kubernetes category. Clicking on that, we can see we have eight instances of virtual machines. And here are Kubernetes virtual machines which have actually been deployed as part of this ACS2 installer. Now one of the nice things is it makes the IT administrator's job very simple and easy to do. The deployment is straightforward, and monitoring and management are very straightforward and simple.
Now for the developer, the application architect, or engineers, they interface and interact with Kubernetes just like they would traditionally on any platform. >> Got it. So the goal of ACS is to ensure that the developer ecosystem still uses whatever tools that they are, you know, preferring, while at the same time allowing this consolidation of containers along with VMs all on that same, single runtime, right? So that's ACS. And then if you think about where the OS is going, there's still some open space at the end. And the open space has always been, look, if you just look at a public cloud, you look at blocks, files, containers, the most obvious sort of storage function that's left is objects. And that's the last horizon for us in completing the storage stack. And we're going to show you for the first time a preview of an upcoming product called the Acropolis Object Storage Services Stack. So let's talk a little bit about it and then maybe show the demo. >> Yeah, so just like we provided file services with AFS, block services with ABS, with OSS or Object Storage Services, we provide native object storage compatibility and capability within the Nutanix platform. Now this provides a very simple, common S3 API. So any integrations you've done with S3, especially Kubernetes, you can actually leverage that out of the box when you've deployed this. Now if we hop back over to Prism, I'll go here to my object stores menu. And here we can see we have two existing object storage instances which are running. So you can deploy as many of these as you want to. Now just like the Kubernetes deployment, deploying a new object instance is very simple and easy to do. So here I'll actually name this instance Thor's Hammer. >> You do know he loses it, right? He hasn't seen the movies yet. >> Yeah, I don't want any spoilers yet. So once we've specified the name, we can choose our capacity. So here we'll just specify a large instance or type.
Obviously this could be any amount of storage. So if you have a 200-node Nutanix cluster with petabytes worth of data, you could do that as well. Once we've selected that, we'll select our expected performance. And this is going to be the number of concurrent gets and puts. So essentially how many operations per second we want this instance to be able to facilitate. Once we've done that, the platform will actually automatically determine how many virtual machines it needs to deploy as well as the resources and specs for those. And once we've done that, we'll go ahead and click save. Now here we can see it's actually going through doing the deployment of the virtual machines, applying any necessary configuration, and in the matter of a few clicks and a few seconds, we actually have this Thor's Hammer object storage instance which is up and running. Now if we hop over to one of our existing object storage instances, we can see this has three buckets. So one for Kafka-queue, I'm actually using this for my Kafka cluster where I have right around 62 million objects, all storing Protobufs. The second one there is Spark. So I actually have a Spark cluster running on our Kubernetes-deployed instance via ACS 2.0. Now this is doing analytics on top of this data using S3 as a storage backend. Now for these objects, we support native versioning, native object encryption as well as WORM compliance. So if you want to have expiry periods, retention intervals, that sort of thing, we can do all that. >> Got it. So essentially what we've just shown you is, with upcoming objects as well, that the same OS can now support VMs, files, objects, containers, all on the same one-click operational fabric. And so that's in some way the real power of Nutanix, is to still keep that consistency, scalability in place as we're covering each and every workload inside the enterprise.
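The demo says the platform takes an expected gets/puts-per-second figure and automatically derives how many VMs to deploy. The actual algorithm isn't shown, but the shape of that sizing step can be sketched like this; the per-VM throughput constant and the availability floor below are invented for illustration and are not Nutanix's real numbers:

```python
# Hypothetical sketch of the wizard's sizing step: derive a worker-VM
# count from a target ops/sec. OPS_PER_VM and MIN_VMS are assumptions
# made up for this example, not Nutanix's actual sizing parameters.
import math

OPS_PER_VM = 25_000   # assumed sustainable concurrent gets/puts per VM
MIN_VMS = 3           # assumed floor so the instance stays available

def vms_needed(target_ops_per_sec: int) -> int:
    """VMs required to satisfy a target ops/sec, never below the floor."""
    if target_ops_per_sec <= 0:
        raise ValueError("target_ops_per_sec must be positive")
    return max(MIN_VMS, math.ceil(target_ops_per_sec / OPS_PER_VM))

print(vms_needed(10_000))    # small instance: floor kicks in, 3 VMs
print(vms_needed(180_000))   # 180k ops/sec: ceil(7.2) = 8 VMs
```

Rounding up with `math.ceil` and enforcing a minimum count is the standard way to turn a continuous throughput target into a discrete VM count without under-provisioning.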
So before Steve gets off stage though, I wanted to talk to you guys a little bit about something. You know, how many of you have been to our Nutanix headquarters in San Jose, California? A few. I know there's like, I don't know, 4,000 or 5,000 people here. If you do come to the office, you know, when you land at San Jose Airport on the way to long-term parking, you'll pass our office. It's that close. And if you come to the fourth floor, you know, one of the cubes there is where I sit. In the cube beside me is Steve. Steve sits in the cube beside me. And when I first joined the company three or four years ago, if you went to Steve's cube, well, it no longer looks like this, but it used to have a lot of this stuff. It was like big containers of this. I remember the first time. Since I started joking about it, he started reducing it. And then Steve eventually got married, much to our surprise. (audience laughing) Much to his wife's surprise. And then he also had a baby as a bigger surprise. And if you come over to our office, and we welcome you, and you come to the fourth floor, find my cube or you'll find Steve's cube, it now looks like this. Okay, so thanks a lot, my man. >> Cool, thank you. >> Thanks so much. (audience clapping) >> So single OS, any workload. And like Steve, who's been with us for awhile, it's my great pleasure to invite one of our favorite customers, Karen from CSC, who's also been with us for three to four years. And I'll share some fond memories about how she's been with the company for awhile, how as partners we've really done a lot together. So without any further ado, let me bring up Karen. Come on up, Karen. (rock music) >> Thank you for having me. >> Yeah, thank you. So I remember, so how many of you guys were with Nutanix at the first .Next in Miami? I know there was a question like that asked last time. Not too many. You missed it. We wish we could go back to that. We couldn't fit 3/4 of this crowd. But Karen was our first customer in the keynote in 2015.
And we had just talked about that story at that time, where you had just become a customer. Do you want to give us some recap of that? >> Sure. So when we made the decision to move to hyperconverged infrastructure and chose Nutanix as our partner, we rapidly started to deploy. And what I mean by that is Sunil and some of the Nutanix executives had come out to visit with us and talk about their product on a Tuesday. And on the Wednesday after making the decision, I picked up the phone and said, you know what, I've got to deploy for my VDI cluster. So four nodes showed up on Thursday. And from the time it was plugged in to moving over 300 VDIs and 50 terabytes of storage and turning it over for the business for use was less than three days. So it was really an excellent testament to how simple it is to start, and deploy, and utilize the Nutanix infrastructure. Now part of that was the delight that we experienced from our customers after that deployment. So we got phone calls where people were saying this report, it used to take so long that I'd go out and get a cup of coffee and come back, and read an article, and do some email, and then finally it would finish. Those reports are running in milliseconds now. It's one click. It's very, very simple, and we've delighted our customers. Now across that journey, we have gone from the simple workloads like VDI to the much more complex workloads around Splunk and Hadoop. And what's really interesting about our Splunk deployment is we're handling over a billion events being logged every day. And the deployment is smaller than what we had with a three-tiered infrastructure.
So when you hear people talk about waste and getting that out and getting to an invisible environment where you're just able to run it, that's what we were able to achieve, both with everything that we're running from our public-facing websites to the back office operations that we're using, which include Splunk and even most recently our Cloudera and Hadoop infrastructure. What that does is it's got 30 crawlers that go out on the internet and start bringing data back. So it comes back with over two terabytes of data every day. And then that environment ingests that data, does work against it, and responds to the business. And that again is something that's smaller than what we had on traditional infrastructure, and it's faster and more stable. >> Got it. And it covers a lot of use cases as well. You want to speak a few words on that? >> So the use cases, we're 90%, 95% deployed on Nutanix, and we're covering all of our use cases. So whether that's a customer-facing app or a back office application. And what our business is doing is it's handling large portfolios of data for Fortune 500 companies and law firms. And these applications are all running with improved stability, reliability, and performance on the Nutanix infrastructure.
So 600% improved productivity, and we were able to actually achieve that. The numbers you just saw on the slide, that went by very, very fast, was we calculated a 40% reduction in total cost of ownership. We've exceeded that. And when we talk about waste, that other number on the board there: when I save the company one hour of maintenance activity or unplanned downtime in a month, and we're now able to do the majority of our maintenance activities without disrupting any of our business solutions, I'm saving $750,000 each time I save that one hour. >> Wow. All right, Karen from CSC. Thank you so much. That was great. Thank you. I mean, you know, some of these data points, frankly, as I started talking to Karen as well as some other customers, are pretty amazing in terms of the genuine value beyond financial value. Kind of like the emotional sort of benefits that good products deliver to some of our customers. And I think that's one of the core things that we take back into engineering: to keep ourselves honest on velocity, quality, even hiring people, and so forth. The more we touch customers' lives, the more we touch our partners' lives, the more it allows us to ensure that we can put ourselves in their shoes, to kind of make sure that we're doing the right thing in terms of the product. So that was the first part, invisible infrastructure. And our goal, as we've always talked about, our true north, is to make sure that this single OS can be an exact replica, a truly modern, thoughtful but original design that brings the power of public cloud, these AWS- or GCP-like architectures, into your mainstream enterprises. And so when we take that to the next level, which is about expanding the scope to go beyond invisible infrastructure to invisible data centers, it starts with a few things.
Obviously, it starts with virtualization and a level of intelligent management, extends to automation, and then, as we'll talk about, we have to embark on encompassing the network. And that's what we'll talk about with Flow. But to start this, let me again go back to one of our core products which is the bedrock of our, you know, opinionated design inside this company, which is Prism and Acropolis. And Prism, as I mentioned, comes with a ton of machine-learning-based intelligence built into the product; in 5.6 we've done a ton of work. In fact, a lot of features are coming out now because PC, Prism Central, you know, has been decoupled from our mainstream release train and will continue to release on its own cadence. And the same thing when you actually flip it to AHV, on its own train. Now AHV: two years ago it was all about can I use AHV for VDI? Can I use AHV for ROBO? Now I'm pretty clear about where you cannot use AHV. If you need memory overcommit, stay with VMware or something. If you need, you know, Metro, stay with another technology; else it's game on, right? And if you really look at the adoption of AHV in the mainstream enterprise, the customers now speak for themselves. These are all examples of large global enterprises with multimillion-dollar ELAs in play that have now been switched over. Like I'll give you a simple example here, and there's lots of these, and I'm sure many of you who are in the audience are in this camp, but when you look at the breakout sessions in the pods, you'll get a sense of this. But I'll give you one simple example. If you look at the online payment company, I'm pretty sure everybody's used this at one time or the other, they had the world's largest private cloud on OpenStack, 21,000 nodes. And they were actually public about it three or four years ago. And in the last year and a half, they put us through rigorous POC testing, scale, hardening, and it's a full-blown AHV-only stack.
And they've started cutting over. Obviously they're not there yet completely, but they're now literally in hundreds of nodes of deployment of Nutanix with AHV as their primary operating system. So it is primetime from a deployment perspective. And with that as the base, no cloud is complete without actually having self-service provisioning that truly drives one-click automation, and can you do that in this consumer-grade design? And Calm was acquired, as you guys know, in 2016. We had a choice of taking Calm. It was reasonably feature complete. It supported multiple clouds. It supported ESX, it supported Brownfield, it supported AHV. I mean, they'd already done the integration with Nutanix even before the acquisition. And we had a choice. The choice was go down the path of DynamicOps or some other products, where you took it for revenue or for acceleration, you plopped it into the ecosystem and sold it as this power-sucking alien on top of our stack, right? Or we took a step back, re-engineered the product, kept some of the core essence like the workflow engine, which was good, the automation, the object model and all, but refactored it to make it look like a natural extension of our operating system. And that's what we did with Calm. And we just launched it in December, and it's been one of our most popular new products, now flying off the shelves. If you saw the number of registrants, I got a notification of this, for the breakout sessions, the number one session that has been preregistered, with over 500 people, the first two sessions are around Calm. And justifiably so, because it lives up to its promise, and it'll take its time to kind of get to all the bells and whistles, all the capabilities that have come through with AHV or Acropolis in the past. But the feature functionality, the product-market fit associated with Calm is dead-on from the feedback that we've received. And so Calm itself is on its own rapid cadence.
We had AWS and AHV in the first release. Three or four months later, we now added ESX support. We added GCP support and a whole bunch of other capabilities, and I think the essence of Calm is, if you can combine Calm along with private cloud automation but also extend it to multi-cloud automation, it really sets Nutanix on its first genuine path towards multi-cloud. But then, as I said, if you really fixate on a software-defined data center message, we're not complete as a full-blown AWS- or GCP-like IaaS stack until we do the last horizon of networking. And you probably heard me say this before. You heard Dheeraj and others talk about it before: our problem in networking isn't the same as in storage. Because the data plane in networking works. Good L2 switches from Cisco, Arista, and so forth. But the real problem in networking is in the control plane. When something goes wrong at a VM level in Nutanix, you're able to identify whether it's a storage problem or a compute problem, but we don't know whether it's a VLAN that's misconfigured, or there've been some packets dropped at the top of the rack. Well, that all ends now with Flow. And with Flow, essentially what we've now done is take the work that we've been doing to create built-in visibility, put in some network automation so that you can actually provision VLANs when you provision VMs, and then augment it with micro-segmentation policies, all built in this easy-to-use, easy-to-consume fashion. But we didn't stop there, because we've been talking about Flow, at least the capabilities, over the last year. We spent significant resources building it. But we realized that we needed an additional thing to augment its value, because the world of applications, especially discovering application topologies, is a heady problem. And if we didn't address that, we wouldn't be fulfilling on this ambition of providing one-click network segmentation. And so that's where Netsil comes in.
Netsil might seem on the surface yet another next generation application performance management tool. But the innovations that came from Netsil started off as a research project at the University of Pennsylvania. And in fact, most of the team right now that's at Nutanix is from the U Penn research group. And they took a really original, fresh look at how do you sit in a network in a scale-out fashion but still reverse engineer the packets that flow through you, and then recreate this application topology. And recreate this not just on Nutanix, but do it seamlessly across multiple clouds. And to talk about the power of Flow augmented with Netsil, let's bring Rajiv back on stage. Rajiv. >> How you doing? >> Okay, so we're going to start with some Netsil stuff, right? >> Yeah, let's talk about Netsil and some of the amazing capabilities this acquisition's bringing to Nutanix. First of all, as you mentioned, Netsil's completely non-invasive. So it installs on the network, it does all its magic from there. There are no host agents, none of the complexity and compatibility issues that entails. It's also monitoring the network at layer seven. So it's actually doing deep packet inspection on all your application data, and can give you insights into services and APIs, which is very important for modern applications and the way they behave. To do all this, of course, performance is key. So Netsil's built around a completely distributed architecture that scales to really large workloads. Very exciting technology. We're going to use it in many different ways at Nutanix. And to give you a flavor of that, let me show you how we're thinking of integrating Flow and Netsil together, so micro-segmentation and Netsil. So to do that, we installed Netsil in one of our Google accounts. And that's what's up here now. It went out there. It discovered all the VMs we're running on that account. It created a map, essentially, of all their interactions, and you can see it's like a Google Maps view.
I can zoom into it. I can look at various things running. I can see lots of HTTP servers over here, some databases. >> Sunil: And it also has stats, right? You can go, it actually-- >> It does. We can take a look at that for a second. There are some stats you can look at right away here. Things like transactions per second and latencies and so on. But if I wanted to micro-segment this application, it's not really clear how to do so. There's no real pattern over here. Taking the Google Maps analogy a little further, this kind of looks like the backstreets of Cairo or something. So let's do this step by step. Let me first filter down to one application. Right now I'm looking at about three or four different applications. And Netsil integrates with the metadata that the clouds provide. So I can search all the tags that I have. So by doing that, I can zoom in on just the financial application. And when I do this, the view gets a little bit simpler, but there's still no real pattern. It's not clear how to micro-segment this, right? And this is where the power of Netsil comes in. This is a fairly naive view. This is what a tool operating at layer four, just looking at ports and TCP traffic, would give you. But by doing deep packet inspection, Netsil can get into the services layer. So instead of grouping these interactions by hostname, let's group them by service. So you go service tier. And now you can see this is a much simpler picture. Now I have some patterns. I have a couple of load balancers, an HAProxy and an Nginx. I have a web application front end. I have some application servers running authentication services, search services, et cetera, a database, and a database replica. I could go ahead and micro-segment at this point. It's quite possible to do it at this point. But this is almost too granular a view. We actually don't usually want to micro-segment at the individual service level.
You think more in terms of application tiers, the tiers that different services belong to. So let me go ahead and group this differently. Let me group this by app tier. And when I do that, a really simple picture emerges. I have a load balancing tier talking to a web application front end tier, an API tier, and a database tier. Four tiers in my application. And this is something I can work with. This is something that I can micro-segment fairly easily. So let's switch over to-- >> Before we do that though, do you guys see how he gave himself the pseudonym Dom Toretto? >> Focus Sunil, focus. >> Yeah, for those guys, you know, that's not the Avengers theme, man, that's the Fast and Furious theme. >> Rajiv: I think they're a year ahead. This is next year's theme. >> Got it, okay. So before we cut over from Netsil to Flow, do we want to say a few words about the power of Flow, and what's available in 5.6? >> Sure, so Flow's been around since the 5.6 release. Actually, some of the functionality came in before that. So it's got visibility into the network. It helps you debug problems with VLANs and so on. We had a lot of orchestration with other third party vendors, with load balancers, with switches, to make provisioning much simpler. And then of course with our most recent release, we GA'ed our micro-segmentation capabilities. And that of course is the most important feature we have in Flow right now. And if you look at how a Flow policy is set up, it looks very similar to what we just saw with Netsil. So we have a load balancer talking to a web app, API, database. It's almost identical to what we saw just a moment ago. So while this policy was created manually, it is something that we can automate. And it is something that we will do in future releases. Right now, of course, it's not been integrated at that level yet. So this was created manually. So one thing you'll notice over here is that the database tier doesn't get any direct traffic from the internet.
All internet traffic goes to the load balancer; only specific services then talk to the database. So this policy right now is in monitoring mode. It's not actually being enforced. So let's see what happens if I try to attack the database. I start a hack against the database. And I have my trusty brute force password script over here. It's trying the most common passwords against the database. And if I happen to choose a dictionary word or left the default passwords on, eventually it will log into the database. And when I go back over here in Flow, what happens is it actually detects that there's now an ongoing flow, a flow that's outside of policy, that's shown up. And it shows this in yellow. So right alongside the policy, I can visualize all the noncompliant flows. This makes it really easy for me now to make decisions: should this flow be part of the policy, or should it not? In this particular case, obviously it should not be part of the policy. So let me just switch from monitoring mode to enforcement mode. I'll apply the policy, give it a second to propagate. The flow goes away. And if I go back to my script, you can see now the socket's timing out. I can no longer connect to the database. >> Sunil: Got it. So that's like one-click segmentation in play right now? >> Absolutely. It's really, really simple. You can compare it to other products in the space. You can't get simpler than this. >> Got it. Why don't we go back and talk a little bit more about, so that's Flow. It's shipping now in 5.6, obviously. It'll come integrated with Netsil functionality, as well as a variety of other enhancements, in the next few releases. But Netsil does more than just simple topology discovery, right? >> Absolutely. So Netsil's actually gathering a lot of metrics from your network, from your hosts; all this goes through a data pipeline. It gets processed over there and then gets captured in a time series database. And then we can slice and dice that in various different ways.
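The monitoring-versus-enforcement behavior in the demo boils down to matching observed flows against an allowlist of tier-to-tier rules: in monitoring mode, out-of-policy flows are merely surfaced (the yellow edge), and in enforcement mode they are dropped. This toy sketch uses invented tier names and is not the actual Flow policy engine, just the shape of the idea.

```python
# Hypothetical allowlist of (source_tier, destination_tier) rules,
# mirroring the four-tier policy shown on stage.
POLICY = {
    ("internet", "load_balancer"),
    ("load_balancer", "web"),
    ("web", "api"),
    ("api", "database"),
}

def evaluate(flows, enforce=False):
    """Monitoring mode reports violations; enforcement mode also drops them."""
    violations = [f for f in flows if f not in POLICY]
    if enforce:
        # This is why the brute-force script's socket starts timing out.
        flows = [f for f in flows if f in POLICY]
    return flows, violations

observed = [("internet", "load_balancer"), ("api", "database"),
            ("internet", "database")]  # the brute-force attempt

_, flagged = evaluate(observed)                 # monitoring: shown in yellow
allowed, _ = evaluate(observed, enforce=True)   # enforcement: flow dropped
print(flagged)  # [('internet', 'database')]
```

The one-click part of the demo is exactly the `enforce=False` to `enforce=True` switch: the policy itself does not change, only whether violations are blocked.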
It can be used for all kinds of insights. So let's see how our application's behaving. So let me say I want to go into the API layer over here. And I instantly get a variety of metrics on how the application's behaving. I get the most requested endpoints. I get the average latency. It looks reasonably good. I get the average latency of the slowest endpoints. If I was having a performance problem, I would know exactly where to go focus on. Right now, things look very good, so we won't focus on that. But scrolling back up, I notice that we have a fairly high error rate happening. We have like 11.35% of our HTTP requests are generating errors, and that deserves some attention. And if I scroll down again, and I see the top five status codes I'm getting, almost 10% of my requests are generating 500 errors, HTTP 500 errors which are internal server errors. So there's something going on that's wrong with this application. So let's dig a little bit deeper into that. Let me go into my analytics workbench over here. And what I've plotted over here is how my HTTP requests are behaving over time. Let me filter down to just the 500 ones. That will make it easier. And I want the 500s. And I'll also group this by the service tier so that I can see which services are causing the problem. And the better view for this would be a bar graph. Yes, so once I do this, you can see that all the errors, all the 500 errors that we're seeing have been caused by the authentication service. So something's obviously wrong with that part of my application. I can go look at whether Active Directory is misbehaving and so on. So very quickly from a broad problem that I was getting a high HTTP error rate. In fact, usually you will discover there's this customer complaining about a lot of errors happening in your application. You can quickly narrow down to exactly what the cause was. >> Got it. 
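The drill-down Rajiv walks through (filter to the HTTP 500s, group them by service tier, spot the outlier) is ordinary slicing over request records. Here is a stdlib-only sketch over made-up log entries; the real Netsil pipeline derives these metrics from deep packet inspection and a time series database, not from an in-memory list.

```python
from collections import Counter

# Made-up request log: (service_tier, http_status) pairs.
requests = [
    ("web", 200), ("auth", 500), ("search", 200), ("auth", 500),
    ("api", 200), ("auth", 500), ("web", 200), ("database", 200),
]

# Filter to the 500s, then group by service tier, as in the demo.
errors_by_tier = Counter(tier for tier, status in requests if status == 500)
error_rate = sum(errors_by_tier.values()) / len(requests)

print(errors_by_tier.most_common(1))  # [('auth', 3)]: auth causes all the 500s
print(f"{error_rate:.1%}")            # 37.5%
```

The value of the workbench is that this grouping narrows a vague symptom ("customers see errors") to a single culprit tier in a couple of queries.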
This is what we mean by hyperconvergence of the network, which is, if you can truly isolate network-related problems and associate them with the rest of the hyperconverged infrastructure, then we've essentially started making real progress towards the next level of hyperconvergence. Anyway, thanks a lot, man. Great job. >> Thanks, man. (audience clapping) >> So to talk about this evolution from invisible infrastructure to invisible data centers, here's another customer of ours that has embarked on this journey. And, you know, it's not just using Nutanix but a variety of other tools to actually fulfill sort of like the ambition of a full-blown cloud stack within a financial organization. And to talk more about that, let me call Vijay onstage. Come on up, Vijay. (rock music) >> Hey. >> Thank you, sir. So Vijay looks way better in real life than in a picture, by the way. >> Except a little bit of gray. >> Unlike me. So tell me a little bit about this cloud initiative. >> Yeah. So we've won the best cloud initiative twice now, hosted by Incisive Media, a large publisher. Basically they host, you know, various buy side, sell side firms, and you can submit projects in various categories. So we've won the best cloud twice now, 2015 and 2017. The 2015 award is when, you know, as part of our private cloud journey, we were laying the foundation for our private cloud, which is 100% based on hyperconverged infrastructure. So that was that award. And then in 2017, we'd kind of built on that foundation and built more developer-centric, next-gen app services like PaaS, CaaS, SDN, SDS, CI/CD, et cetera. So we've built a lot of those services on, and the second award was really related to that. >> Got it. And a lot of this was obviously based on an infrastructure strategy with some guiding principles that you guys had about three or four years ago, if I remember. >> Yeah, this is a great slide. I use it very often. At the core of our infrastructure strategy is how do we run IT as a business?
I talk about this with my teams; they're very familiar with this. That's the mindset that I instill within the teams. The mission, the challenge is the same, which is how do we scale infrastructure while reducing total cost of ownership, improving time to market, improving client experience, and while we're doing that, not lose sight of reliability, stability, and security? That's the mission. Those are some of our guiding principles. Whenever we take on some large technology investments, we take 'em through those lenses. Obviously Nutanix went through those lenses when we invested in you guys many, many years ago. And you guys checked all the boxes. And, you know, initiatives change year on year; the mission remains the same. And more recently, the last few years, we've been focused on converged platforms, converged teams. We've actually reorganized our teams and aligned them closer to the platforms, moving closer to an SRE-like concept. >> And then you've built out a full stack now across compute, storage, networking, all the way up, with various use cases in play? >> Yeah, and we're aggressively moving towards PaaS, CaaS as our method of either developing brand new cloud native applications or even containerizing existing applications. So the stack, you know, obviously built on Nutanix: SDS for software-defined storage; compute and networking, we've got SDN turned on. We've got, again, PaaS and CaaS built on this platform. And then finally, we've hooked our CI/CD tooling onto this. And again, the big picture was always frictionless infrastructure, which we're very close to now. You know, 100% of our code deployments into this environment are automated. >> Got it. And so what's the net-net in terms of obviously the business takeaway here? >> Yeah, so at Northern we don't do tech for tech. There have to be some business benefits, client benefits.
There have to be some outcomes that we measure ourselves against, and these are some great metrics or great ways to look at whether we're getting the outcomes from the investments we're making. So for example, infrastructure scale while reducing total cost of ownership. We're very focused on total cost of ownership. For example, there was a build team that was very focused on building servers, deploying applications. That team's gone down from, I think, 40, 45 people to about 15 people, as one example, one metric. Another metric for reducing TCO is we've been able to absorb additional capacity without increasing operating expenses. So you're actually building capacity and scale within your operating model. So that's another example. Another example, right here you see on the screen. Faster time to market. We've got various types of applications at any given point that we're deploying. There's next-gen cloud native, which go directly on PaaS. But then a majority of the applications still need the traditional IaaS components. The time to market to deploy a complex multi-environment, multi-data center application, we've taken that down by 60%. So we can deliver a server same day, but we can deliver entire environments, you know, add it to backup, add it to DNS, and fully compliant, within a couple of weeks, which is, you know, something we measure very closely. >> Great job, man. I mean, those are compelling results, I think. And in the journey obviously you got promoted a few times. >> Yep. >> All right, congratulations again. >> Thank you. >> Thanks Vijay. >> Hey Vijay, come back here. Actually we forgot our joke. So razzled by his data points there. So you're supposed to wear some shoes, right? >> I know, my inner glitch. I was going to wear those sneakers, but I forgot them at the office, maybe for the right reasons. But the story behind those fluorescent sneakers, I see they're focused on my shoes. But I picked those up two years ago at a Next event, and they're not my style.
I took 'em to my office. They've been sitting in my office for the last couple years. >> Who's received shoes like these, by the way? I'm sure you guys have received shoes like these. There's some real fans there. >> So again, I'm sure many of you liked them. I had 'em in my office. I've offered them to so many of my engineers. Are you size 11? Do you want these? And they're unclaimed? >> So that's the only feature of Nutanix that you-- >> That's the only thing that hasn't worked; other than that, things are going extremely well. >> Good job, man. Thanks a lot. >> Thanks. >> Thanks Vijay. So as we get to the final phase, which is obviously as we embark on this multi-cloud journey and the complexity that comes with it, which Dheeraj hinted towards in his session, you know, we have to take a cautious, thoughtful approach here, because we don't want to over-set expectations, because this will take us five, 10 years to really do a good job like we've done in the first act. And the good news is that the market is also really, really early here. It's just a fact. And so we've taken a tiered approach to it. We'll start the discussion with multi-cloud operations, and we've talked about the stack in the prior session, which is about looking across new clouds. So it's no longer Nutanix, Dell, Lenovo, HP, Cisco as the new quote, unquote platforms. It's Nutanix, Xi, GCP, AWS, Azure as the new platforms. That's how we're designing the fabric going forward. On top of that, you obviously have the hybrid OS, both on the data plane side and control plane side. Then what you're seeing with the advent of Calm doing a marketplace and automation, as well as Beam doing governance and compliance, is the fact that you'll see more and more such capabilities of multi-cloud operations baked into the platform. An example of that is Calm with the new 5.7 release that they had.
The launch supports multiple clouds, both inside and outside, but the fundamental premise of Calm in the multi-cloud use case is to enable you to choose the right cloud for the right workload. That's the automation part. On the governance part, and this we kind of went through in the last half an hour with Dheeraj and Vijay on stage, is something that's even more, if I can call it, you know, first order, because you get the provisioning and operations second. The first order is to say, look, whatever my developers have consumed off public cloud, I just need to first get our arms around it to make sure that, you know, what am I spending, am I secure, and then when I get comfortable, then I am able to actually expand on it. And that's the power of Beam. And both Beam and Calm will be the yin and yang for us in our multi-cloud portfolio. And we'll have new products to complement that down the road, right? But along the way, that's the whole private cloud, public cloud. They're the two ends of the barbell, and over time, and we've been working on Xi for awhile, is this conviction that we've built talking to many customers that there needs to be another type of cloud. And this type of a cloud has to feel like a public cloud. It has to be architected like a public cloud, be consumed like a public cloud, but it needs to be an extension of my data center. It should not require any changes to my tooling. It should not require any changes to my operational infrastructure, and it should not require lift and shift, and that's a super hard problem. And this problem is something that a chunk of our R and D team has been burning the midnight oil on for the last year and a half. Because, look, this is not about taking our current OS, which does a good job of scaling, and plopping it into an Equinix or a third party data center and calling it a hybrid cloud.
This is about rebuilding things in the OS so that we can deliver a true hybrid cloud, but at the same time, give that functionality back on premises so that even if you don't have a hybrid cloud, if you just have your own data centers, you'll still need new services like DR. And if you think about it, what are we doing? We're building a full blown multi-tenant virtual network designed in a modern way. Think about this SDN 2.0 because we have 10 years' worth of looking backwards on how GCP has done it, or how Amazon has done it, and now sort of embodying some of that so that we can actually give it as part of this cloud, but do it in a way that's a seamless extension of the data center, and then at the same time, provide new services that have never been delivered before. Everyone obviously does failover and failback in DR; it just takes months to do it. Our goal is to do it in hours or minutes. But even things such as test. Imagine doing a DR test on demand for your business needs in the middle of the day. And that's the real bar that we've set for Xi that we are working towards in early access later this summer with GA later in the year. And to talk more about this, let me invite some of our core architects working on it, Melina and Rajiv. (rock music) Good to see you guys. >> You're messing up the names again. >> Oh Rajiv, Vinny, same thing, man. >> You need to back up your memory from Xi. >> Yeah, we should. Okay, so what are we going to talk about, Vinny? >> Yeah, exactly. So today we're going to talk about how Xi is pushing the envelope and beyond the state of the art as you were saying in the industry. As part of that, there's a whole bunch of things that we have done starting with taking a private cloud, seamlessly extending it to the public cloud, and then creating a hybrid cloud experience with one-click delight. We're going to show that. We've done a whole bunch of engineering work on making sure the operations and the tooling is identical on both sides.
When you graduate from a private cloud to a hybrid cloud environment, you don't want the environments to be different. So we've copied the environment for you with zero manual intervention. And finally, building on top of that, we are delivering DR as a service with unprecedented simplicity with one-click failover, one-click failback. We're going to show you one-click test today. So Melina, why don't we start with showing how you go from a private cloud, seamlessly extend it to consume Xi. >> Sounds good, thanks Vinny. Right now, you're looking at my Prism interface for my on premises cluster. In one-click, I'm going to be able to extend that to my Xi cloud services account. I'm doing this using my My Nutanix credentials and a password manager. >> Vinny: So here as you notice, all the Nutanix customers we have today, we have created an account for them in Xi by default. So you don't have to log in somewhere and create an account. It's there by default. >> Melina: And just like that we've gone ahead and extended my data center. But let's go take a look at the Xi side and log in again with my My Nutanix credentials. We'll see what we have over here. We're going to be able to see two availability zones, one for on premises and one for Xi right here. >> Vinny: Yeah as you see, using a login account that you already knew, mynutanix.com, and 30 seconds in, you can see that you have a hybrid cloud view already. You have a private cloud availability zone that's your own Prism Central data center view, and then a Xi availability zone. >> Sunil: Got it. >> Melina: Exactly. But of course we want to extend my network connection from on premises to my Xi networks as well. So let's take a look at our options there. We have two ways of doing this. Both are one-click experiences. With Direct Connect, you can create a dedicated network connection between both environments, or with VPN you can use the public internet and a VPN service. Let's go ahead and enable VPN in this environment.
Here we have two options for how we want to enable our VPN. We can bring our own VPN and connect it, or we will deploy a VPN for you on premises. We'll do the option where we deploy the VPN in one-click. >> And this is another small sign or feature that we're building net new as part of Xi, but will be burned into our core Acropolis OS so that we can also be delivering this as a standalone product for on premises deployment as well, right? So that's one of the other things to note as you guys look at the Xi functionality. The goal is to keep the OS capabilities the same on both sides. So even if I'm building a quote, unquote multi data center cloud, but it's just a private cloud, you'll still get all the benefits of Xi but in house. >> Exactly. And on this second step of the wizard, there are a few inputs around how you want the gateway configured, your VLAN information and routing and protocol configuration details. Let's go ahead and save it. >> Vinny: So right now, you know what's happening is we're taking the private network that our customers have on premises and extending it to a multi-tenant public cloud such that our customers can use their IP addresses, the subnets, and bring their own IP. And that is another step towards making sure the operation and tooling is kept consistent on both sides. >> Melina: Exactly. And just while you guys were talking, the VPN was successfully created on premises. And we can see the details right here. You can track details like the status of the connection, the gateway, as well as bandwidth information right in the same UI. >> Vinny: And networking is just the tip of the iceberg of what we've had to work on to make sure that you get a consistent experience on both sides. So Melina, why don't we show some of the other things we've done? >> Melina: Sure, to talk about how we preserve entities from my on-premises to Xi, it's better to use my production environment.
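As an aside, the inputs that the VPN wizard collects in that second step (gateway configuration, VLAN, routing protocol) amount to a small configuration document. A minimal sketch in Python; every field name and value here is hypothetical, not the actual Xi schema:

```python
# Hypothetical shape of the VPN wizard's inputs: deployment choice,
# gateway, VLAN, and routing details. Field names are illustrative only.
vpn_config = {
    "deployment": "nutanix-deployed",   # vs. "bring-your-own"
    "gateway": {"public_ip": "203.0.113.10", "asn": 65001},
    "vlan": 120,
    "routing": {"protocol": "eBGP", "advertise": ["10.10.0.0/16"]},
}

def missing_fields(cfg, required=("deployment", "gateway", "vlan", "routing")):
    """A one-click flow can only proceed once every section is present."""
    return [k for k in required if k not in cfg]
```

The point of the sketch is only that a one-click experience hides a complete config document; the wizard fills the defaults and validates the rest.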
And first thing you might notice is the login screen's a little bit different. But that's because I'm logging in using my ADFS credentials. The first thing we preserved was our users. In production, I'm running AD obviously on-prem. And now we can log in here with the same set of credentials. Let me just refresh this. >> And this is the Active Directory credential that our customers would have. They use it on-premises. And we allow the setting to be set on the Xi cloud services as well, so it's the same set of users that can access both sides. >> Got it. There's always going to be some networking problem onstage. It's meant to happen. >> There you go. >> Just launching it again here. I think it maybe timed out. This is a good sign that we're running on time with this presentation. >> Yeah, yeah, we're running ahead of time. >> Move the demos quicker, then we'll time out. So essentially when you log into Xi, you'll be able to see what are the environment capabilities that we have copied to the Xi environment. So for example, you just saw that the same user is being used to log in. But after the user logs in, you'll be able to see their images, for example, copied to the Xi side. You'll be able to see their policies and categories. You know when you define these policies on premises, you spend a lot of effort and create them. And now when you're extending to the public cloud, you don't want to do it again, right? So we've done a whole lot of syncing mechanisms making sure that the two sides are consistent. >> Got it. And on top of these policies, the next step is to also show capabilities to actually do failover and failback, but also do integrated testing as part of this capability. >> So one is you know just the basic job of making the environments consistent on two sides, but then it's also now talking about the data part, and that's what DR is about.
So if you have a workload running on premises, we can take the data and replicate it using your policies that we've already synced. Once the data is available on the Xi side, at that point, you have to define a run book. And the run book is essentially a recovery plan. And that says okay I already have the backups of my VMs in case of disaster. I can take my recovery plan and hit you know either failover or maybe a test. And then my application comes up. First of all, you'll talk about the boot order for your VMs to come up. You'll talk about network mapping. Like when I'm running on-prem, you're using a particular subnet. You have an option of using the same subnet on the Xi side. >> Melina: There you go. >> What happened? >> Sunil: It's finally working. >> Melina: Yeah. >> Vinny, you can stop talking. (audience clapping) By the way, this is logging into a live Xi data center. We have two regions: West Coast, two data centers; East Coast, two data centers. So everything that you're seeing is essentially coming off the mainstream Xi profile. >> Vinny: Melina, why don't we show the recovery plan. That's the most interesting piece here. >> Sure. The recovery plan is set up to help you specify how you want to recover your applications in the event of a failover or a test failover. And it specifies all sorts of details like the boot sequence for the VMs as well as network mappings. Some of the network mappings are things like the production network I have running on premises and how it maps to my production network on Xi or the test network to the test network. What's really cool here though is we're actually automatically creating your subnets on Xi from your on premises subnets. All that's part of the recovery plan. While we're on the screen, take a note of the .100 IP address. That's a floating IP address that I have set up to ensure that I'm going to be able to access my three tier web app that I have protected with this plan after a failover.
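The run book, or recovery plan, described here is essentially declarative data: a boot order, network mappings, and a floating IP, plus validation before anything fails over. A minimal sketch in Python, with hypothetical field names rather than the real Xi schema:

```python
# Illustrative model of a DR recovery plan: boot stages run in order,
# and each on-prem subnet is mapped to a subnet created on the Xi side.
# Field names are hypothetical, not the real Xi/Prism schema.
RECOVERY_PLAN = {
    "name": "three-tier-web-app",
    "boot_sequence": [               # stage N+1 powers on only after stage N
        ["db-vm"],
        ["app-vm-1", "app-vm-2"],
        ["web-vm"],
    ],
    "network_mappings": {
        "onprem-prod-subnet": "xi-prod-subnet",
        "onprem-test-subnet": "xi-test-subnet",
    },
    "floating_ip": "206.80.146.100",  # public IP to reach the app afterwards
}

def validate_plan(plan):
    """The kind of pre-failover checks the demo runs: every VM appears in
    exactly one boot stage, and every on-prem subnet has a Xi target."""
    seen = set()
    for stage in plan["boot_sequence"]:
        for vm in stage:
            if vm in seen:
                return False, f"{vm} listed twice"
            seen.add(vm)
    for src, dst in plan["network_mappings"].items():
        if not dst:
            return False, f"no Xi subnet mapped for {src}"
    return True, "ok"
```

A test failover then just executes this document against the recovery site instead of the primary, which is why it can be a one-click operation.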
So I'll be able to access it from the public internet really easily from my phone or check that it's all running. >> Right, so given how we make the environment consistent on both sides, now we're able to create a very simple DR experience including one-click failover and failback. But we're going to show you test now. So Melina, let's talk about test because that's one of the most common operations you would do. Like some of our customers do it every month. But usually it's very hard. So let's see what the experience looks like in what we built. >> Sure. Test and failover are both one-click experiences as you've come to expect from Nutanix. You can see it's failing over from my primary location to my recovery location. Now what we're doing right now is we're running a series of validation checks because we want to make sure that you have your network configured properly, and there's other configuration details in place for the test to be successful. Looks like the failover was initiated successfully. Now while that failover's happening though, let's make sure that I'm going to be able to access my three tier web app once it fails over. We'll do that by looking at my network policies that I've configured on my test network. Because I want to access the application from the public internet but only port 80. And if we look here under our policies, you can see I have port 80 open to permit. So that's good. And if I needed to create a new one, I could in one click. But it looks like we're good to go. Let's go back and check the status of my recovery plan. We click in, and what's really cool here is you can actually see the individual tasks as they're being completed from that initial validation test to individual VMs being powered on as part of the recovery plan. >> And to give you guys an idea behind the scenes, the entire recovery plan is actually a set of workflows that are built on Calm's automation engine.
So this is an example of where we're taking some of the power of workflow and automation that Calm has come to be really strong at and burning that into how we actually operationalize many of these workflows for Xi. >> And so great, while you were explaining that, my three tier web app has restarted here on Xi right in front of you. And you can see here there's a floating IP that I mentioned earlier, that .100 IP address. But let's go ahead and launch the console and make sure the application started up correctly. >> Vinny: Yeah, so that .100 IP address is a floating IP that's a publicly visible IP. So it's listed here, 206.80.146.100. And essentially anybody in the audience here can go use your laptop or your cell phone and hit that and start to work. >> Yeah so by the way, just to give you guys an idea, while you guys maybe use the IP to kind of hit it, it's a real set of VMs that we've just failed over from Nutanix's corporate data center into our West region. >> And this is running live on the Xi cloud. >> Yeah, you guys should all go and vote. I'm a little biased towards Xi, so vote for Xi. But all of them are really good features. >> Scroll up a little bit. Let's see where Xi is. >> Oh Xi's here. I'll scroll down a little bit, but keep the... >> Vinny: Yes. >> Sunil: You guys written a blog or something? >> Melina: Oh good, it looks like Xi's winning. >> Sunil: Okay, great job, Melina. Thank you so much. >> Thank you, Melina. >> Melina: Thanks. >> Thank you, great job. Cool and calm under pressure. That's good. So that was Xi. Something that you know we've been doing, in addition to taking say our own extended enterprise public cloud with Xi, you know we do recognize that there are a ton of workloads that are going to be residing on AWS, GCP, Azure. And to sort of really assist in the, try and call it, transformation of enterprises to choose the right cloud for the right workload.
If you guys remember, we actually invested in a tool over the last year which became actually quite like one of those products that took off based on you know groundswell movement. Most of you guys started using it. It's essentially Xtract for VMs. And it was this product that's obviously free. It's a tool. But it enables customers to really save tons of time to actually migrate from legacy environments to Nutanix. So we took that same framework, obviously re-platformed it for the multi-cloud world to kind of solve the problem of migrating from AWS or GCP to Nutanix or vice versa. >> Right, so you know, Sunil as you said, moving from a private cloud to the public cloud is a lift and shift, and it's a hard you know operation. But moving back is not only expensive, it's a very hard problem. None of the cloud vendors provide change block tracking capability. And what that means is when you have to move back from the cloud, you have an extended period of downtime because there's no way of figuring out what's changing while you're moving. So you have to keep it down. So what we've done with our app mobility product is we have made sure that, one, it's extremely simple to move back. Two, that the downtime that you'll have is as small as possible. So let me show you what we've done. >> Got it. >> So here is our app mobility capability. As you can see, on the left hand side we have a source environment and target environment. So I'm calling my AWS environment Asgard. And I can add more environments. It's very simple. I can select AWS and then put in my credentials for AWS. It essentially goes and discovers all the VMs that are running and all the regions that they're running in. Target environment, this is my Nutanix environment. I call it Earth. And I can add a target environment similarly, IP address and credentials, and we do the rest. Right, okay. Now migration plans. I have Bifrost 1 as my migration plan, and this is how migration works.
First you create a plan and then say start seeding. And what it does is takes a snapshot of what's running in the cloud and starts migrating it to on-prem. Once it is on-prem and the difference between the two sides is minimal, it says I'm ready to cutover. At that time, you move it. But let me show you how you'd create a new migration plan. So let me name it, Bifrost 2. Okay so what I have to do is select a region, so US West 1, and target Earth as my cluster. This is my storage container there. And very quickly you can see these are the VMs that are running in US West 1 in AWS. I can select SQL server one and two, go to next. Right now it's looking at the target Nutanix environment and seeing if it has enough space or not. Once that's good, it gives me an option. And this is the step where it enables the Nutanix service of change block tracking overlaid on top of the cloud. There are two options: one is automatic, where you'll give us the credentials for your VMs, and we'll inject our capability there. Or you could do it manually: you could copy the command on either a Windows VM or Linux VM and run it once on the VM. And change block tracking is enabled from then on. Everything is seamless after that. Hit next. >> And while Vinny's setting it up, he said a few things there. I don't know if you guys caught it. One of the hardest problems in enabling seamless migration from public cloud to on-prem which makes it harder than the other way around is the fact that public cloud doesn't have things like change block tracking. You can't get delta copies. So one of the core innovations being built in this app mobility product is to provide that overlay capability across multiple clouds. >> Yeah, and the last step here was to select the target network where the VMs will come up on the Nutanix environment, and this is a summary of the migration plan. You can start it or just save it. I'm saving it because it takes time to do the seeding.
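The seed-then-cutover flow Vinny describes rests on change block tracking: after the initial full copy, only blocks written since the last sync have to ship across, which is what keeps cutover downtime small. A toy Python model of that idea (not the actual product code):

```python
# Toy model of the seed-then-delta migration flow: a change-block-tracking
# (CBT) overlay records which blocks were written after the initial seed,
# so cutover only has to ship the deltas.
class TrackedDisk:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # source disk contents, one value per block
        self.dirty = set()           # block indexes written since last sync

    def write(self, index, value):
        self.blocks[index] = value
        self.dirty.add(index)        # the CBT driver marks the block dirty

    def seed(self):
        """Initial full copy: every block, then start tracking changes."""
        self.dirty.clear()
        return list(self.blocks)

    def delta_sync(self, target):
        """Ship only the dirty blocks; returns how many were shipped."""
        for i in sorted(self.dirty):
            target[i] = self.blocks[i]
        shipped = len(self.dirty)
        self.dirty.clear()
        return shipped

src = TrackedDisk(["a", "b", "c", "d"])
replica = src.seed()                 # long-running full seed
src.write(2, "C2")                   # the VM keeps running and writing
shipped = src.delta_sync(replica)    # cutover: quiesce, then ship 1 block
```

Without CBT, the only safe move is to stop the VM for the duration of a full copy, which is exactly the extended downtime described above.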
I have the other plan which I'll actually show the cutover with. Okay so now this is Bifrost 1. It's ready to cutover. We started it four hours ago. And here you can see there's a SQL server 003. Okay, now I would like to show the AWS environment. As you can see, SQL server 003. This VM is actually running in AWS right now. And if you go to the Prism environment, and if my login works, right? So we can go into the virtual machine view, tables, and you see the VM is not there. Okay, so we go back to this, and we can hit cutover. So this is essentially telling our system, okay now is the time. Quiesce the VM running in AWS, take the last bit of changes that you have to the database, ship it to on-prem, and on-prem now start you know configuring the target VM and bringing it up. So let's go and look at AWS and refresh that screen. And you should see, okay so the SQL server is now stopping. So that means it has quiesced and is stopping the VM there. If you go back and look at the migration plan that we had, it says it's completed. So it has actually migrated all the data to the on-prem side. Go here on-prem, you see the production SQL server is running already. I can click launch console, and let's see. The Windows VM is already booting up. >> So essentially what Vinny just showed was a live cutover of an AWS VM to Nutanix on-premises. >> Yeah, and what we have done. (audience clapping) So essentially, this is about making two things possible, making it simple to migrate from cloud to on-prem, and making it painless so that the downtime you have is very minimal. >> Got it, great job, Vinny. I won't forget your name again. So last step. So to really talk about this, one of our favorite partners and customers has been in the cloud environment for a long time. And you know Jason who's the CTO of Cyxtera. And he'll introduce who Cyxtera is. Most of you guys are probably either using their assets or not without knowing, you know, the new name.
But he is someone who was in the cloud before it was called cloud, as one of the original founders and technologists behind Terremark, and then later as one of the chief architects of VMware's cloud. And then they started this new company about a year or so ago which I'll let Jason talk about. This journey that he's going to talk about is how a partner slash customer is working with us to deliver net new transformations around the traditional industry of colo. Okay, to talk more about it, Jason, why don't you come up on stage, man? (rock music) Thank you, sir. All right so Cyxtera obviously a lot of people don't know the name. Maybe just give a 10 second summary of why you're so big already. >> Sure, so Cyxtera was formed, as you said, about a year ago through the acquisition of the CenturyLink data centers. >> Sunil: Which includes Savvis and a whole bunch of other assets. >> Yeah, there's a long history of those data centers, but we have all of them now as well as the software companies owned by Medina Capital. So we're like the world's biggest startup now. So we have over 50 data centers around the world, about 3,500 customers, and a portfolio of security and analytics software. >> Sunil: Got it, and so you have this strategy of what we're calling revolutionizing colo, delivering a cloud based-- >> Yeah so, colo hasn't really changed a lot in the last 20 years. And to be fair, a lot of what happens in data centers has to have a person physically go and do it. But there are some things that we can simplify and automate. So we want to make things more software driven, so that's what we're doing with the Cyxtera extensible data center or CXD. And to do that, we're deploying software defined networks in our facilities and developing automations so customers can go and provision data center services and the network connectivity through a portal or through REST APIs. >> Got it, and what's different now?
I know there's a whole bunch of benefits with the integrated platform that one would not get in the traditional kind of on demand data center environment. >> Sure. So one of the first services we're launching on CXD is compute on demand, and it's powered by Nutanix. And we had to pick an HCI partner to launch with. And we looked at players in the space. And as you mentioned, there's actually a lot of them, more than I thought. And we had a lot of conversations, did a lot of testing in the lab, and Nutanix really stood out as the best choice. You know Nutanix has a lot of focus on things like ease of deployment. So it's very simple for us to automate deploying compute for customers. So we can use Foundation APIs to go configure the servers, and then we turn those over to the customer which they can then manage through Prism. And something important to keep in mind here is that you know this isn't a managed service. This isn't infrastructure as a service. The customer has complete control over the Nutanix platform. So we're turning that over to them. It's connected to their network. They're using their IP addresses, you know their tools and processes to operate this. So it was really important for the platform we picked to have a really good self-service story for things like you know lifecycle management. So with one-click upgrade, customers have total control over patches and upgrades. They don't have to call us to do it. You know they can drive that themselves. >> Got it. Any other final words around like what do you see for the partnership going forward? >> Well you know I think this would be a great platform for Xi, so I think we should probably talk about that. >> Yeah, yeah, we should talk about that separately. Thanks a lot, Jason. >> Thanks. >> All right, man.
(audience clapping) So as we look at the full journey now between obviously from invisible infrastructure to invisible clouds, you know there is one thing though to take away beyond many updates that we've had so far. And the fact is that everything that I've talked about so far is about completing a full blown true IaaS stack, all the way from compute to storage, to virtualization, containers to network services, and so forth. But every public cloud, a true cloud in that sense, has a full blown layer of services that sit on top either for traditional workloads or for new workloads, whether it be machine-learning, whether it be big data, you know name it, right? And in the enterprise, if you think about it, many of these services are being provisioned or provided through a bunch of our partners. Like we have partnerships with Cloudera for big data and so forth. But then based on some customer feedback and a lot of attention from what we've seen go on in the industry, just like AWS, and GCP, and Azure, it's time for Nutanix to have an opinionated view of the PaaS stack. It's time for us to kind of move up the stack with our own offering that obviously adds value but provides some of our core competencies in data and takes it to the next level. And it's in that sense that we're actually launching Nutanix Era to simplify one of the hardest problems in enterprise IT. And short of saving you from true Oracle licensing, it solves various other Oracle problems. It is about truly simplifying databases, much like what RDS did on AWS. Imagine enterprise RDS on demand, where you can provision and lifecycle manage your database with one click. And to talk about this powerful new functionality, let me invite Bala and John on stage to give you one final demo. (rock music) Good to see you guys. >> Yep, thank you. >> All right, so we've got lots of folks here. They're all anxious to get to the next level. So this demo, really rock it. So what are we going to talk about?
We're going to start with say maybe some database provisioning? Do you want to set it up? >> We have one dream, Sunil, one single dream to pass you off, that is what Nutanix is today for IT apps, we want to recreate that magic for devops and get back those weekends and freedom to DBAs. >> Got it. Let's start with, what, provisioning? >> Bala: Yep, John. >> Yeah, we're going to get in provisioning. So provisioning databases inside the enterprise is a significant undertaking that usually involves a myriad of resources and could take days. It doesn't get any easier after that for the long-term maintenance with things like upgrades and environment refreshes and so on. Bala and team have been working on this challenge for quite awhile now. So we've architected Nutanix Era to cater to these enterprise use cases and make it one-click like you said. And Bala and I are so excited to finally show this to the world. We think it's actually Nutanix's best kept secret. >> Got it, all right man, let's take a look at it. >> So we're going to be provisioning a sales database today. It's a four-step workflow. The first part is choosing our database engine. And since it's our sales database, we want it to be highly available. So we'll do a two-node RAC configuration. From there, it asks us where we want to land this service. We can either land it on an existing service that's already been provisioned, or if we're starting net new or for whatever reason, we can create a new service for it. The key thing here is we're not asking anybody how to do the work, we're asking what work you want done. And the other key thing here is we've architected this concept called profiles. So you tell us how much resources you need as well as what network type you want and what software revision you want. This is actually controlled by the DBAs, and compute administrators, and network administrators, so they can set their standards without a DBA having to get involved each time. >> Sunil: Got it, okay, let's take a look.
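The profiles concept John describes, where administrators predefine the "how" and the requester states only the "what", can be sketched as a simple lookup-and-expand step. All of the names and sizes below are made up for illustration; this is not the Era API:

```python
# Sketch of the "profiles" idea: administrators predefine the "how"
# (sizes, networks, software versions); the requester only states the
# "what". Every name and value here is hypothetical.
PROFILES = {
    "compute": {"small": {"vcpus": 4, "ram_gb": 16},
                "large": {"vcpus": 16, "ram_gb": 64}},
    "network": {"prod": {"vlan": 110}, "dev": {"vlan": 210}},
    "software": {"oracle-12.2": {"engine": "oracle", "version": "12.2"}},
}

def build_request(db_name, compute, network, software, nodes=2):
    """Expand a what-style request into a full provisioning spec."""
    return {
        "name": db_name,
        "nodes": nodes,                       # e.g. 2 for a RAC pair
        "compute": PROFILES["compute"][compute],
        "network": PROFILES["network"][network],
        "software": PROFILES["software"][software],
    }

spec = build_request("sales-db", "large", "prod", "oracle-12.2")
```

The design choice is the separation of duties: the profile tables are owned by the administrators, so the person provisioning never has to know VLANs or sizing rules.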
>> John: So if we go to the next piece here, it's going to personalize their database. The key thing here, again, is that we're not asking you how many data files you want or anything in that regard. So we're going to be provisioning this to Nutanix's best practices. And the key thing there is, just like these PaaS services, you don't have to read dozens of pages of best practice guides, it just does what's best for the platform. >> Sunil: Got it. And so these are a multitude of provisioning steps that normally one would take I guess hours if not days to provision an Oracle RAC database. >> John: Yeah, across multiple teams too. So if you think about the lifecycle especially if you have onshore and offshore resources, I mean this might even be longer than days. >> Sunil: Got it. And then there are a few steps here, and we'll lead into potentially the Time Machine construct too? >> John: Yeah, so since this is a critical database, we want data protection. So we're going to be delivering that through a feature called Time Machines. We'll leave this at the defaults for now, but the key thing to note here is we've got SLAs that deliver both continuous data protection as well as telescoping checkpoints for historical recovery. >> Sunil: Got it. So that's provisioning. We've kicked off Oracle, what, a two-node database and so forth? >> John: Yep, two-node database. So we've got a handful of tasks that this is going to automate. We'll check back in in a few minutes. >> Got it. Why don't we talk about the other aspects then, Bala. Maybe around, one of the things that, you know and I know many of you guys have seen this, is the fact that if you look at databases, especially Oracle but in general even SQL and so forth, look, if you really simplified it to a developer, it should be as simple as I copy my production database, and I paste it to create my own dev instance. And whenever I need it, I need to obviously do it the opposite way, right?
So that was the goal that we set ahead for us to actually deliver this new PaaS service around Era for our customers. So you want to talk a little bit more about it? >> Sure Sunil. If you look at most of the data management functionality, it's pretty much flavors of copy-paste operations on database entities. But the trouble is the seemingly simple, innocuous operations of our daily lives become the most dreaded, complex, long-running, error-prone operations in the data center. So we actually planned to tame this complexity and bring consumer grade simplicity to these operations, also make these clones extremely efficient without compromising the quality of service. And the best part is, the customers can enjoy these services not only for databases running on Nutanix, but also for databases running on third party systems. >> Got it. So let's take a look at this functionality of I guess snapshotting, clone and recovery that you've now built into the product. >> Right. So now if you see, the core feature of this whole product is something we call Time Machine. Time Machine lets the database administrators actually capture the database state to the granularity of seconds and also lets them create clones, refresh them to any point in time, and also recover the databases if the databases are running on the same Nutanix platform. Let's take a look at the demo with the Time Machine. So here is our customer relationship management database which is about 2.3 terabytes. If you see, the Time Machine has been active about four months, and the SLA has been set for continuous data protection of 30 days, and then it slowly tapers off to 30 days of daily backups and weekly backups and so on, so forth. On the right hand side, you will see different colors. The green color is pretty much your continuous data protection region, as we call it. That lets you go back to any point in time to the granularity of seconds within those 30 days.
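The tiered SLA Bala walks through, second-level recovery inside the continuous window and coarser checkpoints beyond it, can be modeled as a small classifier. The window lengths below mirror the 30-day figures from the demo but are otherwise illustrative, not Era's actual retention logic:

```python
def recovery_granularity(snapshot_age_days,
                         continuous_days=30, daily_days=30):
    """Classify what recovery granularity a tiered SLA like the one
    described offers at a given age: second-level recovery inside the
    continuous-protection window, then daily checkpoints, then weekly.
    Window lengths are illustrative defaults."""
    if snapshot_age_days <= continuous_days:
        return "any-second"
    if snapshot_age_days <= continuous_days + daily_days:
        return "daily"
    return "weekly"
```

This is the telescoping idea: the further back you go, the sparser the recovery points, which keeps long retention affordable while the recent window stays fully continuous.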
And then the discrete recovery points let you go back to any snapshot of the backup that is maintained there kind of stuff. In a way, you see this Time Machine is pretty much like your modern day car with self-driving ability. All you need to do is set the goals, and the Time Machine will do whatever is needed to reach up to the goal kind of stuff. >> Sunil: So why don't we quickly do a snapshot? >> Bala: Yeah, sometimes you need to create a snapshot for backup purposes, and Time Machine has manual controls. All you need to do is give it a snapshot name. And then you have the ability to actually persist this snapshot data into a third party or object store so that your durability and global data access requirements are met kind of stuff. So we kick off a snapshot operation. Let's look at what it is doing. If you see the snapshot operation that this is going through, there is a step called quiescing the databases. Basically, we're using application-centric APIs, and here it's actually RMAN of Oracle. We are using Oracle's RMAN to quiesce the database and perform application-consistent storage snapshots with Nutanix technology. Basically we are fusing the application-centric APIs with the Nutanix platform and quiescing it. Just for a data point, if you have to use traditional technology and create a backup for this kind of size, it takes over four to six hours, whereas on Nutanix it's going to be a matter of seconds. So it almost looks like the snapshot is done. This is a fully consistent backup. You can pretty much use it for database restore kind of stuff. Maybe we'll do a clone demo and see how it goes. >> John: Yeah, let's go check it out. >> Bala: So for clone, again through the simplicity of a Command-Z command, all you need to do is pick the time of your choice, maybe around three o'clock in the morning today. >> John: Yeah, let's go with 3:02. >> Bala: 3:02, okay. >> John: Yeah, why not?
>> Bala: You select the time, and all you need to do is click on the clone. And most of the inputs that are needed for the clone process will be defaulted intelligently by us, right? And you have to make two choices, that is: where do you want this clone to be created, with a brand new VM database server, or do you want to place that in your existing server? So we'll go with a brand new server, and then all you need to do is just give the password for your new clone database, and then clone it, kind of stuff. >> Sunil: And this is an example of personalizing the database so a developer can do that. >> Bala: Right. So here is the clone kicking in. And what this is trying to do is actually it's creating a database VM and then registering the database, restoring the snapshot, and then recovering the logs up to three o'clock in the morning, like what we just saw, and then actually giving back the database to the requester, kind of stuff. >> Maybe one final thing, John. Do you want to show us the provisioned database that we kicked off? >> Yeah, it looks like it just finished a few seconds ago. So you can see all the tasks that we were talking about here before, from creating the virtual infrastructure, and provisioning the database infrastructure, and configuring data protection. So I can go access this database now. >> Again, just to highlight this, guys. What we just showed you is an Oracle two node instance provisioned live in a few minutes on Nutanix. And this is something that even in a public cloud, when you go to RDS on AWS or anything like that, you still can't provision Oracle RAC, by the way, right? But that's what you've seen now, and that's what the power of Nutanix Era is. Okay, all right? >> Thank you. >> Thanks. (audience clapping) >> And one final thing: obviously when we're building this, it's built as a PaaS service. It's not meant just for operational benefits. And so one of the core design principles has been around being API first.
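The clone dialog shown here defaults most inputs and asks only for a target (a brand new VM, or an existing server) and a password. A hypothetical request builder capturing that flow might look like the following; all field names are invented for illustration, not Era's schema:

```python
def build_clone_request(source_db, point_in_time, password, target_server=None):
    """Build a point-in-time clone request like the one in the demo:
    pick a time, choose a brand-new VM (the intelligent default) or an
    existing server, and supply a password for the new clone database.
    Field names are hypothetical."""
    if not password:
        raise ValueError("a password for the new clone database is required")
    return {
        "source": source_db,
        "point_in_time": point_in_time,          # e.g. "2018-05-09T03:02:00"
        "create_new_vm": target_server is None,  # default: brand-new server
        "target_server": target_server,
        "password": password,
    }
```

Leaving `target_server` unset mirrors the demo's default of spinning up a brand-new database server VM.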
You want to show that a little bit? >> Absolutely, Sunil. This whole product is built on an API-first architecture. Pretty much what we have seen today, and all the functionality that we've been able to show today, everything is built on REST APIs, and you can pretty much integrate with a ServiceNow architecture and deliver a devops experience for your customers. We do have a plan for a full-fledged self-service portal eventually, and then make it a proper service. >> Got it, great job, Bala. >> Thank you. >> Thanks, John. Good stuff, man. >> Thanks. >> All right. (audience clapping) So with Nutanix Era being this one-click provisioning, lifecycle management powered by APIs, I think what we're going to see is the fact that a lot of the products that we've talked about so far, while you know I've talked about things like Calm, Flow, AHV functionality that have all been released in 5.5, 5.6, a bunch of the other stuff are also coming shortly. So I would strongly encourage you guys to go explore them; you know, most of these products that we've talked about, in fact, all of the products that we've talked about, are going to be in the breakout sessions. We're going to go deep into them in the demos as well as in the pods. So spend some quality time not just on the stuff that's been shipping, but also the stuff that's coming out. And so one thing to keep in mind as a takeaway is that we're doing this all, obviously, with freedom as the goal. But from the product side, it has to be driven by choice, whether the choice is based on platforms, whether it's based on hypervisors, whether it's based on consumption models, and eventually, even though we're starting with the management plane, we'll go to the data plane of how do I actually provide a multi-cloud choice as well. And so when we wrap things up, and we look at the five freedoms that Ben talked about, don't forget the sixth freedom, especially after six to seven p.m.
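Since everything is described as exposed over REST, an integration from, say, a ServiceNow workflow reduces to an authenticated HTTP call. The endpoint path, field names, and auth scheme below are assumptions for illustration, not the documented Era API; the builder returns the request pieces so any HTTP client can send them:

```python
import json

def make_snapshot_call(base_url, db_id, snapshot_name, token):
    """Assemble an HTTP request for a hypothetical 'create snapshot'
    REST endpoint. Returns (method, url, headers, body) so it can be
    handed to any HTTP client or workflow engine. The path, payload
    fields, and bearer-token auth are assumptions, not Era's API."""
    url = f"{base_url}/api/v1/databases/{db_id}/snapshots"
    headers = {
        "Authorization": f"Bearer {token}",  # auth scheme is an assumption
        "Content-Type": "application/json",
    }
    body = json.dumps({"name": snapshot_name})
    return ("POST", url, headers, body)
```

A workflow step would feed the returned tuple into its HTTP action and poll the operation it gets back.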
where the whole goal, as a Nutanix family and extended family, is to make sure we mix it up. Okay, thank you so much, and we'll see you around. (audience clapping) >> PA Announcer: Ladies and gentlemen, this concludes our morning keynote session. Breakouts will begin in 15 minutes. ♪ To do what I want ♪

Published Date : May 9 2018


NVMe: Ready for the Enterprise


 

>> Announcer: From the Silicon Angle Media Office in Boston, Massachusetts. It's theCUBE. Now here's your host, Stu Miniman. >> Hi, I'm Stu Miniman, and welcome to a special theCUBE conversation here in our Boston area studio. Happy to welcome back to the program Danny Cobb, who's with Dell EMC in the CTO office. >> Thanks Stu, great to see you here today. >> Great to see you too. So Danny, we're going to talk about a topic that, like many things in the industry, seems like it happened overnight, but there's been a lot of hard work going on for quite a lot of years, even going back to, heck, when you and I worked together. >> Danny: That's right. >> At a company that used to be called EMC. NVMe, so first of all, just bring everybody up to speed as to what you work on inside the Dell family. >> Danny: Sure, so my responsibility at, now, Dell EMC has been this whole notion of emerging systems. New technologies, new capabilities that are just coming into broad market adoption, broad readiness, technological feasibility, and those kinds of things. And then making sure that as a company, we're prepared for their adoption and inclusion in our product portfolio. So it's a great set of capabilities, a great set of work to be doing, especially if you have a short attention span like I do. >> Danny, I spend a lot of time these days in the open source world. You talk about people moving faster, people trying lots of technologies. You and the company have been doing some really hard work in the industry and in the standards world. What's the importance of standards these days, and bring us back to how this NVMe stuff started. >> So a great way to get everybody up to speed, as you mentioned when you kicked off: NVMe, an overnight success, almost 11 years in the making now. The very first NVMe standard was from about 2007. EMC joined the NVMe consortium in 2008, along with an Austin, Texas computer company called Dell.
So Dell and EMC were both in the front row of defining the NVMe standard, and essentially putting in place a set of standards, a set of architectures, a set of protocols, product adoption capabilities, and compatibility capabilities for the entire industry to follow, starting in 2008. Now you know from our work together that the storage industry likes to make sure that everything's mature, everything works reliably, everything has broad interoperability standards and things like that. So since 2008, we've largely been about how do we continue to build momentum and generate support for a new storage technology that's based on broadly accepted industry standards, in order to allow the entire industry to move forward. Not just to achieve the most out of the flash revolution, but to prepare the industry for coming enhancements in storage class memory. >> Yeah, so storage class memory, and you mentioned things like flash. One thing we've looked at for a long time is that when flash rolled out, there was a lot of adoption on the consumer side first, and then that drove the enterprise piece, but flash today is still done through the SCSI interface with SAS or SATA. And I believe we're finally getting rid of that when we go to NVMe. What some in the industry have called the horrible SCSI stack. >> Danny: That's right. >> So explain to us a little bit about, first, the consumer piece of where this fits first, and how it gets to the enterprise. Where are we in the industry today with that? >> Yeah, so as you pointed out, a number of the new media technologies have actually gained broad acceptance and a groundswell of support starting in the consumer space. The rapid adoption of mobile devices, whether initially iPods and iPhones and things like that, tablets where the more memory you have, the more songs you can carry, the more pictures you can take.
A lot of very virtuous-cycle type things occurred in the consumer space to allow flash to go from a fairly expensive, perhaps niche, technology to broad high volume manufacturing. And with high volume manufacturing comes much lower cost, and so we always knew that flash was fast when we first started working on it at EMC in 2005. It became fast and robust when we shipped it in 2008. It went from fast to robust to affordable with technologies like the move from SLC to MLC, and now TLC flash, and the continuing advances of Moore's law. And so flash has been the beneficiary of high volume consumer economics, along with our friend Moore's law, over a number of years. >> Okay, so on the NVMe piece, your friends down in Round Rock in Dell, they've got not only the storage portfolio, but the consumer side. There are pieces, like, my understanding is NVMe is already in the market for some part of this today, correct? >> That's right. I think one of the very first adoption scenarios for NVMe was in lightweight laptop devices. The storage stack could be more efficient. The fundamental number of gates in silicon required to implement the stack was more efficient. Power was more efficient, so a whole bunch of things that were beneficial to a mobile, high volume client device like an ultra-light, ultra-portable laptop made it a great place to launch the technology. >> Okay, and so bring us to what does that mean then for storage? Is that available in enterprise storage today? >> Danny: Yeah. >> And where is that today, and where are we going to see it in the next year or so? >> So here's the progression that the industry has more or less followed. We went from that high volume, ultra-light laptop device to very inexpensive M.2 devices that could be used in laptops and desktops more broadly, which also gained a fair amount of traction with certain use cases and hyperscalers.
And then as the spec matured, and as the enterprise ecosystem around it matured, broader data integrity type solutions arrived in the silicon itself, along with a number of other things that are bread and butter for enterprise class devices. As those began to emerge, we've now seen NVMe move forward from laptop and client devices, to high volume M.2 devices, to full function, full capability, dual-ported enterprise NVMe devices, really crossing over this year. >> Okay, so that means we're going to see it not only in the consumer pieces, but we should be seeing a real enterprise rollout in, I'm assuming, things like storage arrays, maybe hyperconverged, all the different flavors, in the not too distant future. >> Absolutely right. The people who get paid to forecast these things, when they look into their crystal balls, they've talked about when does NVMe get close enough to its predecessor SAS to make the switchover a no-brainer. And oftentimes you get a performance factor where there's more value, or you get a cost factor where suddenly that becomes the way the game is won. In the case of NVMe versus SAS, both of those situations, value and cost, are more or less a wash right now across the industry. And so there are very few impediments to adoption. Much like a few years ago, there were very few impediments to the adoption of enterprise SSDs versus high performance HDDs, the 15Ks and the 10K HDDs. Once we got to close enough in terms of cost parity, the entire industry went all-flash overnight. >> Yeah, it's a little bit different than, say, the original adoption of flash versus HDD. >> Danny: That's right. >> HDD versus SSD. Remember back, you had to have the algebra sheet. And you said okay, how many devices did I have? What's the power savings that I could get out of that? Plus the performance that I had, and then does this make sense? It seems like this is a much more broadly applicable type of solution that we'll see. >> Danny: Right. >> For much faster adoption.
>> Do you remember those days of a little goes a long way? >> Stu: Yeah. >> And then more is better? And then all must be really good, and so that's where we've come over what seems like a very few years. >> Okay, so we've only been talking about NVMe, the thing I know David Floyer's been looking at a lot from an architectural standpoint. Where we see benefit obviously from NVMe, but NVMe over Fabrics is the thing that has him really excited. If you talk about the architectures, maybe just explain a little bit about what I get with NVMe, and what I'll get added on top with the over Fabrics piece of that. >> Danny: Sure. >> And what's that rollout look like? >> Can I tell you a little story about what I think of as the birth of NVMe over Fabrics? >> Stu: Please. >> Some of your viewers might remember a project at EMC called Thunder. And Thunder was PCI flash with an RDMA over ethernet front end on it. We took that system to Intel Developer Forum as a proof of concept. Around the corner from me was an engineer named Dave Minturn, who's an Intel engineer, who had almost exactly the same software stack up and running, except it was an Intel RDMA-capable NIC and an Intel flash drive, and of course some changes to the Intel processor stack to support the use case that he had in mind. And we started talking, and we realized that we were both counting the number of instructions from a packet arriving across the network to bytes being read or written on the very fast PCIe device. And we realized that there has to be a better way, and so from that day, I think it was September 2013, maybe it was August, we actually started working together on how we could take the benefits of the NVMe standard that exists mapped onto PCIe.
And then map those same parameters as cleanly as we possibly can onto, at that time, ethernet, but also InfiniBand, Fibre Channel, and perhaps some other transports, as a way to get the benefits of the NVMe software stack and build on top of the new high performance capabilities of these RDMA-capable interconnects. So it goes way back to 2013. We moved it into the NVMe standard as a proposal in 2014. And again, three, four years later now, we're starting to see solutions roll out that begin to show the promise that we saw way back then. >> Yeah, and the challenge with networking obviously is, it sounds like you've got a few different transport layers that I can use there, probably a number of different providers. How baked is the standard? Where do things like the interoperability plugfests fit into the mix? When do customers get their hands on it, and what can they expect the rollout to be? >> We're clearly at the beginning of what's about to be a very, I think, long and healthy future for NVMe over Fabrics. I don't know about you, but I was at Flash Memory Summit back in August in Santa Clara, and there were a number of vendors there starting to talk about NVMe over Fabrics basics. FPGA implementations, system-on-chip implementations, software implementations across a variety of stacks. The great thing was, NVMe over Fabrics was a phrase of the entire show. The challenging thing was, probably no two of those solutions interoperated with each other yet. We were still at the running-water-through-the-pipes phase, not really checking for leaks and getting to broad adoption. Broad adoption, I think, comes when we've got a number of vendors, broad interoperability, multi-supplier component availability and those things, that let a number of implementations exist and interoperate, because our customers live in a diverse multi-vendor environment.
So that's what it will take to go from interesting proof of concept technology, which I think is what we're seeing in terms of early customer engagement today, to broad-based deployment in both existing Fibre Channel implementations, and also in some next generation data center implementations, probably beginning next year. >> Okay, so Danny, I talk to a lot of companies out there. Everyone that's involved in this (mumbles) has been talking about NVMe over Fabrics for a couple of years now. From a user standpoint, how are they going to help sort this out? What will differentiate the checkbox of yes, I have something that follows this, from oh wait, this will actually help performance so much better? What works with my environment? Where are the pitfalls, and where are the things that are going to help companies? What's going to differentiate the marketplace? >> As an engineer, we always get into the speeds and the feeds and the weeds on performance and things like that, and those are all true: we can talk about fewer and fewer instructions in the network stack, fewer and fewer instructions in the storage stack. We can talk about more efficient silicon implementations, more affinity for multi-processor, multi-core processing environments, more efficient operating system implementations and things like that. But that's just the performance side. The broader benefits come from beginning to move to more cost effective data center fabric implementations, where I'm not managing an orange wire and a blue wire unless that's really what I want. There's still a number of people who want to manage their Fibre Channel and will run NVMe over that. They get the compatibility that they want, they get the policies that they want, and the switch behavior that they want, and the provisioning model that they want, and all of those things. They'll get that in an NVMe over Fabrics implementation.
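Whether the fabric underneath is Fibre Channel or RDMA over ethernet, the host-side connection flow on Linux has the same shape: name a transport, a target address, and a subsystem NQN. The helper below builds the argv for nvme-cli's `nvme connect` command as a sketch; the addresses and NQN are placeholders, and which transports are available depends on your kernel and nvme-cli version (NVMe/TCP, for instance, arrived later than RDMA):

```python
def nvme_connect_cmd(transport, target_addr, subsystem_nqn, port=4420):
    """Build the argv for connecting to an NVMe over Fabrics target with
    the Linux `nvme` CLI (nvme-cli). Flag names follow nvme-cli's connect
    options; the target address and NQN here are placeholders."""
    if transport not in ("rdma", "tcp", "fc"):
        raise ValueError("unsupported transport")
    return ["nvme", "connect",
            "-t", transport,       # transport type: rdma, tcp, or fc
            "-a", target_addr,     # transport address of the target
            "-s", str(port),       # transport service id (IP port for rdma/tcp)
            "-n", subsystem_nqn]   # NVMe Qualified Name of the subsystem
```

In practice you'd run `nvme discover` against the discovery controller first, then feed the returned subsystem NQN into a command like this.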
A new data center, however, will be able to go, you know what, I'm all in day one on 25, 50, 100 gigabit ethernet as my fundamental connection of choice. I'm going to 400 gigabit ethernet ports as soon as Andy Bechtolsheim or somebody gives them to me, and things like that. And so if that's the data center architecture model that I'm in, that's a fundamental implementation decision that I get to make, knowing that I can run an enterprise grade storage protocol over the top of that, and the industry is ready. My external storage is ready, my servers are ready, and my workloads can get the benefit of that. >> Okay, so if I just step back for a second, NVMe sounds like a lot of it is what we would consider the backend, and NVMe over Fabrics helps with some of the front end. From a customer standpoint, what about their application standpoint? Can they work with everything that they have today? Are there things that they're going to want to do to optimize for that? Or does the storage industry just take care of it for them? What do they think about today in future planning from an application standpoint? >> I think it's a matter of that readiness and what it is going to take. The good news, and this has analogs to the industry change from HDDs to SSDs in the first place, the good news is you can make that switchover today, and in your data management application, your database application, your warehouse, your analytics or whatever, not one line of software changes. The NVMe device shows up in the block stack of your favorite operating system, and you get lower latency, more IOs in parallel, and more CPU back for your application to run, because you don't need it in the storage stack anymore. So you get the benefits of that just by changing over to this new protocol. For applications that then want to optimize for this new environment, you can start thinking about having more IOs in flight in parallel.
You could start thinking about what happens when those IOs are satisfied more rapidly, without as much overhead in interrupt processing and a number of things like that. You could start thinking about what happens when your application goes from hundred-microsecond latencies on IOs, like the flash devices, to 10-microsecond or one-microsecond IOs, perhaps with some of these new storage class memory devices that are out there. Those are the benefits that people are going to see when they start thinking about an all-NVMe stack. Not just being beneficial for existing flash implementations, but being fundamentally required and mandatory to get the benefits of storage class memory implementations. So this whole notion of future-ready was one of the things that was fundamental in how NVMe was initially designed, over 10 years ago. And we're starting to see that long term view pay benefits in the marketplace. >> Any insight from the customer standpoint? Is it certain applications or verticals where this is really going to help? I think back to the move to SSDs. It was David Floyer who just went around the entire news feed. He was like, database, database, database is where we can have the biggest impact. What's NVMe going to impact? >> I think what we always see with these things. First of all, NVMe is probably going to have a very rapid advancement and impact across the industry, much more quickly than the transition from HDD to SSD, so we don't have to go through that phase of a little goes a long way. You can largely make the switch as your ecosystem supports it, as your vendor of choice supports it. You can make that switch and to a large extent have the application be agnostic to that. So that's a really good way to start. The other place is, you and I have had this conversation before: if you take out a cocktail napkin and you draw an equation that says time equals money, that's an obvious place where NVMe and NVMe over Fabrics benefit someone initially.
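The arithmetic behind "more IOs in flight" and lower latency is Little's law: sustained IOPS is roughly queue depth divided by per-IO latency. The helper below makes the hundred-microsecond to 10-microsecond to one-microsecond progression Danny mentions concrete; it's back-of-the-envelope math, not a benchmark:

```python
def iops(queue_depth, latency_us):
    """Little's law estimate: IOs completed per second for a device
    that keeps `queue_depth` IOs in flight at `latency_us` microseconds
    of per-IO latency each."""
    if queue_depth <= 0 or latency_us <= 0:
        raise ValueError("queue depth and latency must be positive")
    return queue_depth * 1_000_000 / latency_us

# At queue depth 32: a 100 us flash IO sustains 320K IOPS, a 10 us IO
# 3.2M, and a 1 us storage-class-memory IO 32M -- which is why a thinner
# software stack matters more and more as the media gets faster.
```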
High speed analytics, real time, high frequency trading, a number of things where more efficiency, my ability to do more work per unit time than yours, gives me a competitive advantage. It makes my algorithms better, exposes my IP in a more advantageous way. Those are wonderful places for these types of emerging technologies to get adopted, because the value proposition is just slam dunk simple. >> Yeah, so running through my head are all the latest buzzwords. In everything at Wikibon, when we did our predictions for this year, data is at the center of all of it. But machine learning, AI, heck, blockchain, Edge computing, all of these things can definitely be affected by that. Is NVMe going to help all of them? >> Oh, machine learning. An incredibly high bandwidth application. A wonderful thing to stream data in, compute on it, get your answers, and things like that. Wonderful benefits for a new squeaky-clean storage stack to run into. The Edge, where oftentimes real time is required, the ability to react to a stimulus and provide a response because of a human safety issue, or a risk management issue, or what have you. Any place where performance lets you get close, or closer, to real time is a win. And the efficiency of NVMe has a significant advantage in those environments. So NVMe is largely able to help the industry be ready just at the time that new processing models are coming in, such as machine learning and artificial intelligence; new data center deployment architectures like the Edge come in, and the new types of telemetry and algorithms that they may be running there. It's really a technology that's arriving just at the time that the industry needs it. >> Yeah, I was reading up on some of the blogs on the Dell sites. Jeff Boudreau said, "We should expect to see things from 2018." Not expecting you to pre-announce anything, but what should we be looking for from Dell and the Dell family in 2018 when it comes to this space? >> We're very bullish on NVMe.
We've been pushing very, very hard in the standards community. Obviously, we have already shipped NVMe for a series of internal use cases in our storage platforms. So we have confidence in the technology, its readiness, the ability of our software stacks to do what they need to do. We have a robust, multi-supplier supply chain ready to go so that we can service our customers and provide them the choice in capacities and capabilities and things like that that are required to bet your business, and long-term supply assurance and things like that. So we're seeing the next year or so be the full transition to NVMe, and we're ready for it. We've been getting ready for a long time. Now the ecosystem is there, and we're predicting very big things in the future. >> Okay, so Danny, you've been working on this for 11 years. Give us just a little bit of insight. What have you learned, what has this group learned from previous transitions? What's excited you the most? Give us a little bit of the sausage making. >> What's been funny about this is, we talk about the initial transition to flash, and just getting to the point where a little goes a long way. That was a three-year journey. We started in 2005, we shipped in 2008. We moved from there. We put flash in arrays as a tier, as a cache, as the places where a little latency, high-performance media adds value, and those things. Then we saw the industry begin to develop into some server-centric storage solutions. You guys have been at the front of forecasting what that market looks like with software-defined storage. We see that in technologies like ScaleIO and vSAN, where the ability to start using the media when it's resident in a server became important. And suddenly that began to grow as a peer to the external storage market. Another market, a SAN alternative, came along with them. Now we're moving even further out, where it seems like we used to ask, why flash? And we did get asked that. Now it's, why not flash? Why don't we move there?
So what we've seen is a combination of things. As we get more and more efficient, low-latency storage protocols, the bottleneck stops being about the network and starts being about something else. As we get more multi-core compute capabilities, and Moore's law continues to tick along, we suddenly have enough compute and enough bandwidth, and the next thing to target is the media. As we get faster and faster, more capable media, such as the move to flash and now the move to storage class memory, again the bottleneck moves away from the media, maybe back to something else in the stack. As I advance compute and media and interconnect, suddenly it becomes beneficial for me to rewrite my application, or re-platform it, and create an entire new set of applications that exploit the current capabilities of the technologies. And so we are in that rinse, lather, repeat cycle right now in the technology. And for guys like you and me who've been doing this for a while, we've seen this movie before. We know how it ends. It actually doesn't end. There are just new technologies and new bottlenecks and new manifestations of Moore's law and Holmes law and Metcalfe's law that come into play here. >> All right, so Danny, any final predictions from you on what we should be seeing? What's the next thing you work on that you'll call victory on soon, right? >> Yes, so I'm starting to lift my eyes a little bit, and we think we see some really good capabilities coming at us from the device physicists in the white coats with the pocket protectors back in the fabs. We're seeing a couple of storage class memories begin to come to market now, led by Intel and Micron's 3D XPoint, but a number of other candidates on the horizon that will take us from this 100 microsecond world to a 10 microsecond world, maybe to a 100 nanosecond world. And you and I will be back here talking about that fairly soon, I predict. >> Excellent, well, Danny Cobb, always a pleasure to catch up with you.
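The "moving bottleneck" pattern described above can be sketched in a few lines: end-to-end IO latency is dominated by the slowest stage in the path, so each generational speedup of the media shifts the pressure to the network or the software stack. The stage latencies below are hypothetical round numbers for illustration, not benchmarks.

```python
# Illustrative sketch of the rinse-lather-repeat bottleneck cycle.
# Stage latencies (microseconds) are assumed values, not measurements.

def bottleneck(stages: dict[str, float]) -> str:
    """Return the name of the slowest stage in the IO path."""
    return max(stages, key=stages.get)

hdd_era   = {"software": 20, "network": 50, "media": 5000}
flash_era = {"software": 20, "network": 50, "media": 100}
scm_era   = {"software": 20, "network": 50, "media": 1}

for name, stages in (("HDD", hdd_era), ("flash", flash_era), ("SCM", scm_era)):
    print(f"{name} era: bottleneck is the {bottleneck(stages)}")
```

With spinning disk the media dwarfs everything; with flash the media and the path are comparable; with storage class memory the network and software stack become the slowest stages, which is exactly why re-platforming the application becomes worthwhile.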
Thanks so much for walking us through all of the pieces. We'll have lots more coverage of this technology and lots more. Check out theCUBE.net. You can see Dell Technologies World and lots of the other shows we'll be back at. Thank you so much for watching theCUBE. (uptempo techno music)

Published Date : Mar 16 2018
