Image Title

Search Results for Vinny Chopra:

Breaking Analysis: UiPath’s Unconventional $PATH to IPO


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> UiPath has had a long, strange trip to IPO. How so you ask? Well, the company was started in 2005. But it's culture, is akin to a frenetic startup. The firm shunned conventions and instead of focusing on a narrow geographic area to prove its product market fit before it started to grow, it aggressively launched international operations prior to reaching unicorn status. Well prior, when it had very little revenue, around a million dollars. Today, more than 60% of UiPath business is outside of the United States. Despite its headquarters being in New York city. There's more, according to recent SEC filings, UiPath total revenue grew 81% last year. But it's free cash flow, is actually positive, modestly. Wait, there's more. The company raised $750 million in a Series F in early February, at a whopping $35 billion valuation. Yet, the implied back of napkin valuation, based on the number of shares outstanding after the offering multiplied by the proposed maximum offering price per share yields evaluation of just under 26 billion. (Dave chuckling) And there's even more to this crazy story. Hello everyone, and welcome to this week's Wikibon CUBE Insights, Powered by ETR. In this Breaking Analysis we'll share our learnings, from sifting through hundreds of pages (paper rustling) of UiPath's red herring. So you didn't have to, we'll share our thoughts on its market, its competitive position and its outlook. Let's start with a question. Mark Roberge, is a venture capitalist. He's a managing director at Stage 2 Capital and he's also a teacher, a professor at the B-School in Harvard. One of his favorite questions that he asks his students and others, is what's the best way to grow a company? And he uses this chart to answer that question. On the vertical axis is customer retention and the horizontal axis is growth to growth rate and you can see he's got modest and awesome and so forth. Now, so I want to let you look at it for a second. What's the best path to growth? Of course you want to be in that green circle. Awesome retention of more than 90% and awesome growth but what's the best way to get there? Should you blitz scale and go for the double double, triple, triple blow it out and grow your go to market team on the horizontal axis or should be more careful and focus on nailing retention and then, and only then go for growth? What do you think? What do you think most VCs would say? What would you say? When you want to maybe run the table, capture the flag before your competitors could get there or would you want to take a more conservative approach? What would Daniel Dines say the CEO of UiPath? Again, I'll let you think about that for a second. Let's talk about UiPath. What did they do? Well, I shared at the top that the company shunned conventions and expanded internationally, very rapidly. Well before it hit escape velocity and they grew like crazy and it got out of control and he had to reign it in, plug some holes, but the growth didn't stop, go. So very clearly based on it's performance and reading through the S1, the company has great retention. It uses a metric called gross retention rate which is at 96 or 97%, very high. Says customers are sticking with it. So maybe that's the right formula go for growth and grow like crazy. Let chaos reign, then reign in the chaos as Andy Grove would say. Go fast horizontally, and you can go vertically. 
Let me tell you what I think Mark Roberge would say, he told me you can do that. But churn is the silent killer of SaaS companies and perhaps the better path is to nail product market fit. And then your retention metrics, before you go into hyperbolic growth mode. There's all science behind this, which may be antithetical to the way many investors want to roll the dice and go for super growth, like go fast or die. Well, it worked for UiPath you might say, right. Well, no. And this is where the story gets even more interesting and long and strange for UiPath. As we shared earlier, UiPath was founded in 2005 out of Bucharest Romania. The company actually started as a software outsourcing startup. It called the company, DeskOver and it built automation libraries and SDKs for companies like Microsoft, IBM and Google and others. It also built automation scripts and developed importantly computer vision technology which became part of its secret sauce. In December 2015, DeskOver changed its name to UiPath and became a Delaware Corp and moved its headquarters to New York City a couple of years later. So our belief is that UiPath actually took the preferred path of Mark Roberge, five ticks North, then five more East. They slow-cooked for the better part of 10 years trying to figure out what market to serve. And they spent that decade figuring out their product market fit. And then they threw gas in the fire. Pretty crazy. All right, let's take a peak (chuckling) at the takeaways from the UiPath S1 the numbers are impressive. 580 million ARR with 65% growth. That asterisk is there because like you, we thought ARR stood for annual recurring revenue. It really stands for annualized renewal run rate. annualized renewal run rate is a metric that is one of UiPath's internal KPIs and are likely communicate that publicly over time. We'll explain that further in a moment. UiPath has a very solid customer base. Nearly 8,000, I've interviewed many of them. They're extremely happy. They have very high retention. They get great penetration into the fortune 500, around 63% of the fortune 500 has UiPath. Most of UiPath business around 70% comes from existing customers. I always say you're going to get more money out of existing customers than new customers but everybody's trying to go out and get new customers. But UiPath I think is taking a really interesting approach. It's their land and expand and they didn't invent that term but I'll come back to that. It kind of reminds me of the early days of Tableau. Actually I think Tableau is an interesting example. Like UiPath, Tableau started out as pretty much a point tool and it had, but it had very passionate customers. It was solving problems. It was simplifying things. And it would have bid into a company and grow and grow. Now the market fundamentals for UiPath are very good. Automation is super hot right now. And the pandemic has created an automation mandate to date and I'll share some data there as well. UiPath is a leader. I'm going to show you the Gartner Magic Quadrant for RPA. That's kind of a good little snapshot. UiPath pegs it's TAM at 60 billion dollars based on some bottoms up calculations and some data from Bain. Pre-pandemic, we pegged it at over 30 billion and we felt that was conservative. Post-pandemic, we think the TAM is definitely higher because of that automation mandate, it's been accelerated. Now, according to the S1, UiPath is going to raise around 1.2 billion. 
And as we said, if that's an implied valuation that is lower than the Series F, so we suspect the Series F investors have some kind of ratchet in there. UiPath needed the cash from its Series F investors. So it took in 750 million in February and its balance sheet in the S1 shows about 474 million in cash and equivalent. So as I say, it needed that cash. UiPath has had significant expense reductions that we'll show you in some detail. And it's brought in some fresh talent to provide some adult supervision around 70% of its executive leadership team and outside directors came to the company after 2019 and the company's S1, it disclosed that it's independent accounting firm identified last year what it called the "material weakness in our internal controls over financial report relating to revenue recognition for the fiscal year ending 2018, caused by a lack of oversight and technical competence within the finance department". Now the company outlined the steps it took to remediate the problem, including hiring new talent. However, we said that last year, we felt UiPath wasn't quite ready to go public. So it really had to get its act together. It was not as we said at the time, the well-oiled machine, that we said was Snowflake under Mike Scarpelli's firm operating guidance. The guy's the operational guru, but we suspect the company wants to take advantage of this mock market. It's a good time to go public. It needs the cash to bolster its balance sheet. And the public offering is going to give it cache in a stronger competitive posture relative to its main new competitor, autumn newbie competitor Automation Anywhere and the big whales like Microsoft and others that aspire and are watching what UiPath is doing and saying, hey we want a piece of that action. Now, one other note, UiPath's CEO Daniel Dines owns 100% of the class B shares of the company and has a 35 to one voting power. So he controls the company, subject of course to his fiduciary responsibilities but if UiPath, let's say it gets in trouble financially, he has more latitude to do secondary offerings. And at the same time, it's insulated from activist shareholders taking over his company. So lots of detail in the S1 and we just wanted to give you some of those highlights. Here are the pretty graphs. If whoever wrote this F1 was a genius. It's just beautiful. As we said, ARR, annualized renewal run rate all it does is it annualizes the invoice amount from subscriptions in the maintenance portion of the revenue. In other words, the parts that are recurring revenue, it excludes revenue from support and perpetual license. Like one-time licenses and services is just kind of the UiPath's and maybe that's some sort of legacy there. It's future is that recurring revenue. So it's pretty similar to what we think of as ARR, but it's not exact. Lots of customers with a growing number of six and seven figure accounts and a dollar-based net retention of 145%. This figure represents the rate of net expansion of the UiPath ARR, from existing listing customers over a 12 month period. Translation. This says UiPath's existing customers are spending more with the company, land and expand and we'll share some data from ETR on that. And as you can see, the growth of 86% CAGR over the past nine quarters, very impressive. Let's talk about some of the fundamentals of UiPath's business. Here's some data from the Brookings Institute and the OECD that shows productivity statistics for the US. The smaller charts in the right are for Germany and Japan. 
And I've shared some similar data before the US showed in the middle there. Showed productivity improvements with the personal productivity boom in the mid to late 90s. And it spilled into the early 2000s. But since then you can see it's dropped off quite significantly. Germany and Japan are also under pressure as are most developed countries. China's labor productivity might show declines but it's level, is at level significantly higher than these countries, April 16th headline of the Wall Street Journal says that China's GDP grew 18% this quarter. So, we've talked about the snapback in post-COVID and the post-isolation economy, but these are kind of one time bounces. But anyway, the point is we're reaching the limits of what humans can do alone to solve some of the world's most pressing challenges. And automation is one key to shifting labor away from these more mundane tasks toward more productive and more important activities that can deliver lasting benefits. This according to UiPath, is its stated purpose to accelerate human achievement, big. And the market is ready to be automated, for the most part. Now the post-isolation economy is increasingly going to focus on automation to drive toward activity as we've discussed extensively, I got to share the RPA Magic Quadrant where nearly everyone's a winner, many people are of course happy. Many companies are happy, just to get into the Magic Quadrant. You can't just, you have to have certain criteria. So that's good. That's what I mean by everybody wins. We've reported extensively on UiPath and Automation Anywhere. Yeah, we think we might shuffle the deck a little bit on this picture. Maybe creating more separation between UiPath and Automation Anywhere and the rest. And from our advantage point, UiPath's IPO is going to either force Automation Anywhere to respond. And I don't know what its numbers are. I don't know if it's ready. I suspect it's not, we'd see that already but I bet you it's trying to get there. Or if they don't, UiPath is going to extend its lead even further, that would be our prediction. Now personally, I would have Pegasystems higher on the vertical. Of course they're not an IPO, RPA specialist, so I kind of get what Gartner is doing there but I think they're executing well. And I'd probably, in a broader context I'd probably maybe drop blue prism down a little bit, even though last year was a pretty good year for the company. And I would definitely have Microsoft looming larger up in the upper left as a challenger more than a visionary in my opinion, but look, Gartner does good work and its analysts are very deep into this stuff, deeper than I am. So I don't want to discount that. It's just how I see it. Let's bring in the ETR data and show some of the backup here. This is a candlestick chart that shows the components of net score, which is spending momentum, however, ETR goes out every quarter. Says you're spending more, you're spending less. They subtract the lesses from the mores and that's net score. It's more complicated than that, but that's that blue line that you see in the top and yes it's trending downward but it's still highly elevated. We'll talk about that. The market share is in the yellow line at the bottom there. That green represents the percentage of customers that are spending more and the reds are spending less or replacing. That gray is flat. And again, even though UiPath's net score is declining, it's that 61%, that's a very elevated score. Anything over 40% in our view is impressive. 
So it's, UiPath's been holding in the 60s and 70s percents over the past several years. That's very good. Now that yellow line market share, yes it dips a bit, but again it's nuanced. And this is because Microsoft is so pervasive in the data stat. It's got so many mentions that it tends to somewhat overwhelm and skew these curves. So let's break down net score a little bit. Here's another way to look at this data. This is a wheel chart we show this often it shows the components of net score and what's happening here is that bright red is defection. So look at it, it's very small that wouldn't be churn. It's tiny. Remember that it's churn is the killer for software companies. And so that forest green is existing customers spending more at 49%, that's big. That lime green is new customers. So again, it's from the S1, 70% of UiPath's revenue comes from existing customers. And this really kind of underscores that. Now here's more evidence in the ETR data in terms of land and expand. This is a snapshot from the January survey and it lines up UiPath next to its competitors. And it cuts the data just on those companies that are increasing spending. It's so that forest green that we saw earlier. So what we saw in Q1 was the pace of new customer acquisition for UiPath was decelerating from previous highs. But UiPath, it shows here is outpacing its competition in terms of increasing spend from existing customers. So we think that's really important. UiPath gets very high scores in terms of customer satisfaction. There's, I've talked to many in theCUBE. There's places on the web where we have customer ratings. And so you want to check that out, but it'll confirm that the churn is low, satisfaction is high. Yeah, they get dinged sometimes on pricing. They get dinged sometimes, lately on service cause they're growing so fast. So, maybe they've taken the eye off the ball in a couple of counts, but generally speaking clients are leaning in, they're investing heavily. They're creating centers of excellence around RPA and automation, and UiPath is very focused on that. Again, land and expand. Now here's further evidence that UiPath has a strong account presence, even in accounts where its competitors are presence. In the 149 shared accounts from the Q1 survey where UiPath, Automation Anywhere and Microsoft have a presence, UiPath's net score or spending velocity is not only highly elevated, it's relative momentum, is accelerating compared to last year. So there's some really good news in the numbers but some other things stood out in the S1 that are concerning or at least worth paying attention to. So we want to talk about that. Here is the income statement and look at the growth. The company was doing like 1 million dollars in 2015 like I said before. And when it started to expand internationally it surpassed 600 million last year. It's insane growth. And look at the gross profit. Gross margin is almost 90% because revenue grew so rapidly. And last year, its cost went down in some areas like its services, less travel was part of that. Now jump down to the net loss line. And normally you would expect a company growing at this rate to show a loss. The street wants growth and UiPath is losing money, but it's net loss went from 519 million, half a billion down to only 92 million. And that's because the operating expenses went way down. 
Now, again, typically a company growing at this rate would show corresponding increases in sales and marketing expense, R&D and even G&A but all three declined in the past 12 months. Now reading the notes, there was definitely some meaningful savings from no travel and canceled events. UiPath has great events around the world. In fact theCUBE, Knock Wood is going to be at its event in October, in Las Vegas at the Bellagio . So we're stoked for that. But, to drop expenses that precipitously with such high growth, is kind of strange. Go look at Snowflake's income statement. They're in hyper-growth as well. We like to compare it to Snowflake is a very well-run company and it's in hyper-growth mode, but it's sales and marketing and R&D and G&A expense lines. They're all growing along with that revenue. Now, perhaps they're growing at a slower rate. Perhaps the percent of revenue is declining as it should as they achieve operating leverage but they're not shrinking in absolute dollar terms as shown in the UiPath S1. So either UiPath has applied some magic automation mojo to it's business (chuckling). Like magic beans or magic grits with my cousin Vinny. Maybe it has found the Holy grail of operating leverage. It's a company that's all about automation or the company was running way too hot on the expense side and had a cut and clean up its income statement for the IPO and conserve some cash. Our guess is the latter but maybe there's a combination there. We'll give him the benefit of the doubt. And just to add a bit more to this long, strange trip. When have you seen an explosive growth company just about to go public, show positive cashflow? Maybe it's happened, but it's rare in the tech and software business these days. Again, go look at companies like Snowflake. They're not showing positive cashflow, not yet anyway. They're growing and trying to run the table. So you have to ask why is UiPath operating this way? And we think it's because they were so hot and burning cash that they had to reel things in a little bit and get ready to IPO. It's going to be really interesting to see how this stock reacts when it does IPO. So here's some things that we want you to pay attention to. We have to ask. Is this IPO, is it window dressing? Or did UiPath again uncover some new productivity and operating leverage model. I doubt there's anything radically new here. This company doesn't want to miss the window. So I think it said, okay, let's do this. Let's get ready for IPO. We got to cut expenses. It had a lot of good advisors. It surrounded itself with a new board. Extended that board, new management, and really want to take advantage of this because it needs the cash. In addition, it really does want to maintain its lead. It's got Automation Anywhere competing with it. It's got Microsoft looming large. And so it wants to continue to lead. It's made some really interesting acquisitions. It's got very strong vision as you saw in the Gartner Magic Quadrant and obviously it's executing well but it's really had to tighten things up. So we think it's used the IPO as a fortune forcing function to really get its house in order. Now, will the automation mandate sustain? We think it will. The forced match to digital worked, it was effective. 
It wasn't pleasant, but even in a downturn we think it will confer advantage to automation players and particularly companies like UiPath that have simplified automation in a big way and have done a great job of putting in training, great freemium model and has a culture that is really committed to the future of humankind. It sounds ambitious and crazy but talk to these people, you'll see it's true. Pricing, UiPath had to dramatically expand or did dramatically expand its portfolio and had to reprice everything. And I'm not so worried about that. I think it'll figure that pricing out for that portfolio expansion. My bigger concern is for SaaS companies in general. I don't like SaaS pricing that has been popularized by Workday and ServiceNow, and Salesforce and DocuSign and all these companies that essentially lock you in for a year or two and basically charge you upfront. It's really is a one-way street. You can't dial down. You can only dial up. It's not true Cloud pricing. You look at companies like Stripe and Datadog and Snowflake. It is true Cloud pricing. It's consumption pricing. I think the traditional SaaS pricing model is flawed. It's very unfairly weighted toward the vendors and I think it's going to change. Now, the reason we put cloud on the chart is because we think Cloud pricing is the right way to price. Let people dial up and dial down, let them cancel anytime and compete on the basis of your product excellence. And yeah, give them a price concession if they do lock in. But the starting point we think should be that flexibility, pay by the drink. Cancel anytime. I mentioned some companies that are doing that as well. If you look at the modern SaaS startups and the forward-thinking VCs they're really pushing their startups to this model. So we think over time that the term lock-in model is going to give way to true consumption-based pricing and at the clients option, allow them to lock-in for a better price, way better model. And UiPath's Cloud revenue today is minimal but over time, we think it's going to continue to grow that cloud. And we think it will force a rethink in pricing and in revenue recognition. So watch for that. How is the street going to react to Daniel Dines having basically full control of the company? Generally, we feel that that solid execution if UiPath can execute is going to outweigh those concerns. In fact, I'm very confident that it will. We'll see, I kind of like what the CEO says has enough mojo to say (chuckling) you know what, I'm not going to let what happened to for instance, EMC happen to me. You saw Michael Dell do that. You saw just this week they're spinning out VMware, he's maintaining his control. VMware Dell shareholders get get 40.44 shares for every Dell share they're holding. And who's the biggest shareholder? Michael Dell. So he's, you got two companies, one chairman. He's controlling the table. Michael Dell beat the great Icahn. Who beats Carl Icahn? Well, Michael Dell beats Carl Icahn. So Daniel Dines has looked at that and says, you know what? I'm not just going to give up my company. And the reason I like that with an if, is that we think will allow the company to focus more on the long-term. The if is, it's got to execute otherwise it's so much pressure and look, the bottom line is that UiPath has really favorable market momentum and fundamentals. But it is signing up for the 90 day short clock. The fact that the CEO has control again means they can look more long term and invest accordingly. 
Oftentimes that's easier said than done. It does come down to execution. So it is going to be fun to watch (chuckling). That's it for now, thanks to the community for your comments and insights and really always appreciate your feedback. Remember, I publish each week on Wikibon.com and siliconangle.com and these episodes are all available as podcasts. All you got to do is search for the Breaking Analysis podcast. You can always connect with me on Twitter @dvellante or email me at david.vellante@siliconangle.com or comment on my LinkedIn posts. And we'll see you in clubhouse. Follow me and get notified when we start a room, which we've been doing with John Furrier and Sarbjeet Johal and others. And we love to riff on these topics and don't forget, please check out etr.plus for all the survey action. This is Dave Vellante, for theCUBE Insights Powered by ETR. Be well everybody. And we'll see you next time. (gentle upbeat music)

Published Date : Apr 17 2021

SUMMARY :

This is Breaking Analysis And the market is ready to be automated,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Mark RobergePERSON

0.99+

OECDORGANIZATION

0.99+

UiPathORGANIZATION

0.99+

2015DATE

0.99+

Dave VellantePERSON

0.99+

Brookings InstituteORGANIZATION

0.99+

IcahnPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Daniel DinesPERSON

0.99+

Andy GrovePERSON

0.99+

December 2015DATE

0.99+

2005DATE

0.99+

FebruaryDATE

0.99+

35QUANTITY

0.99+

SnowflakeORGANIZATION

0.99+

DatadogORGANIZATION

0.99+

New York CityLOCATION

0.99+

two companiesQUANTITY

0.99+

Mike ScarpelliPERSON

0.99+

96QUANTITY

0.99+

sixQUANTITY

0.99+

Michael DellPERSON

0.99+

JanuaryDATE

0.99+

last yearDATE

0.99+

April 16thDATE

0.99+

IBMORGANIZATION

0.99+

Las VegasLOCATION

0.99+

1 million dollarsQUANTITY

0.99+

New YorkLOCATION

0.99+

100%QUANTITY

0.99+

81%QUANTITY

0.99+

86%QUANTITY

0.99+

GartnerORGANIZATION

0.99+

145%QUANTITY

0.99+

OctoberDATE

0.99+

United StatesLOCATION

0.99+

BostonLOCATION

0.99+

$750 millionQUANTITY

0.99+

Sarbjeet JohalPERSON

0.99+

97%QUANTITY

0.99+

John FurrierPERSON

0.99+

$35 billionQUANTITY

0.99+

60 billion dollarsQUANTITY

0.99+

a yearQUANTITY

0.99+

519 millionQUANTITY

0.99+

18%QUANTITY

0.99+

SECORGANIZATION

0.99+

hundreds of pagesQUANTITY

0.99+

half a billionQUANTITY

0.99+

david.vellante@siliconangle.comOTHER

0.99+

Renaud Gaubert, NVIDIA & Diane Mueller, Red Hat | KubeCon + CloudNativeCon NA 2019


 

>>Live from San Diego, California It's the Q covering Koopa and Cloud Native Cot brought to you by Red Cloud, Native Computing Pounding and its ecosystem March. >>Welcome back to the Cube here at Q. Khan Club native Khan, 2019 in San Diego, California Instrumental in my co host is Jon Cryer and first of all, happy to welcome back to the program. Diane Mueller, who is the technical of the tech lead of cloud native technology. I'm sorry. I'm getting the wrong That's director of community development Red Hat, because renew. Goodbye is the technical lead of cognitive technologies at in video game to the end of day one. I've got three days. I gotta make sure >>you get a little more Red Bull in the conversation. >>All right, well, there's definitely a lot of energy. Most people we don't even need Red Bull here because we're a day one. But Diane, we're going to start a day zero. So, you know, you know, you've got a good group of community of geeks when they're like Oh, yeah, let me fly in a day early and do like 1/2 day or full day of deep dives. There So the Red Hat team decided to bring everybody on a boat, I guess. >>Yeah. So, um, open ships Commons gathering for this coup con we hosted at on the inspiration Hornblower. We had about 560 people on a boat. I promised them that it wouldn't leave the dock, but we deal still have a little bit of that weight going on every time one of the big military boats came by. And so people were like a little, you know, by the end of the day, but from 8 a.m. in the morning till 8 p.m. In the evening, we just gathered had some amazing deep dives. There was unbelievable conversations onstage offstage on we had, ah, wonderful conversation with some of the new Dev ops folks that have just come on board. That's a metaphor for navigation and Coop gone. And and for events, you know, Andrew Cliche for John Willis, the inevitable Crispin Ella, who runs Open Innovation Labs, and J Bloom have all just formed the global Transformation Office. I love that title on dhe. They're gonna be helping Thio preach the gospel of Cultural Dev ops and agile transformation from a red hat office From now going on, there was a wonderful conversation. I felt privileged to actually get to moderate it and then just amazing people coming forward and sharing their stories. It was a great session. Steve Dake, who's with IBM doing all the SDO stuff? Did you know I've never seen SDO done so well, Deployment explains so well and all of the contents gonna be recorded and up on Aaron. We streamed it live on Facebook. But I'm still, like reeling from the amount of information overload. And I think that's the nice thing about doing a day zero event is that it's a smaller group of people. So we had 600 people register, but I think was 560 something. People show up and we got that facial recognition so that now when they're traveling through the hallways here with 12,000 other people, that go Oh, you were in the room. I met you there. And that's really the whole purpose for comments. Events? >>Yeah, I tell you, this is definitely one of those shows that it doesn't take long where I say, Hey, my brain is full. Can I go home. Now. You know I love your first impressions of Q Khan. Did you get to go to the day zero event And, uh, what sort of things have you been seeing? So >>I've been mostly I went to the lightning talks, which were amazing. Anything? Definitely. There. A number of shout outs to the GPU one, of course. Uh, friend in video. But I definitely enjoyed, for example, of the amazing D. 
M s one, the one about operators. And generally all of them were very high quality. >>Is this your first Q? Khan, >>I've been there. I've been a year. This is my third con. I've been accused in Europe in the past. Send you an >>old hat old hand at this. Well, before we get into the operator framework and I wanna love to dig into this, I just wanted to ask one more thought. Thought about open shift, Commons, The Commons in general, the relationship between open shift, the the offering. And then Okay, the comments and okay, D and then maybe the announcement about about Okay. Dee da da i o >>s. Oh, a couple of things happened yesterday. Yesterday we dropped. Okay, D for the Alfa release. So anyone who wants to test that out and try it out it's an all operators based a deployment of open shift, which is what open ship for is. It's all a slightly new architectural deployment methodology based on the operator framework, and we've been working very diligently. Thio populate operator hub dot io, which is where all of the upstream projects that have operators like the one that Reynolds has created for in the videos GP use are being hosted so that anyone could deploy them, whether on open shift or any kubernetes so that that dropped. And yesterday we dropped um, and announced Open Sourcing Quay as project quay dot io. So there's a lot of Io is going on here, but project dia dot io is, um, it's a fulfillment, really, of a commitment by Red Hat that whenever we do an acquisition and the poor folks have been their acquired by Cora West's and Cora Weston acquired by Red Hat in an IBM there. And so in the interim, they've been diligently working away to make the code available as open source. And that hit last week and, um, to some really interesting and users that are coming up and now looking forward to having them to contribute to that project as well. But I think the operator framework really has been a big thing that we've been really hearing, getting a lot of uptake on. It's been the new pattern for deploying applications or service is on getting things beyond just a basic install of a service on open shift or any kubernetes. And that's really where one of the exciting things yesterday on we were talking, you know, and I were talking about this earlier was that Exxon Mobil sent a data scientist to the open ship Commons, Audrey Resnick, who gave this amazing presentation about Jupiter Hub, deeper notebooks, deploying them and how like open shift and the advent of operators for things like GP use is really helping them enable data scientists to do their work. Because a lot of the stuff that data signs it's do is almost disposable. They'll run an experiment. Maybe they don't get the result they want, and then it just goes away, which is perfect for a kubernetes workload. But there are other things you need, like a Jeep use and work that video has been doing to enable that on open shift has been just really very helpful. And it was It was a great talk, but we were talking about it from the first day. Signs don't want to know anything about what's under the hood. They just want to run their experiments. So, >>you know, let's like to understand how you got involved in the creation of the operator. >>So generally, if we take a step back and look a bit at what we're trying to do is with a I am l and generally like EJ infrastructure and five G. We're seeing a lot of people. They're trying to build and run applications. 
Whether it's in data Center at the and we're trying to do here with this operator is to bring GPS to enterprise communities. And this is what we're working with. Red Hat. And this is where, for example, things like the op Agrestic A helps us a lot. So what we've built is this video Gee, few operator that space on the upper air sdk where it wants us to multiple phases to in the first space, for example, install all the components that a data scientist were generally a GPU cluster of might want to need. Whether it's the NVIDIA driver, the container runtime, the community's device again feast do is as you go on and build an infrastructure. You want to be able to have the automation that is here and, more importantly, the update part. So being able to update your different components, face three is generally being able to have a life cycle. So as you manage multiple machines, these are going to get into different states. Some of them are gonna fail, being able to get from these bad states to good states. How do you recover from them? It's super helpful. And then last one is monitoring, which is being able to actually given sites dr users. So the upper here is decay has helped us a lot here, just laying out these different state slips. And in a way, it's done the same thing as what we're trying to do for our customers. The different data scientists, which is basically get out of our way and allow us to focus on core business value. So the operator, who basically takes care of things that are pretty cool as an engineer I lost due to your election. But it doesn't really help me to focus on like my core business value. How do I do with the updates, >>you know? Can I step back one second, maybe go up a level? The problem here is that each physical machine has only ah limited number of NVIDIA. GPU is there and you've got a bunch of containers that maybe spawning on different machines. And so they have to figure out, Do I have a GPU? Can I grab one? And if I'm using it, I assume I have to reserve it and other people can't use and then I have to give it up. Is that is that the problem we're solving here? So this is >>a problem that we've worked with communities community so that like the whole resource management, it's something that is integrated almost first class, citizen in communities, being able to advertise the number of deep, use their your cluster and used and then being able to actually run or schedule these containers. The interesting components that were also recently added are, for example, the monitoring being able to see that a specific Jupiter notebook is using this much of GP utilization. So these air supercool like features that have been coming in the past two years in communities and which red hat has been super helpful, at least in these discussions pushing these different features forward so that we see better enterprise support. Yeah, >>I think the thing with with operators and the operator lifecycle management part of it is really trying to get to Day two. So lots of different methodologies, whether it's danceable or python or job or or UH, that's helm or anything else that can get you an insult of a service or an application or something. And in Stan, she ate it. But and the operator and we support all of that with SD case to help people. 
But what we're trying to do is bridge the to this day to stuff So Thea, you know, to get people to auto pilot, you know, and there's a whole capacity maturity model that if you go to operator hab dot io, you can see different operators are a different stages of the game. So it's been it's been interesting to work with people to see Theo ah ha moment when they realize Oh, I could do this and then I can walk away. And then if that pod that cluster dies, it'll just you know, I love the word automatically, but they, you know, it's really the goal is to help alleviate the hands on part of Day two and get more automation into the service's and applications we deploy >>right and when they when they this is created. Of course it works well with open shift, but it also works for any kubernetes >>correct operator. HAB Daddio. Everything in there runs on any kubernetes, and that's really the goal is to be ableto take stuff in a hybrid cloud model. You want to be able to run it anywhere you want, so we want people to be unable to do it anywhere. >>So if this really should be an enabler for everything that it's Vinny has been doing to be fully cloud native, Yes, >>I think completely arable here is this is a new attack. Of course, this is a bit there's a lot of complexity, and this is where we're working towards is reducing the complexity and making true that people there. Dan did that a scientist air machine learning engineers are able to focus on their core business. >>You watch all of the different service is in the different things that the data scientists are using. They don't I really want to know what's under under the hood. They would like to just open up a Jupiter Hub notebook, have everything there. They need, train their models, have them run. And then after they're done, they're done and it goes away. And hopefully they remember to turn off the Jeep, use in the woods or wherever it is, and they don't keep getting billed for it. But that's the real beauty of it is that they don't have to worry so much anymore about that. And we've got a whole nice life cycle with source to image or us to I. And they could just quickly build on deploy its been, you know, it's near and dear to my heart, the machine learning the eyesight of stuff. It is one of the more interesting, you know, it's the catchy thing, but the work was, but people are really doing it today, and it's been we had 23 weeks ago in San Francisco, we had a whole open ship comments gathering just on a I and ML and you know, it was amazing to hear. I think that's the most redeeming thing or most rewarding thing rather for people who are working on Kubernetes is to have the folks who are doing workloads come and say, Wow, you know, this is what we're doing because we don't get to see that all the time. And it was pretty amazing. And it's been, you know, makes it all worthwhile. So >>Diane Renaud, thank you so much for the update. Congratulations on the launch of the operators and look forward to hearing more in the future. >>All right >>to >>be here >>for John Troy runs to minimum. More coverage here from Q. Khan Club native Khan, 2019. Thanks for watching. Thank you.

Published Date : Nov 20 2019

SUMMARY :

Koopa and Cloud Native Cot brought to you by Red Cloud, California Instrumental in my co host is Jon Cryer and first of all, happy to welcome back to the program. There So the Red Hat team decided to bring everybody on a boat, And that's really the whole purpose for comments. Did you get to go to the day zero event And, uh, what sort of things have you been seeing? But I definitely enjoyed, for example, of the amazing D. I've been accused in Europe in the past. The Commons in general, the relationship between open shift, And so in the interim, you know, let's like to understand how you got involved in the creation of the So the operator, who basically takes care of things that Is that is that the problem we're solving here? added are, for example, the monitoring being able to see that a specific Jupiter notebook is using this the operator and we support all of that with SD case to help people. Of course it works well with open shift, and that's really the goal is to be ableto take stuff in a hybrid lot of complexity, and this is where we're working towards is reducing the complexity and It is one of the more interesting, you know, it's the catchy thing, but the work was, Congratulations on the launch of the operators and look forward for John Troy runs to minimum.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Audrey ResnickPERSON

0.99+

Andrew ClichePERSON

0.99+

Diane MuellerPERSON

0.99+

Steve DakePERSON

0.99+

IBMORGANIZATION

0.99+

Jon CryerPERSON

0.99+

Exxon MobilORGANIZATION

0.99+

Diane RenaudPERSON

0.99+

EuropeLOCATION

0.99+

John TroyPERSON

0.99+

San FranciscoLOCATION

0.99+

1/2 dayQUANTITY

0.99+

Red HatORGANIZATION

0.99+

San Diego, CaliforniaLOCATION

0.99+

firstQUANTITY

0.99+

J BloomPERSON

0.99+

DianePERSON

0.99+

2019DATE

0.99+

Open Innovation LabsORGANIZATION

0.99+

yesterdayDATE

0.99+

Red CloudORGANIZATION

0.99+

560QUANTITY

0.99+

NVIDIAORGANIZATION

0.99+

600 peopleQUANTITY

0.99+

three daysQUANTITY

0.99+

John WillisPERSON

0.99+

8 a.m.DATE

0.99+

Crispin EllaPERSON

0.99+

JeepORGANIZATION

0.99+

San Diego, CaliforniaLOCATION

0.99+

Cora WestORGANIZATION

0.99+

YesterdayDATE

0.99+

last weekDATE

0.99+

SDOTITLE

0.99+

DanPERSON

0.99+

8 p.m.DATE

0.98+

23 weeks agoDATE

0.98+

first impressionsQUANTITY

0.98+

one secondQUANTITY

0.98+

Q. Khan ClubORGANIZATION

0.98+

oneQUANTITY

0.98+

RenauPERSON

0.98+

Red BullORGANIZATION

0.98+

ReynoldsPERSON

0.97+

AaronPERSON

0.97+

Day twoQUANTITY

0.97+

MarchDATE

0.96+

third con.QUANTITY

0.96+

first spaceQUANTITY

0.96+

first dayQUANTITY

0.95+

VinnyPERSON

0.95+

Cora WestonORGANIZATION

0.94+

ThioPERSON

0.94+

CloudORGANIZATION

0.93+

FacebookORGANIZATION

0.92+

first classQUANTITY

0.92+

todayDATE

0.9+

about 560 peopleQUANTITY

0.9+

JupiterLOCATION

0.89+

each physical machineQUANTITY

0.88+

12,000 otherQUANTITY

0.88+

day zeroQUANTITY

0.88+

D. MPERSON

0.87+

CloudNativeCon NA 2019EVENT

0.87+

d GaubertPERSON

0.87+

TheaPERSON

0.86+

pythonTITLE

0.84+

Native Computing PoundingORGANIZATION

0.83+

a dayQUANTITY

0.79+

day zeroEVENT

0.78+

day oneQUANTITY

0.78+

KoopaORGANIZATION

0.76+

one more thoughtQUANTITY

0.74+

KhanPERSON

0.72+

CommonsORGANIZATION

0.72+

KubeCon +EVENT

0.72+

Jupiter HubORGANIZATION

0.71+

Vinnie Chhabra, Medallia & Krishnan Badrinarayanan, Nutanix | CUBEConversation, October 2018


 

[Music] hi I'm Stu Mittleman and welcome to a cube conversation really excited to have to the program a first-time guest and a user Vinny Chopra is an IT engineer with Medallia Vinny thank you so much for joining us thank you and - Vinny's left we have Krishnan bad Rena Ryan in who's a director of product marketing with Nutanix Chris thanks so much for you here okay so we always love to be able to dig in with the customers understand the challenges they're facing Chris let's set the table first I'm very familiar with Nutanix we go to all the new tannic shows and the like but for customers what is Nutanix to them why do they turn to Nutanix okay absolutely so I think it's a great time to be in IT you see new businesses that are sprouting at all the last 10 years or so starting with uber Airbnb specifically the ones we've really heard of that have disrupted some really really big industries right so technology is making it happen while IT teams are the ones that help make that happen and helps those CEOs disrupt they're not in the best of positions to utilize infrastructure they have today the way it's set up to be able to get more done be more agile and truly serve the needs of the business and help create those competitive differentiation which is why neutronics is here to help our partners within companies such as yourself to be able to be those people to lean in and help CEOs really achieve what they're trying to get that yeah that's great yeah we definitely see it used to be okay IT was a cost center IT you know business would actually ask for something in IT would often be the no or be really slow and do they work with that so Vinnie before we dig into the IDE piece of it tell us a little bit about Medallia the business what's happening what's Sherma Delia's been around for about 15 years now we're located in it we're headquartered in San Mateo we used to be in Palo Alto moved last year we have a brand new building right off 101 a 92 we our analytics company and we and there's a lot of lots of fields in analytics we specialize in an area called CX which stands for customer experience and our goal is to make our customers customers happy which therefore makes our customers happy and we specialize in doing surveys and then especially in designing surveys for different types of companies and then and then we analyze that data you know surveys well Vinny I I find there's very few companies that I talked to whose industries are stagnant or not changing much the analytic space space that we cover heavily you know here here on the cube and with our research it's boy has that changed a lot I mean five years ago we were talking very much about Big Data today you know all the AI ml and and things like that what what give us a little bit about what's it like being in that business you know fast driving your silicon valley-based I have to imagine that the business is going through a lot of changes that put stresses and strains on IT oh definitely so I better the IT industry for many years and IT area different big companies Sun Microsystems Juniper Networks NetApp in the past excite calm which was a search engine way back when before Google days I remember excite you know because Microsoft didn't they buy that or things well there was an early cerulean at home there's a partnership with that on but yeah excited people would confuse as to wait excite calm what kind of site was that it's like no no it's a search engine back before by the way audience for those of you that haven't been around a while 
it wasn't all just being in Google there were a lot of predecessors that there was four or five big search engines at that time so most of my company had been out we've always been packaging stuff in a box and selling it in this is my first time at an analytics company and it's it's like you said it's a fast-moving field things are being the things there's no development staging production type of stuff things are just continuously being put into production changes are made you know customized you know customer's applications and their interface so it's it's a very fast-moving alright and Vinny you say IT engineers your job what does that encompass what your role how many people in the group what is your sure so we have basically two IT groups we have one that manages our production data centers which are which our customers interface with and we have one that supports our engineers so I'm part of that group and it's kind of a week up art of the IT system and engineering team and that involves traditional IT tasks like backups monitoring application install new server installs managing storage networking basically keeping infrastructure and applications running as efficiently as possible and therefore keeping our engineers happy because they can get their work done and their development done okay sounds like a you know pretty typical from from what I hear from companies is it what do you hear from customers structure-wise challenges they're facing absolutely so it's very much in line with what you were just talking about where there's these multiple needs from the business and customer expectations so how do you really help IT organizations be able to keep up with those needs infrastructure needs to be the big quittez data needs to be Vic witness application services need to be Vic Willis and you need to be able to scale out as your business needs needs to do so to be able to serve all those multiple requirements so whether it's standardizing internal applications that are delivered through virtual desktops or deploying databases are starting up customer websites you need to be able to do that and respond as quickly as possible and if you're spending cycles on acquiring infrastructure deploying it making sure it's well integrated and then once it's up and running figuring out what went wrong and enjoying those multiple nights of pizza right to figure out how to get this thing going back to the way it was it's it just distracts you from what's important so it's only when you make infrastructure invisible and truly scalable very much cloud-like and and make it your own as a process of doing so can you truly be that business partner and you and I hope we've done that with you definitely all right so Bennie let's go inside was there a specific project rollout that you would that led towards Nutanix was there a pain point you were having would give us kind of the before and what was the mature so traditionally an IT you would you want to set up a new application at you in your infrastructure environment you would buy servers and you would buy storage you would buy HBA cards which helps you connect the servers to the storage you've got things like worldwide numbers to worry about getting the right cables getting the right cards and then you put it all together you get all the stuff delivered and then two weeks later you might have things working and but you having some permission issues security issues so it was always a big challenge to get things up and running so it was the fun of ideas let's 
roll up our sleeves let's turn those geek knobs and you know optimize everything and yeah within six months I'm sure everything's rocking in right everything's rocking rolling but you're still not quite confident that things are running you're worried that a card might go bad you're worried that a world-wide number might change somewhere or somebody might you know mess up your security so you would spend a lot of time just getting things up and running versus spending time on development and you know working with your people you're supporting and trying to try to enhance things versus just keeping things getting things up and running so Nutanix you know with the hyper-converged infrastructure you know what kind of we're not worried about those things anymore it has our storage needs it has our compute needs it has our memory needs so what was it a refresh cycle what was the impetus that led to looking at a new arc sugar as we were growing and entering base was growing an IT was growing and our requests and you know what we need to satisfy was increasing tremendously we before we were working with just individual desks like desktops or blade servers but each one was kind of working individually with its own storage its own applications not the notion things weren't being shared or anything and we were just growing fast so we needed some we need a new infrastructure where we could actually have everything working of most efficiently and be secure and fast and and easy to manage and so we did look at we did some analysis on a few products and Nutanix you know after some a few pocs Nutanix was our product of choice yeah I mean you described something we heard a lot is it used to be every application you would kind of build your own temple for it yeah let me build it let me get the performance I need let me optimize certain things let me forecast how it's gonna grow but I get islands out there as opposed to I want to be able to scale I don't want to worry about you know here's one of the challenges out there most people and across the board forecasting is really hard or impossible I either overestimated a bunch and then I bought stuff I didn't eat her right under missed it estimate it and then oh my gosh I need to look to a new architecture yeah and then things ended up burning like at 10% of you know you utilizing temperature of the resources that you're purchasing yeah I remain poor virtualization it was like you know six seven percent is usually what we were running awesome so challenges before and we had you know silos out there I couldn't share I couldn't do talk about that that role how did you get from that old environment to the new one there's something I said when you you look at this wave of really a distributed architecture in the old world migrations were really really tough yeah and you had to do it with every cycle hopefully moving to an architecture like this this is your last migration it was like you know my wife always said the last time that's the last time I never want to have to move well I T I'm sure those migrations were always painful what was the experience my heading to migrations was is one thing that we went through but also just now it's just setting up new VMs or new applications new servers it's you know within a few minutes versus hours as far as migration we were we were running a hypervisor before but like I said it was on individual servers so the migration was basically picking your VMs or your servers one at a time and just migrating over to Tenex once it 
was there and you know with the hypervisor tools that are available it's very easy to use it's like things like vmotion or different types of migration tools that Nutanix offers with their hv hypervisor so it was just it was pretty seamless it was just you just pick and choose and identify your destination host ons Nutanix node or Nutanix cluster and all your stories that you want to move it to and just go okay so so Vinnie you went through a bit of a bake-off to figure out the solution tell us when you finish the deployment how are you measuring what does success mean to in deployment of your stand point and give us the after what show does this change for your process your organization sure qualitatively success is when our engineers are smiling and not calling us too much and asking us go to lunch versus telling us about issues they're having so that's qualitatively quantitatively looking at performance CPU memory I ops performance on a storage how our applications responding that that's what we measured it quantitatively yeah did you know like what kind of utilization you're getting on your current infrastructure then with the Nutanix um also currently you meet as far as uh what you said you were lucky to get 10% in the old world do you measure that yeah we met her that week we kind of um you know we have our kind of have our choices of how much storage you want to use how much CPU remember you want to allocate to each VM and we we just monitor it and through the prism interface that Nutanix offers the image you can actually see performance of each VM and you can decide when to throttle things so but as far as you know how much we're utilizing we're you know we have it we have a structured where we have room to grow so yeah absolutely and if we do need to grow later we can easily add nodes or you know chassis wood notes yeah I think back to the early years of you know what we call hyper converge environments and it was like oh well they are monolithic blocks even if they're small and but you don't have flexibility there when I look at you know many of the solutions especially what Nutanix ups there's a lot of flexibility into how I can grow in scale and get the the utilization that I need but get the performance the ops and everything what I think from your customers how is that story play out today yeah I mean ultimately it's all about empowering people right it's about making IT people truly successful broadening their skillset giving them greater control over the full stack if you will right so it's no longer siloed across functions you're no longer found helpless relying on a different team to deliver upon something that was promised based on a certain SLA so how do we do that how do we make evolved functional specialists into IT journalists would then become cloud engineers true cloud engineers right the world is changing technology is adapting businesses are a craving for more and the only way we can keep up is to adapt ourselves and utilize the best of breed technologies that gives us that power so as a result we hear that a lot where we find a lot of a customer's progressing from being either storage admins network specialists but most likely virtualization admins who then become these cloud engineers if you will they reorganize that way they tend to be in a position where they are a lot more infrastructure we're talking about 100x of what they used to do prior in the in the earlier days so the the number of the ratios just grow immensely as well as the quality of service 
>> Yeah, I think back to the early years of what we call hyper-converged environments, and it was, well, they're monolithic blocks, even if they're small, and you don't have flexibility there. When I look at many of the solutions, especially what Nutanix offers, there's a lot of flexibility in how I can grow and scale and get the utilization that I need, but also the performance, the IOPS, and everything. Chris, what do you hear from your customers? How does that story play out today? >> Yeah, I mean, ultimately it's all about empowering people, right? It's about making IT people truly successful: broadening their skill set, giving them greater control over the full stack, if you will. So it's no longer siloed across functions; you're no longer helpless, relying on a different team to deliver on something that was promised based on a certain SLA. So how do we do that? How do we evolve functional specialists into IT generalists, who then become cloud engineers, true cloud engineers? The world is changing, technology is adapting, businesses are craving more, and the only way we can keep up is to adapt ourselves and utilize the best-of-breed technologies that give us that power. As a result, we see a lot of customers progressing from being storage admins or network specialists, but most likely virtualization admins, who then become these cloud engineers, if you will. They reorganize that way, and they tend to be in a position where they are managing a lot more infrastructure; we're talking about 100x what they used to manage in the earlier days. So the ratios just grow immensely, as does the quality of service provided; the SLAs are far reduced from what they used to be. All of that goodness that our customers are able to deliver to their stakeholders in the organization makes us feel good about what we do. >> Vinnie, we talked about the engineers now smiling and going out to lunch rather than fighting bugs and complaining about issues. When you look at skill sets, I've talked to some customers who say, 'I had that security project sitting on my desk for years and I can finally tackle it,' or 'I can be more responsive to the business, I can engage with them rather than have them go off and do stealth IT.' Anything along those lines that you can share? >> I mean, one thing is that IT admins typically want to know everything that's happening behind the scenes. With Nutanix we don't have to as much, but we still like to, so we take the opportunity to do trainings, learn what's happening in the interface, and use support when needed. As far as skills go, you keep up with it; it's just different. Like Chris mentioned, it's a different type of administration. We're managing virtualization, or managing cloud; you're not just managing LUNs and cables. >> I love it. It sounds like you've got a team with that intellectual curiosity that wants to understand what's going on. How was the on-ramp, the cycle to understand the Nutanix piece? >> So we learned a lot during the POC, of course. That's when you can play around with stuff, and break stuff, and try to break stuff if you want. We used some professional services to help us get set up originally, and after that it was just learning day to day and improving our knowledge in different areas. We weren't used to having everything in one place, in a couple of chassis: storage, compute, and networking as well. So it was a bit of a change, not challenging technically, but you just need to reset the mindset from the way I used to do things to the way I can do things now. And in troubleshooting, the great thing is we're not calling three different vendors, a networking company, a storage company, and a compute company, and having them point fingers: 'oh, it's networking.' Now if I ever have an issue or a question, I call Nutanix support. >> So Vinnie, how long has it been since the solution was deployed? >> About two and a half years now. >> Awesome. I'd love your viewpoint on how Nutanix has changed in those two and a half years, and along those lines, now that you look at things through the lens of 2018, if you could go back to peers of yours, what would you tell them now that you wish you had known back when you rolled this out a couple of years ago? >> I would tell them there's a much easier way to deploy and manage your infrastructure, and this new technology is definitely something you should look at. >> All right, Chris, what advice do you give to the IT people of the world? I'm sure most of them have heard about this, but what misconceptions might they have? What things do we want to make sure we open the door for?
>> Sure. As a former developer myself, several years ago, I think it's very easy for us to forget the role we play in our organizations. We're not all about the applications, we're not all about the speeds and feeds; we are a critical, core part of how businesses go to market and achieve success. So let's recognize that and use the best approaches available out there to deliver that value, whether that means going with a good hyper-converged infrastructure solution, or leaning in and building new, disruptive technologies that can help your business do better. The other thing I want to highlight is that, just as you are in the customer service business, I believe we are as well. We pride ourselves on our support, so if you have questions about how hyper-converged infrastructure can add value, give support a call. You'll be put in touch with someone who can speak to all the value we deliver to our customers, and you can begin to get some of those ideas. >> All right, Vinnie, I want to ask you: you've got experience working for some really well-known companies, not only here in the valley but in tech in general. What's exciting you these days? What do you look at, either in the analytics space or in IT, that's getting you excited? >> For me, I like to get up without stress, so ease of management and ease of deployment in the IT area. That's one of the things I look forward to: being able to do other stuff instead of just focusing on routine work. >> And along those lines, if I could give you one wish to make that goal even easier, either from Nutanix or the broader ecosystem out there, what would make your job even easier? >> You know, I'm trying to think of a good answer, but typically, when issues come up, when we have application issues, it would be some kind of self-healing, maybe some automatic adjustments that could be made, as far as resources allocated to different types of VMs. Maybe something in the future. >> All right, Chris, I'll let you have the final word, because once we simplify and modernize the platform, modernizing the applications is definitely something I've heard about from many of your customers: that the role of infrastructure really is to serve up and support those applications, and that seems to be where it's going. >> That's right, that's right. The business partners, the business, the CFO, whoever is on the other side of the fence, they care about applications and services, not so much about all the blood, sweat, and tears we put into the infrastructure. So I think it's an opportunity for us to elevate beyond the infrastructure and focus on apps and services, along with making sure we have some of those self-healing capabilities that take care of things and don't require us to pay heed to all those infrastructure speeds and feeds. It's a great opportunity to be truly strategic in the company. >> All right, well, Chris, really appreciate you sharing the updates, and Vinnie, really appreciate you sharing your customer story. It's our purpose here at theCUBE to always help bring out the information, so make sure to check out theCUBE.net. If you go to the top there's a search; we've got over five or six thousand interviews we've done, including many customers, and many of Nutanix's.
Go and search Nutanix and you'll find a plethora of content out there. If you ever have any questions for us, please reach out to us, or see us at any of the shows or in between. I'm Stu Miniman, and thanks again for watching theCUBE. >> Thank you.

Published Date : Oct 25 2018

**Summary and sentiment analysis are not shown because of an improper transcript.**

ENTITIES

Entity | Category | Confidence
Vinny Chopra | PERSON | 0.99+
Chris | PERSON | 0.99+
Stu Mittleman | PERSON | 0.99+
Sun Microsystems | ORGANIZATION | 0.99+
Vinnie Chhabra | PERSON | 0.99+
Nutanix | ORGANIZATION | 0.99+
10% | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
San Mateo | LOCATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
uber | ORGANIZATION | 0.99+
each VM | QUANTITY | 0.99+
Vinny | PERSON | 0.99+
October 2018 | DATE | 0.99+
2018 | DATE | 0.99+
Krishnan Badrinarayanan | PERSON | 0.99+
four | QUANTITY | 0.99+
one | QUANTITY | 0.99+
first time | QUANTITY | 0.99+
last year | DATE | 0.99+
each VM | QUANTITY | 0.99+
two weeks later | DATE | 0.99+
two | QUANTITY | 0.99+
two years | QUANTITY | 0.99+
Rena Ryan | PERSON | 0.99+
five years ago | DATE | 0.98+
Vinnie | PERSON | 0.98+
Airbnb | ORGANIZATION | 0.98+
Juniper Networks | ORGANIZATION | 0.98+
several years ago | DATE | 0.97+
Bennie | PERSON | 0.97+
Stu minimun | PERSON | 0.97+
Medallia | PERSON | 0.97+
today | DATE | 0.96+
vmotion | TITLE | 0.96+
five big search engines | QUANTITY | 0.96+
six seven percent | QUANTITY | 0.95+
six months | QUANTITY | 0.95+
Medallia Vinny | PERSON | 0.95+
each one | QUANTITY | 0.95+
Nutanix | TITLE | 0.94+
about 15 years | QUANTITY | 0.94+
first-time | QUANTITY | 0.93+
one thing | QUANTITY | 0.93+
101 a 92 | OTHER | 0.93+
three different vendors | QUANTITY | 0.93+
neutronics | ORGANIZATION | 0.91+
six thousand | QUANTITY | 0.91+
a couple of years ago | DATE | 0.88+
first | QUANTITY | 0.88+
NetApp | TITLE | 0.88+
Krishnan | PERSON | 0.87+
two and a half years | QUANTITY | 0.86+
last 10 years | DATE | 0.85+
about 100x | QUANTITY | 0.85+
that week | DATE | 0.84+
two IT groups | QUANTITY | 0.82+
Tenex | TITLE | 0.82+
Nutanix node | TITLE | 0.81+
multiple nights | QUANTITY | 0.8+
Sherma Delia | ORGANIZATION | 0.79+
over five | QUANTITY | 0.77+
a few minutes | QUANTITY | 0.77+
one of the challenges | QUANTITY | 0.74+
lot | QUANTITY | 0.73+
Medallia | ORGANIZATION | 0.72+
Vic Willis | ORGANIZATION | 0.72+

Dell EMC Next-Gen Data Protection


 

(intense orchestral music) >> Hi everybody this is Dave Vellante, welcome to this special CUBE presentation, where we're covering the Dell EMC Integrated Data Appliance announcement. You can see we also are running a crowd chat, it's an ask me anything crowd chat you can login with Twitter, LinkedIn, or Facebook, and ask any question. We've got Dell EMC executives, we're gonna hear from VMware executives, we've got the analyst perspective, we're gonna hear from customers and then of course we're gonna jump into the crowd chat. With me is Beth Phalen, who is the President of Dell's EMC, Dell EMCs Data Protection Division, Beth, great to see you again. >> Good to be here, Dave. >> Okay so, we know that 80% of the workloads are virtualized, we also know that when virtualization came on the scene it caused customers to really rethink their data protection strategies. Cloud is another force that's causing them to change the way in which they approach data protection, but let's start with virtualization. What are you guys doing for those virtualized customers? >> Data protection is crucial for our customers today, and more and more the vAdmins are being expected to protect their own environments. So we've been working very closely with VMware to make sure we're delivering the simplest data protection for VMware, taking into account all of the cloud capabilities that VMware is bringing to market and making sure we're protecting those as well. We have to do that without compromise, and so we have some really exciting innovations to talk about today. The first of those is the DP4400, we announced this a few weeks ago, it is a purpose-built appliance for mid-sized customers that brings forward all of our learnings from enterprise data protection, and makes it simple and easy to use, and at the right price point for our mid-sized customers. We're the extension into VMware environments and extensions into the cloud. >> Okay, so I mentioned up front that cloud is this disruptive force. You know people expect the outcome of cloud to be simplicity, ease of management, but the cloud adds IT complexity. How are you making data protection simpler for the cloud? >> And the cloud has many different ways the customers can leverage it. The two that we're gonna highlight today are for those customers that are using VMware Cloud on AWS, we're now enabling a seamless disaster recovery option, so customers can fail over to VMware Cloud on AWS for their DR configurations. And on top of that, we're very excited to talk about data protection as a service. We all know how wildly popular that is and how rapidly it's growing, and we've now integrated with VMware vCloud Director to allow customers to not have to have a separate backup as a service portal, but provide management for both their VMware environments and their data protection, all integrated within VCD. >> Okay great, so, we know that VMware of course is the leader in virtualization, we're gonna cut away for a moment and hear from VMware executives, we're gonna back here we're gonna do a deep dive, as I say we got great agenda, we're gonna explore some of these things; and then of course there's the crowd chat, the ask me anything crowd chat. So let's cut over to Palo Alto, California, in our studios over there, and let's hear from the VMware perspective and Peter Burris, take it away, Peter. (intense orchestral music) >> Thanks, Dave! And this is Peter Burris, and I can report that in fact we have another beautiful day here in California. 
And also, we've got a great VMware executive to talk a bit about this important announcement. Yanbing Li is the Senior Vice President and GM for the Storage and Availability Business Unit at VMware. Welcome back to theCUBE, Yanbing. >> It's great to be here, thank you for having me Peter. >> Oh absolutely, we've got a lot of great stuff to talk about, but let's start with the obvious question. Why is it so important to VMware and Dell EMC to work on this question, data availability, data protection? >> You know, I have a very simple answer for you. Dell EMC has been the market leader for the past decade, and they are also a leading solution for all of our VMware environments, so it's very natural that we do a lot of collaboration with them. And what's most important is that our collaboration is not only go-to-market collaboration, enabling our joint customers, but also deep engineering-level collaboration, and that is very, very exciting. Lots of our solutions are really co-engineered together. >> So, that is in service to something. And now putting all this knowledge, all this product together to create a solution, is in service of data protection, but especially as it relates to spanning the cloud. So talk to us a little bit about how this is gonna make it easier for customers to be where they need to be in their infrastructure. >> Certainly. VMware has also been on a journey to help our customers with their transition from the data center to the cloud, and data protection is a very crucial aspect of that; we're looking for simpler, scalable, more robust data protection solutions. VMware launched our VMware Cloud on AWS service last year, and Dell EMC has been with us since day one; they're the first solution to be certified as a data protection service for VMware Cloud on AWS. We also work with 4500 VCCP partners, this is the VMware Cloud partner program, partners that are building cloud services based on the VMware software-defined data center stack. And we are also working with Dell EMC on integrating their data protection software with vCloud Director, so that our customers have integrated data protection through our VCCP partners. So across all the cloud initiatives, we are working very closely with Dell EMC. >> So bringing the best of the technology, the best of this massive ecosystem together, to help customers protect their data and give them options about where they operate their infrastructure. >> Definitely. I'm personally very excited about their recent announcement on the Data Domain Virtual Edition, where they're offering a subscription-based data protection bundle that allows a VMware Cloud on AWS instance to back up its data using a subscription model, and you can back up 96 terabytes for any single SDDC cluster in VMware Cloud on AWS. So they're definitely driving a lot of innovation, not only in technology but also in consumption, making it easier for customers to consume. And we're excited to partner with Dell EMC on this. >> Fantastic! Yanbing Li, VMware, back to you, Dave! >> Thanks, Peter. We're back for the deep dive. Beth Phalen is joining us again, along with Ruya Barrett, who's the Vice President of Marketing for Dell EMC's Data Protection Division. Thanks, guys, for coming on. Ruya, let me start with you. Why are customers, and what are they telling you, in terms of why they're acquiring your data protection solutions?
>> Well, Beth talked a little bit about the engineering effort and collaboration we've been putting in place, and so did Yanbing with VMware. So whether that's integration into vCenter, or vSphere, or vRealize Operations Manager, vRealize Automation, or vCloud Director, all of this work, all of this engineering effort and engineering hours, is really to do two things: deliver simply powerful data protection for VMware customers... >> But what do you mean by simple? >> Simple. Well, simple comes in two types of approaches, right? Simple is through automation. One of the things that we've done is really automate across the data protection stack for VMware. Whereas 99% of the market solutions really leave off at policy management, so they automate the policy layer, we automate not only the policy layer, but the vProxy deployment, as well as the data movement. We have five types of data movement capabilities that have been automated: whether you're going directly from storage to protection storage, whether you're doing client to protection storage, whether you're doing application to protection storage, or whether you're doing Hypervisor Direct to application storage. So it really is to automate, and to maximize performance to meet the customer's service levels, so automation is critical when you're doing that. The other part of automation is how easy cloud is for the admins and users; it really has to do with being able to orchestrate all of the activities very simply and easily. Simplicity is also management. We are hearing more and more that the admins are taking on the role of doing their backups and restores, so our efforts with VMware have been to really simplify the management so that they can use their native tools. We've integrated with VMware so that vAdmins can make backup and restore just a part of their daily operational tasks. >> So, when you talk about power, is that performance? You reference performance, but is it just performance, or is it more than that? >> That's also a great question, Dave, thank you. Power, in terms of data protection, is really threefold. It's power in making sure that you have a single, powerful solution that covers a comprehensive set of applications and requirements, not only for today but also for tomorrow's needs. So that comprehensive coverage, whether you're on-premise or in the cloud, is really critical. Power means performance, of course it means performance: being able to deliver the highest-performing protection, and more importantly restores, is really critical to our customers. Power also means not sacrificing efficiency to get that performance. So efficiency: we have the best source-side deduplication technology in the market, and that, coupled with the performance, is really critical to our customers. So all of these, the simplicity, the comprehensive coverage, the performance, the efficiency, also drive the lowest cost to protect for our customers. >> Alright, I wanna bring Beth Phalen into the conversation. Beth, let's talk about cloud a little bit. A lot of people feel as though I can take data, I can dump it into an object store in the cloud, and I'm protected. Your thoughts? >> Yeah, we hear that same misconception, and in fact the exact opposite is true; it's even more important that people have world-class data protection when they're bringing cloud into their IT environment. They have to know where their data is, how it's protected, and how to restore it.
So we have a few innovations going on here. For a long time we've had our hyper cloud extensions; you can do cloud tiering directly from Data Domain. And now we've also extended what you can do if you're a VMware Cloud on AWS customer, so that you can use that for your cloud DR configuration: fail over to AWS with VMware Cloud, and then fail back with vMotion if you choose to. That's great for customers who don't wanna have a second site, but do wanna have confidence that they can recover if there's a disaster. On top of that, we've also been doing some really great work with VMware on vCloud Director integration. Data protection as a service is growing like crazy; it's highly popular around the globe as a way to consume data protection. And so now you can integrate both your VMware tasks and your data protection tasks from one UI in the Cloud Director. These are just a few of the things that we're doing; comprehensively bringing data protection to the cloud is essential. >> Great, okay. Dell EMC just recently made an announcement, the IDPA DP4400. Ruya, what's it all about? Explain. >> Absolutely. So what we announced is really an integrated data protection appliance: turnkey, purpose-built to meet the specific requirements of mid-sized customers. It's really to bring that enterprise sensibility and protection to our mid-sized customers. It's all-inclusive in terms of capabilities, so if you're talking about backup, restore, replication, disaster recovery, cloud disaster recovery, and cloud long-term retention, it's all at your fingertips, all included, as well as all of the capabilities we talked about in terms of enabling VM admins to do all of their daily tasks and operations through their own native tools and UIs. So it's really all about bringing simply powerful protection to mid-sized customers at the lowest cost to protect. And we now also have a guarantee under our future-proof loyalty program: we are introducing a 55 to one deduplication guarantee for those exact customers. >> Okay. Beth, could you talk about the motivation for this product? Why did you build it, and why is it relevant to mid-sized customers? >> So we're known as number one in enterprise data protection; we're known for our world-class dedupe, best-in-class, best-in-the-world dedupe capabilities. And what we've done is we've taken the learnings and the IP that have served enterprise customers for all of these years, and we're making that accessible to mid-sized customers. There are so many companies out there that can take advantage of our technology that maybe couldn't before these announcements. So by building this, we've created a product so that a mid-sized company, which may have a small IT staff, like I said at the beginning, and may have VM admins who are also responsible for data protection, can have what we bring to the market with best-in-class data protection. >> I wanna follow up with you on simple and powerful. What is your perspective on simple? What does it mean for customers? >> Yeah, I mean, if you break it down, simple means simple to deploy, two times faster than traditional data protection. Simple means easier to manage, with modern HTML5 interfaces that include the data protection day-to-day tasks and also include reporting. Simple means easy to grow: growing in place from 24 terabytes up to 96 terabytes with just a simple software license to add capacity in 12 terabyte increments. So all of those things come together to reduce the amount of time that an IT admin has to spend on data protection.
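A quick back-of-the-envelope illustration of the grow-in-place model just described: starting at 24 TB usable, licensing up in 12 TB steps to 96 TB, and then applying the quoted 55:1 deduplication guarantee as a straight multiplier to estimate logical capacity. Treating the ratio as a flat multiplier is a simplification; real ratios depend on the data being protected.

```python
# Back-of-the-envelope capacity math for the grow-in-place appliance model
# described above: 24 TB base, 96 TB max, 12 TB license steps, and a quoted
# 55:1 deduplication guarantee treated here as a simple multiplier.

base_tb, max_tb, step_tb = 24, 96, 12
dedup_ratio = 55

license_steps = (max_tb - base_tb) // step_tb      # (96 - 24) / 12 = 6 increments
logical_pb_at_max = max_tb * dedup_ratio / 1000    # usable capacity x ratio, in PB

print(f"License increments from {base_tb} TB to {max_tb} TB: {license_steps}")
print(f"Approximate logical capacity at {max_tb} TB usable with {dedup_ratio}:1 dedup: "
      f"{logical_pb_at_max:.2f} PB")
```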
>> So when I hear powerful, and I hear mid-sized customers, I'm thinking okay, I wanna bring enterprise-class data protection down to the mid-sized organization. Is that what you mean? Can you actually succeed in doing that? >> Yeah. If I'm an IT admin, I wanna make sure that I can protect all of my data as quickly and efficiently as possible. And so we have the broadest support matrix in the industry; I don't have to bring in multiple products to support protection for my different applications. That's key, that's one thing. The other thing is I wanna be able to scale, and I don't wanna be forced to bring in new products. With this you have a logical five terabytes on-prem, and you can grow to protecting an additional 10 terabytes in the cloud, so that's another key piece of it, scalability. >> Petabytes, sorry. >> And then-- >> Sorry. Petabytes-- >> Petabytes. >> You said terabytes. (laughs) >> You live in a petabyte world! >> Of course, yes, what am I thinking. (all laugh) And then, last but not least, it's just performance, right? This runs on a 14G PowerEdge server; you're gonna get the efficiency, and you can protect five times as many VMs as you could without this kind of product. So all of those things come together: power, scalability, support matrix, and performance. >> Great, thank you. Okay, Ruya, let's talk about the business impact. Start with the IT operations person; what does it mean for that individual? >> Yeah, absolutely. So first, you're gonna get your weekends back, right? The product is just faster, and we talked about it being simpler: you're not gonna have to get a PhD in data protection to be able to do your business, and you're gonna enable your vAdmins to take on some of the tasks. So it's really about freeing up your weekends and having that sound mind that data protection is just happening; it works. We've already tried and tested this with some of the most crucial businesses, with the most stringent service-level requirements; it's just gonna work. And by the way, you're gonna look like a hero, because with this 2U appliance you're gonna be able to support 15 petabytes across the most comprehensive coverage in the data center, so your boss is gonna think you're just a superhero. >> Petabytes. >> Yeah exactly, petabytes, exactly. (all laugh) So it's tremendous for the IT user, and also the business user. >> By the way, what about the boss? What about the line of business? What does it mean to that individual? >> So if I'm the CEO or the CIO, I really wanna think about where I'm putting my most skilled personnel. And my most skilled personnel, especially as IT is becoming so core to the business, are probably not best served doing data protection. So just being able to free up those resources to drive applications or initiatives that are driving revenue for the business is critical. Number two, if I'm the boss, I don't wanna overpay for data protection. Data protection is insurance for the business: you need it, but you don't wanna overpay for it. So I think that lowest cost is a really critical requirement. The third one is really minimizing risk and compliance issues for the business. If I have the sound mind, and the trust, that this is just gonna work, then I'm gonna be able to recover my business no matter what the scenario, and it's been tried and true in the biggest accounts across the world.
I'm gonna rest assured that I have less exposure to my business. >> Great. Ruya, Beth, thank you very much, don't forget, we have an ask me anything crowd chat at the end of this session, so you can go in, login with Twitter, LinkedIn, or Facebook, and ask any question. Alright, let's take a look at the product, and then we're gonna come back and get the analysts perspective, keep it right there. (intense music) >> Organizations today, especially mid-sized organizations, are faced with increased complexity; driving the need for data protection solutions that enable them to do more with less. The Dell EMC IDPA DP4400 packages the proven enterprise class technologies that have made us the number one provider in data protection into a converged appliance specifically designed for mid-sized organizations. While other solutions sacrifice power in the name of simplicity, the IDPA DP4400 delivers simply powerful data protection. The IDPA DP4400 combines protection software and storage, search and analytics, and cloud readiness, in one appliance. To save you time and money, we made it simple for you to deploy and upgrade, and, easily grow in place without disruption, adding capacity with simple license upgrades without buying more hardware. Data protection management is also a snap with the IDPA System Manager. IDPA is optimized for VMware data protection. It is also integrated with vSphere, SQL, and Oracle, to enable a wider IT audience to manage data protection. The IDPA DP4400 provides protection across the largest application ecosystem, deliver breakneck backup speeds, more efficient network usage, and unmatched 55 to one average deduplication. The IDPA DP400 is natively extensible to the cloud for long-term retention. And, also enables simple, and cost effective cloud disaster recovery. Deduplicated data is stored in AWS with minimal footprint, with failover to AWS and failback to on-premises quickly, easily, and cost effectively. The IDPA DP4400 delivers all this at the lowest cost-to-protect. It includes a three year satisfaction guarantee, as well as an up to 55 to one data protection deduplication guarantee. The Dell EMC IDPA DP4400 provides backup, replication, deduplication, search, analytics, instant access for application testing and development, as well as DR and long-term retention to the cloud. Everything you need to deliver enterprise-class data protection, in a small integrated system, optimized for mid-sized environments. It's simply powerful. (upbeat music and rhythmic claps) >> Cool video! Alright, we're back, with Vinny Choinski, who is the Senior Analyst for the Validation Practice at ESG, Enterprise Strategy Group. ESG is a company that does a lot of research, and one of the areas is they have these lab reports, and they basically validate vendor claims, it's an awesome service, they've had it for a number of years and Vinny is an expert in this area. Vinny Choinski, welcome to theCUBE great to see you. >> How you doin' Dave? Great to see you. >> So, when you talk to customers they tell you they hate complexity, first of all, and specifically in the context of data protection, they want high performance, they don't wanna have to mess with this stuff, and they want low cost. What are you seeing in the marketplace? >> So our research is lining up with those challenges; and that's why I've recently done three reports. We talked to how EMC is addressing those challenges and how they are making it easier, faster, and less expensive to do data protection. 
>> So people don't wanna do a lot of heavy lifting, and they worry about the time it takes to do deployment. So what did you find, hands on, with regards to deployment? >> Yeah, so for the deployment we focused on the DP4400 and how that's making it easier for the IT generalist to do data protection deployment and management. And what we did: I actually walked through the whole process from the delivery truck to first backup. We had it off the truck, racked up, and powered up in about 30 minutes; it's a server-sized appliance, pretty easy to install. I spent 10 minutes in the server room configuring it to the network, and then we went up to an office and finished the configuration. After that I basically hit go on the configuration button, completely automated, and I simply monitored the process until the appliance was fully configured. It took me about 20 minutes to add that configuration to the appliance, hit go, and at the end I had an appliance that was ready for on-site backups, extended to the cloud. >> So that met your expectations? It meshed with the vendor's claims? >> It was real easy. We actually had to move it around a couple of times, and you know, this stuff used to be huge: big box, metal gear. >> Refrigerators. (laughs) >> Refrigerators. It was a small appliance; once we installed it, we got a note from the IT guy and had to move it. No tools, easy rack, and the configuration was automated. We had to set network parameters, and that's about it. >> How about your performance testing? What did that show? >> So we did some pretty extensive performance testing. We actually compared the IDPA Dell appliances to an industry-recognized server-grid scaled architecture. Basically we started by matching the hardware parameters of the box: CPU, memory, disk, network, flash. So once we had the boxes configured apples to apples, shall we say, we ran a rigorous set of tests. We scaled the environment from a hundred to a thousand VMs, adding a hundred VMs in between each backup run. And what we found as we were doing the test was that the IDPA reduced the backup window significantly over the competitive solution: a 54 to 68% reduction in the backup window. >> Okay. So again, your expectations kind of tied into the vendor claims? >> Yep. The reduction in backup time was pretty significant, and that's a pretty good test environment, right, with the hundred to a thousand VMs. We also looked at the efficiency of data transfer, and we found that IDPA outperformed the competitor there as well, significantly. We found that this is due to the mature Data Domain deduplication technology. It not only leverages, like most companies will, the VMware Changed Block Tracking API, but it has its own client-side software that significantly reduces the amount of data that needs to be transferred over the network for each backup. And we found that it reduced the amount of data that needs to be transferred, against the competitor, by 74%.
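To make those two percentages concrete, here is a small sketch of how a backup-window reduction and a data-transfer reduction are computed. The sample inputs are invented for illustration; only the reduction formula, one minus new over old, reflects the comparison being described.

```python
# Illustrative math for the two comparisons above. The sample inputs are
# invented; only the reduction formula reflects the methodology described.

def reduction(old: float, new: float) -> float:
    """Percent reduction going from old to new."""
    return (1 - new / old) * 100

# e.g., a 120-minute backup window shortened to 45 minutes
print(f"Backup window reduction: {reduction(120, 45):.1f}%")   # 62.5%, inside the 54-68% band

# e.g., 500 GB of changed data shrunk to 130 GB on the wire by client-side
# dedup layered on top of changed-block tracking
print(f"Data transfer reduction: {reduction(500, 130):.1f}%")  # 74.0%
```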
>> What about the economics? That's one of the key pain points, obviously, for IT professionals. What did you see there? >> Yep, so there's a lot that goes into the economics of a data protection environment. We summed it up into what we call the cost to protect. We actually collected call-home data from 15,000 Dell EMC data protection appliances deployed worldwide. >> Oh cool, real data. >> Real data. So we had the real data, from 15,000 different environments, and we used some of what we analyzed: the price that they paid, how long it's been in service, the deduplication rates they're getting, and the amount of data. So we had all the components that told us what was happening with each box. That allowed us to distill it into this infographic that we see up here, which shows 12 of the customers we analyzed: different industries, different architectures. On the far left of this infographic you're gonna see that we had a Data Domain box connected to a third-party backup application, still performing economically quite well. On the far right we have the fully integrated IDPA solution, and you'll see that as you put things better together, the economics get even better, right? So what we found was that both Data Domain and the IDPA can easily serve data protection environment storage for a fraction of a penny per month. >> Okay. Important to point out this is metadata, no customer data involved here, right? >> It's metadata, that's correct. >> Right, okay. Summarize your impressions based on your research and your hands-on lab work. >> Yeah, so I've been doing this for almost 25-plus years. I've been in the data protection space: I was an end user and actually ran backup environments, I worked in the reseller space and sold the gear, and now I'm an analyst with ESG, looking at all the different solutions that are out there. Data protection has never been easy; there are always a lot of moving parts, and it gets harder when you really need a solution that backs up everything, right? Your physical, your virtual, the cloud, the legacy stuff. Dell EMC has packaged this up, in my opinion, quite well. They've looked at the economics, they've looked at the ease of use, they've looked at the performance, and they've put the right components in there: they have the data protection software, they have the target storage, they have the analytics, and you can do it with an agent or without an agent. So I think they've put all the pieces in here. It's not an easy thing, in my opinion, and I think they've nailed this one. >> Excellent. Well Vinny, thanks so much for coming on and sharing the results of your research, really appreciate it. Alright, let's hear from the customer, and then we're gonna come back with Beth Phalen and wrap. Keep it right there. (upbeat techno music) >> We're a Fortune 500 company, a global provider of product solutions and services, and enterprise computing solutions. The DP4400 is attractive because customers have different consumption models. There are those that like to build their own, and there are those that want an integrated solution; they want to focus on their core business as opposed to engineering a solution. So for those customers that are looking for that type of experience, the DP4400 will address a full data protection solution that has a single pane of glass, simplified management, simplified deployment, and also ease of management over time. >> Vollrath is a food service industry manufacturer. It's been in business for 144 years, and in some way we probably touch your life every day. From a Symantec perspective, the things that weren't meeting our needs really come down to the management of all of your backup sets.
We had backup windows for four to eight hours, and we were to the point where when those backups failed, which was fairly regular, we didn't have enough time to run them again. With Dell EMC data protection, we're getting phenomenal returns, shorter times. What took us eight hours is taking under an hour, maybe it's upwards of two at times for even larger sets. It's single interface, really does help. So when you take into account how much time you spend trying to manage with old solutions that's another unparalleled piece. >> I'm the IT Director for Melanson Heath, we are a full service accounting firm. The top three benefits of the DP4400 simplicity of not having to do a lot of research, the ease of deployment, not having to go back or have external resources, it's really designed so that I can rack it, stack it, and get going. Having a data protection solution that works with all of my software and systems is vital. We are completely reliant on our technology infrastructure, and we need to know that if something happens, we have a plan B, that can be deployed quickly and easily. (upbeat techno music) >> We're back, it's always great to hear the customer perspective. We're back with Beth Phalen. Beth let's summarize, bring it home for us, this announcement. >> We are making sure that no matter what the size of your organization, you can protect your data in your VMware environment simply and powerfully without compromise, and have confidence, whether you're on-prem or in the cloud, you can restore your data whenever you need to. >> Awesome, well thanks so much Beth for sharing the innovations, and we're not done yet, so jump into the crowd chat, as I said, you can log in with Twitter, LinkedIn, or Facebook, ask any questions, we're gonna be teeing up some questions and doing some surveys. So thanks for watching everybody, and we'll see you in the crowd chat.

Published Date : Aug 18 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Beth Phalen | PERSON | 0.99+
Dave | PERSON | 0.99+
Peter | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Vinny Choinski | PERSON | 0.99+
ESG | ORGANIZATION | 0.99+
Ruya Barrett | PERSON | 0.99+
Beth | PERSON | 0.99+
California | LOCATION | 0.99+
99% | QUANTITY | 0.99+
14GB | QUANTITY | 0.99+
144 years | QUANTITY | 0.99+
eight hours | QUANTITY | 0.99+
Vinny | PERSON | 0.99+
EMC | ORGANIZATION | 0.99+
Ruya | PERSON | 0.99+
80% | QUANTITY | 0.99+
24 terabytes | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
55 | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
two types | QUANTITY | 0.99+
54 | QUANTITY | 0.99+
Dell EMC | ORGANIZATION | 0.99+
three year | QUANTITY | 0.99+
74% | QUANTITY | 0.99+
10 terabytes | QUANTITY | 0.99+
10 minutes | QUANTITY | 0.99+
last year | DATE | 0.99+
12 | QUANTITY | 0.99+
12 terabyte | QUANTITY | 0.99+
hundred | QUANTITY | 0.99+
One | QUANTITY | 0.99+
Dell | ORGANIZATION | 0.99+
LinkedIn | ORGANIZATION | 0.99+
HTML5 | TITLE | 0.99+
four | QUANTITY | 0.99+
five terabytes | QUANTITY | 0.99+
Vmware | ORGANIZATION | 0.99+
Twitter | ORGANIZATION | 0.99+
first solution | QUANTITY | 0.99+
five types | QUANTITY | 0.99+
three reports | QUANTITY | 0.99+
96 terabytes | QUANTITY | 0.99+
two | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
SQL | TITLE | 0.99+
68% | QUANTITY | 0.99+
Facebook | ORGANIZATION | 0.99+
second site | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
vCloud | TITLE | 0.99+
third one | QUANTITY | 0.98+
vSphere | TITLE | 0.98+
five times | QUANTITY | 0.98+

Nutanix .Next | NOLA | Day 1 | AM Keynote


 

>> PA Announcer: Off the plastic tab, and we'll turn on the colors. Welcome to New Orleans. ♪ This is it ♪ ♪ The part when I say I don't want ya ♪ ♪ I'm stronger than I've been before ♪ ♪ This is the part when I set your free ♪ (New Orleans jazz music) ("When the Saints Go Marching In") (rock music) >> PA Announcer: Ladies and gentleman, would you please welcome state of Louisiana chief design officer Matthew Vince and Choice Hotels director of infrastructure services Stacy Nigh. (rock music) >> Well good morning New Orleans, and welcome to my home state. My name is Matt Vince. I'm the chief design office for state of Louisiana. And it's my pleasure to welcome you all to .Next 2018. State of Louisiana is currently re-architecting our cloud infrastructure and Nutanix is the first domino to fall in our strategy to deliver better services to our citizens. >> And I'd like to second that warm welcome. I'm Stacy Nigh director of infrastructure services for Choice Hotels International. Now you may think you know Choice, but we don't own hotels. We're a technology company. And Nutanix is helping us innovate the way we operate to support our franchisees. This is my first visit to New Orleans and my first .Next. >> Well Stacy, you're in for a treat. New Orleans is known for its fabulous food and its marvelous music, but most importantly the free spirit. >> Well I can't wait, and speaking of free, it's my pleasure to introduce the Nutanix Freedom video, enjoy. ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ Ah, ah, ♪ ♪ Ah, ah, ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I'm free, I'm free, I'm free, I'm free ♪ ♪ Gritting your teeth, you hold onto me ♪ ♪ It's never enough, I'm never complete ♪ ♪ Tell me to prove, expect me to lose ♪ ♪ I push it away, I'm trying to move ♪ ♪ I'm desperate to run, I'm desperate to leave ♪ ♪ If I lose it all, at least I'll be free ♪ ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> PA Announcer: Ladies and gentlemen, please welcome chief marketing officer Ben Gibson ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> Welcome, good morning. >> Audience: Good morning. >> And welcome to .Next 2018. There's no better way to open up a .Next conference than by hearing from two of our great customers. And Matthew, thank you for welcoming us to this beautiful, your beautiful state and city. And Stacy, this is your first .Next, and I know she's not alone because guess what It's my first .Next too. And I come properly attired. In the front row, you can see my Nutanix socks, and I think my Nutanix blue suit. And I know I'm not alone. I think over 5,000 people in attendance here today are also first timers at .Next. And if you are here for the first time, it's in the morning, let's get moving. I want you to stand up, so we can officially welcome you into the fold. Everyone stand up, first time. All right, welcome. (audience clapping) So you are all joining not just a conference here. This is truly a community. This is a community of the best and brightest in our industry I will humbly say that are coming together to share best ideas, to learn what's happening next, and in particular it's about forwarding not only your projects and your priorities but your careers. There's so much change happening in this industry. 
It's an opportunity to learn what's coming down the road and learn how you can best position yourself for this whole new world that's happening around cloud computing and modernizing data center environments. And this is not just a community, this is a movement. And it's a movement that started quite awhile ago, but the first .Next conference was in the quiet little town of Miami, and there was about 800 of you in attendance or so. So who in this hall here were at that first .Next conference in Miami? Let me hear from you. (audience members cheering) Yep, well to all of you grizzled veterans of the .Next experience, welcome back. You have started a movement that has grown and this year across many different .Next conferences all over the world, over 20,000 of your community members have come together. And we like to do it in distributed architecture fashion just like here in Nutanix. And so we've spread this movement all over the world with .Next conferences. And this is surging. We're also seeing just today the current count 61,000 certifications and climbing. Our Next community, close to 70,000 active members of our online community because .Next is about this big moment, and it's about every other day and every other week of the year, how we come together and explore. And my favorite stat of all. Here today in this hall amongst the record 5,500 registrations to .Next 2018 representing 71 countries in whole. So it's a global movement. Everyone, welcome. And you know when I got in Sunday night, I was looking at the tweets and the excitement was starting to build and started to see people like Adile coming from Casablanca. Adile wherever you are, welcome buddy. That's a long trip. Thank you so much for coming and being here with us today. I saw other folks coming from Geneva, from Denmark, from Japan, all over the world coming together for this moment. And we are accomplishing phenomenal things together. Because of your trust in us, and because of some early risk candidly that we have all taken together, we've created a movement in the market around modernizing data center environments, radically simplifying how we operate in the services we deliver to our businesses everyday. And this is a movement that we don't just know about this, but the industry is really taking notice. I love this chart. This is Gartner's inaugural hyperconvergence infrastructure magic quadrant chart. And I think if you see where Nutanix is positioned on there, I think you can agree that's a rout, that's a homerun, that's a mic drop so to speak. What do you guys think? (audience clapping) But here's the thing. It says Nutanix up there. We can honestly say this is a win for this hall here. Because, again, without your trust in us and what we've accomplished together and your partnership with us, we're not there. But we are there, and it is thanks to everyone in this hall. Together we have created, expanded, and truly made this market. Congratulations. And you know what, I think we're just getting started. The same innovation, the same catalyst that we drove into the market to converge storage network compute, the next horizon is around multi-cloud. The next horizon is around whether by accident or on purpose the strong move with different workloads moving into public cloud, some into private cloud moving back and forth, the promise of application mobility, the right workload on the right cloud platform with the right economics. Economics is key here. 
If any of you have a teenager out there, and they have a hold of your credit card, and they're doing something online or the like. You get some surprises at the end of the month. And that surprise comes in the form of spiraling public cloud costs. And this isn't to say we're not going to see a lot of workloads born and running in public cloud, but the opportunity is for us to take a path that regains control over infrastructure, regain control over workloads and where they're run. And the way I look at it for everyone in this hall, it's a journey we're on. It starts with modernizing those data center environments, continues with embracing the full cloud stack and the compelling opportunity to deliver that consumer experience to rapidly offer up enterprise compute services to your internal clients, lines of businesses and then out into the market. It's then about how you standardize across an enterprise cloud environment, that you're not just the infrastructure but the management, the automation, the control, and running any tier one application. I hear this everyday, and I've heard this a lot already this week about customers who are all in with this approach and running those tier one applications on Nutanix. And then it's the promise of not only hyperconverging infrastructure but hyperconverging multiple clouds. And if we do that, this journey the way we see it what we are doing is building your enterprise cloud. And your enterprise cloud is about the private cloud. It's about expanding and managing and taking back control of how you determine what workload to run where, and to make sure there's strong governance and control. And you're radically simplifying what could be an awfully complicated scenario if you don't reclaim and put your arms around that opportunity. Now how do we do this different than anyone else? And this is going to be a big theme that you're going to see from my good friend Sunil and his good friends on the product team. What are we doing together? We're taking all of that legacy complexity, that friction, that inability to be able to move fast because you're chained to old legacy environments. I'm talking to folks that have applications that are 40 years old, and they are concerned to touch them because they're not sure if they can react if their infrastructure can meet the demands of a new, modernized workload. We're making all that complexity invisible. And if all of that is invisible, it allows you to focus on what's next. And that indeed is the spirit of this conference. So if the what is enterprise cloud, and the how we do it different is by making infrastructure invisible, data centers, clouds, then why are we all here today? What is the binding principle that spiritually, that emotionally brings us all together? And we think it's a very simple, powerful word, and that word is freedom. And when we think about freedom, we think about as we work together the freedom to build the data center that you've always wanted to build. It's about freedom to run the applications where you choose based on the information and the context that wasn't available before. It's about the freedom of choice to choose the right cloud platform for the right application, and again to avoid a lot of these spiraling costs in unanticipated surprises whether it be around security, whether it be around economics or governance that come to the forefront. It's about the freedom to invent. It's why we got into this industry in the first place. We want to create. 
We want to build things not keep the lights on, not be chained to mundane tasks day by day. And it's about the freedom to play. And I hear this time and time again. My favorite tweet from a Nutanix customer to this day is just updated a lot of nodes at 38,000 feed on United Wifi, on my way to spend vacation with my family. Freedom to play. This to me is emotionally what brings us all together and what you saw with the Freedom video earlier, and what you see here is this new story because we want to go out and spread the word and not only talk about the enterprise cloud, not only talk about how we do it better, but talk about why it's so compelling to be a part of this hall here today. Now just one note of housekeeping for everyone out there in case I don't want anyone to take a wrong turn as they come to this beautiful convention center here today. A lot of freedom going on in this convention center. As luck may have it, there's another conference going on a little bit down that way based on another high growth, disruptive industry. Now MJBizCon Next, and by coincidence it's also called next. And I have to admire the creativity. I have to admire that we do share a, hey, high growth business model here. And in case you're not quite sure what this conference is about. I'm the head of marketing here. I have to show the tagline of this. And I read the tagline from license to launch and beyond, the future of the, now if I can replace that blank with our industry, I don't know, to me it sounds like a new, cool Sunil product launch. Maybe launching a new subscription service or the like. Stay tuned, you never know. I think they're going to have a good time over there. I know we're going to have a wonderful week here both to learn as well as have a lot of fun particularly in our customer appreciation event tonight. I want to spend a very few important moments on .Heart. .Heart is Nutanix's initiative to promote diversity in the technology arena. In particular, we have a focus on advancing the careers of women and young girls that we want to encourage to move into STEM and high tech careers. You have the opportunity to engage this week with this important initiative. Please role the video, and let's learn more about how you can do so. >> Video Plays (electronic music) >> So all of you have received these .Heart tokens. You have the freedom to go and choose which of the four deserving charities can receive donations to really advance our cause. So I thank you for your engagement there. And this community is behind .Heart. And it's a very important one. So thank you for that. .Next is not the community, the moment it is without our wonderful partners. These are our amazing sponsors. Yes, it's about sponsorship. It's also about how we integrate together, how we innovate together, and we're about an open community. And so I want to thank all of these names up here for your wonderful sponsorship of this event. I encourage everyone here in this room to spend time, get acquainted, get reacquainted, learn how we can make wonderful music happen together, wonderful music here in New Orleans happen together. .Next isn't .Next with a few cool surprises. Surprise number one, we have a contest. This is a still shot from the Freedom video you saw right before I came on. We have strategically placed a lucky seven Nutanix Easter eggs in this video. And if you go to Nutanix.com/freedom, watch the video. You may have to use the little scrubbing feature to slow down 'cause some of these happen quickly. 
You're going to find some fun, clever Easter eggs. List all seven, tweet that out, or as many as you can, tweet that out with hashtag nextconf, C, O, N, F, and we'll have a random drawing for an all-expenses-paid free trip to .Next 2019. And just to make sure everyone understands the Easter egg concept, there's an eighth one here that's actually someone that's quite famous in our circles. If you see on this still shot, there's someone in the back there with a red jacket on. That's not just anyone. We're targeting in here. That is our very own Julie O'Brien, our senior vice president of corporate marketing. And you're going to hear from Julie later on here at .Next. But Julie and her team are the engine and the creativity behind not only our new Freedom campaign but more importantly everything that you experience here this week. Julie and her team are amazing, and we can't wait for you to experience what they've pulled together for you. Another surprise: if you go and visit our Freedom booths and share your stories. So they're like video booths, you share your success stories, your partnerships, your journey that I talked about, you will be entered to win a beautiful Nutanix brand compliant, look at those beautiful colors, bicycle. And it's not just any bicycle. It's a beautiful bicycle made by our beautiful customer Trek. I actually have a Trek bike. I love cycling. Unfortunately, I'm not eligible, but all of you are. So please share your stories in the Nutanix Freedom booths and put yourself in the running, or in the cycling, to get this prize. One more thing I wanted to share here. Yesterday we had a great time. We had our inaugural Nutanix hackathon. This hackathon brought together folks that were in devops practices, many of you that are in this room. We sold out. We thought maybe we'd get four or five teams. We had to shut down at 14 teams that were paired together with a Nutanix mentor, and you coded. You used our REST APIs. You built new apps that integrated with Prism and Calm. And it was wonderful to see this. Everyone I talked to had a great time on this. We had three winners. In third place, we had team Copper, or team bronze, but team Copper. Silver, Not That Special, they're very humble, kind of like one of our key mission statements. And the grand prize winner was We Did It All for the Cookies. And you saw them coming in on our Mardi Gras float here. We Did It All for the Cookies did this very creative job. They leveraged an Apple Watch. They were lighting up VMs at a moment's notice utilizing a lot of their coding skills. Congratulations to all three; first, second, and third all receive $2,500. And then each of them was able to choose a charity to deliver another $2,500, including Ronald McDonald House for the winner, We Did It All for the McDonald Land cookies, I suppose, to move forward. So look for us to do more of these kinds of events because we want to bring together infrastructure and application development, and this is a great, I think, start for us in this community to be able to do so. With that, who's ready to hear from Dheeraj? You ready to hear from Dheeraj? (audience clapping) I'm ready to hear from Dheeraj, and not just 'cause I work for him. It is my distinct pleasure to welcome on the stage our CEO, cofounder and chairman Dheeraj Pandey. ("Free" by Broods) ♪ Hallelujah, I'm free ♪ >> Thank you Ben and good morning everyone. >> Audience: Good morning. >> Thank you so much for being here.
It's just such an elation when I'm thinking about the Mardi Gras crowd that came here, the partners, the customers, the NTCs. I mean there's some great NTCs up there I could relate to because they're on Slack as well. How many of you are in Slack Nutanix internal Slack channel? Probably 5%, would love to actually see this community grow from here 'cause this is not the only even we would love to meet you. We would love to actually do this in a real time bite size communication on our own internal Slack channel itself. Now today, we're going to talk about a lot of things, but a lot of hard things, a lot of things that take time to build and have evolved as the industry itself has evolved. And one of the hard things that I want to talk about is multi-cloud. Multi-cloud is a really hard problem 'cause it's full of paradoxes. It's really about doing things that you believe are opposites of each other. It's about frictionless, but it's also about governance. It's about being simple, and it's also about being secure at the same time. It's about delight, it's about reducing waste, it's about owning, and renting, and finally it's also about core and edge. How do you really make this big at a core data center whether it's public or private? Or how do you really shrink it down to one or two nodes at the edge because that's where your machines are, that's where your people are? So this is a really hard problem. And as you hear from Sunil and the gang there, you'll realize how we've actually evolved our solutions to really cater to some of these. One of the approaches that we have used to really solve some of these hard problems is to have machines do more, and I said a lot of things in those four words, have machines do more. Because if you double-click on that sentence, it really means we're letting design be at the core of this. And how do you really design data centers, how do you really design products for the data center that hush all the escalations, the details, the complexities, use machine-learning and AI and you know figure our anomaly detection and correlations and patter matching? There's a ton of things that you need to do to really have machines do more. But along the way, the important lesson is to make machines invisible because when machines become invisible, it actually makes something else visible. It makes you visible. It makes governance visible. It makes applications visible, and it makes services visible. A lot of things, it makes teams visible, careers visible. So while we're really talking about invisibility of machines, we're talking about visibility of people. And that's how we really brought all of you together in this conference as well because it makes all of us shine including our products, and your careers, and your teams as well. And I try to define the word customer success. You know it's one of the favorite words that I'm actually using. We've just hired a great leader in customer success recently who's really going to focus on this relatively hard problem, yet another hard problem of customer success. We think that customer success, true customer success is possible when we have machines tend towards invisibility. But along the way when we do that, make humans tend towards freedom. So that's the real connection, the yin-yang of machines and humans that Nutanix is really all about. And that's why design is at the core of this company. And when I say design, I mean reducing friction. And it's really about reducing friction. 
And everything we do, the most mundane of things which could be about migrating applications, spinning up VMs, self-service portals, automatic upgrades, and automatic scale out, and all the things we do is about reducing friction which really makes machines become invisible and humans gain freedom. Now one of the other convictions we have is how all of us are really tied at the hip. You know our success is tied to your success. If we make you successful, and when I say you, I really mean Main Street. Main Street being customers, and partners, and employees. If we make all of you successful, then we automatically become successful. And very coincidentally, Main Street and Wall Street are also tied in that very same relation as well. If we do a great job at Main Street, I think the Wall Street customer, i.e. the investor, will take care of itself. You'll have you know taken care of their success if we took care of Main Street success itself. And that's the narrative that our CFO Dustin Williams actually went and painted to our Wall Street investors two months ago at our investor day conference. We talked about a $3 billion number. We said look as a company, as a software company, we can go and achieve $3 billion in billings three years from now. And it was a telling moment for the company. It was really about talking about where we could be three years from now. But it was not based on a hunch. It was based on what we thought was customer success. Now realize that $3 billion in pure software. There's only 10 to 15 companies in the world that actually have that kind of software billings number itself. But at the core of this confidence was customer success, was the fact that we were doing a really good job of not over promising and under delivering but under promising starting with small systems and growing the trust of the customers over time. And this is one of the statistics we actually talk about is repeat business. The first dollar that a Global 2000 customer spends in Nutanix, and if we go and increase their trust 15 times by year six, and we hope to actually get 17 1/2 and 19 times more trust in the years seven and eight. It's very similar numbers for non Global 2000 as well. Again, we go and really hustle for customer success, start small, have you not worry about paying millions of dollars upfront. You know start with systems that pay as they grow, you pay as they grow, and that's the way we gain trust. We have the same non Global 2000 pay $6 1/2 for the first dollar they've actually spent on us. And with this, I think the most telling moment was when Dustin concluded. And this is key to this audience here as well. Is how the current cohorts which is this audience here and many of them were not here will actually carry the weight of $3 billion, more than 50% of it if we did a great job of customer success. If we were humble and honest and we really figured out what it meant to take care of you, and if we really understood what starting small was and having to gain the trust with you over time, we think that more than 50% of that billings will actually come from this audience here without even looking at new logos outside. So that's the trust of customer success for us, and it takes care of pretty much every customer not just the Main Street customer. It takes care of Wall Street customer. It takes care of employees. It takes care of partners as well. Now before I talk about technology and products, I want to take a step back 'cause many of you are new in this audience. 
And I think that it behooves us to really talk about the history of this company. Like we've done a lot of things that started out as science projects. In fact, I see some tweets out there and people actually laugh at Nutanix cloud. And this is where we were in 2012. So if you take a step back and think about where the company was almost seven, eight years ago, we were up against giants. There was a $30 billion industry around network attached storage, and storage area networks and blade servers, and hypervisors, and systems management software and so on. So what did we start out with? Very simple premise that we will collapse the architecture of the data center because three tier is wasteful and three tier is not delightful. It was a very simple hunch, we said we'll take rack mount servers, we'll put a layer of software on top of it, and that layer of software back then only did storage. It didn't do networks and security, and it ran on top of a well known hypervisor from VMware. And we said there's one non negotiable thing. The fact that the design must change. The control plane for this data center cannot be the old control plane. It has to be rethought through, and that's why Prism came about. Now we went and hustled hard to add more things to it. We said we need to make this diverse because it can't just be for one application. We need to make it CPU heavy, and memory heavy, and storage heavy, and flash heavy and so on. And we built a highly configurable HCI. Now all of them are actually configurable as you know of today. And this was not just innovation in technologies, it was innovation in business and sizing, capacity planning, quote to cash business processes. A lot of stuff that we had to do to make this highly configurable, so you can really scale capacity and performance independent of each other. Then in 2014, we did something that was very counterintuitive, but we've done this on, and on, and on again. People said why are you disrupting yourself? You know you've been doing a good job of shipping appliances, but we also had the conviction that HCI was not about hardware. It was about a form factor, but it was really about an operating system. And we started to compete with ourselves when we said you know what we'll do arm's length distribution, we'll do arm's length delivery of products when we give our software to our Dell partner, to Dell as a partner, a loyal partner. But at the same time, it was actually seen with a lot of skepticism. You know these guys are wondering how to really make themselves vanish because they're competing with themselves. But we also knew that if we didn't compete with ourselves someone else will. Now one of the most controversial decisions was really going and doing yet another hypervisor. In the year 2015, it was really preposterous to build yet another hypervisor. It was a very mature market. This was coming probably 15 years too late to the market, or at least 10 years too late to market. And most people said it shouldn't be done because hypervisor is a commodity. And that's the word we latched on to. That this commodity should not have to be paid for. It shouldn't have a team of people managing it. It should actually be part of your overall stack, but it should be invisible. Just like storage needs to be invisible, virtualization needs to be invisible. But it was a bold step, and I think you know at least when we look at our current numbers, 1/3rd of our customers are actually using AHV. 
At least every quarter that we look at it, our new deployments, at least 35% of it is actually being used on AHV itself. And again, a very preposterous thing to have said five years ago, four years ago to where we've actually come. Thank you so much for all of you who've believed in the fact that virtualization software must be invisible and therefore we should actually try out something that is called AHV today. Now we went and added Lenovo to our OEM mix, started to become even more of a software company in the year 2016. Went and added HP and Cisco in some of very large deals that we talk about in earnings call, our HP deals and Cisco deals. And some very large customers who have procured ELAs from us, enterprise license agreements from us where they want to mix and match hardware. They want to mix Dell hardware with HP hardware but have common standard Nutanix entitlements. And finally, I think this was another one of those moments where we say why should HCI be only limited to X86. You know this operating systems deserves to run on a non X86 architecture as well. And that gave birth to this idea of HCI and Power Systems from IBM. And we've done a great job of really innovating with them in the last three, four quarters. Some amazing innovation that has come out where you can now run AIX 7.x on Nutanix. And for the first time in the history of data center, you can actually have a single software not just a data plane but a control plane where you can manage an IBM farm, an Power farm, and open Power farm and an X86 farm from the same control plane and have you know the IBM farm feed storage to an Intel compute farm and vice versa. So really good things that we've actually done. Now along the way, something else was going on while we were really busy building the private cloud, we knew there was a new consumption model on computing itself. People were renting computing using credit cards. This is the era of the millennials. They were like really want to bypass people because at the end of the day, you know why can't computing be consumed the way like eCommerce is? And that devops movement made us realize that we need to add to our stack. That stack will now have other computing clouds that is AWS and Azure and GCP now. So similar to the way we did Prism. You know Prism was really about going and making hypervisors invisible. You know we went ahead and said we'll add Calm to our portfolio because Calm is now going to be what Prism was to us back when we were really dealing with multi hypervisor world. Now it's going to be multi-cloud world. You know it's one of those things we had a gut around, and we really come to expect a lot of feedback and real innovation. I mean yesterday when we had the hackathon. The center, the epicenter of the discussion was Calm, was how do you automate on multiple clouds without having to write a single line of code? So we've come a long way since the acquisition of Calm two years ago. I think it's going to be a strong pillar in our overall product portfolio itself. Now the word multi-cloud is going to be used and over used. In fact, it's going to be blurring its lines with the idea of hyperconvergence of clouds, you know what does it mean. We just hope that hyperconvergence, the way it's called today will morph to become hyperconverged clouds not just hyperconverged boxes which is a software defined infrastructure definition itself. But let's focus on the why of multi-cloud. Why do we think it can't all go into a public cloud itself? 
The one big reason is just laws of the land. There's data sovereignty and computing sovereignty, regulations and compliance because of which you need to be in where the government with the regulations where the compliance rules want you to be. And by the way, that's just one reason why the cloud will have to disperse itself. It can't just be 10, 20 large data centers around the world itself because you have 200 plus countries and half of computing actually gets done outside the US itself. So it's a really important, very relevant point about the why of multi-cloud. The second one is just simple laws of physics. You know if there're machines at the edge, and they're producing so much data, you can't bring all the data to the compute. You have to take the compute which is stateless, it's an app. You take the app to where the data is because the network is the enemy. The network has always been the enemy. And when we thought we've made fatter networks, you've just produced more data as well. So this just goes without saying that you take something that's stateless that's without gravity, that's lightweight which is compute and the application and push it close to where the data itself is. And the third one which is related is just latency reasons you know? And it's not just about machine latency and electrons transferring over the speed light, and you can't defy the speed of light. It's also about human latency. It's also about multiple teams saying we need to federate and delegate, and we need to push things down to where the teams are as opposed to having to expect everybody to come to a very large computing power itself. So all the ways, the way they are, there will be at least three different ways of looking at multi-cloud itself. There's a centralized core cloud. We all go and relate to this because we've seen large data centers and so on. And that's the back office workhorse. It will crunch numbers. It will do processing. It will do a ton of things that will go and produce results for you know how we run our businesses, but there's also the dispersal of the cloud, so ROBO cloud. And this is the front office server that's really serving. It's a cloud that's going to serve people. It's going to be closer to people, and that's what a ROBO cloud is. We have a ton of customers out here who actually use Nutanix and the ROBO environments themselves as one node, two node, three node, five node servers, and it just collapses the entire server closet room in these ROBOs into something really, really small and minuscule. And finally, there's going to be another dispersed edge cloud because that's where the machines are, that's where the data is. And there's going to be an IOT machine fog because we need to miniaturize computing to something even smaller, maybe something that can really land in the palm in a mini server which is a PC like server, but you need to run everything that's enterprise grade. You should be able to go and upgrade them and monitor them and analyze them. You know do enough computing up there, maybe event-based processing that can actually happen. In fact, there's some great innovation that we've done at the edge with IOTs that I'd love for all of you to actually attend some sessions around as well. So with that being said, we have a hole in the stack. And that hole is probably one of the hardest problems that we've been trying to solve for the last two years. And Sunil will talk a lot about that. This idea of hybrid. The hybrid of multi-cloud is one of the hardest problems. 
Why? Because we're talking about really blurring the lines with owning and renting where you have a single-tenant environment which is your data center, and a multi-tenant environment which is the service providers data center, and the two must look like the same. And the two must look like the same is that hard a problem not just for burst out capacity, not just for security, not just for identity but also for networks. Like how do you blur the lines between networks? How do you blur the lines for storage? How do you really blur the lines for a single pane of glass where you can think of availability zones that look highly symmetric even though they're not because one of 'em is owned by you, and it's single-tenant. The other one is not owned by you, that's multi-tenant itself. So there's some really hard problems in hybrid that you'll hear Sunil talk about and the team. And some great strides that we've actually made in the last 12 months of really working on Xi itself. And that completes the picture now in terms of how we believe the state of computing will be going forward. So what are the must haves of a multi-cloud operating system? We talked about marketplace which is catalogs and automation. There's a ton of orchestration that needs to be done for multi-cloud to come together because now you have a self-service portal which is providing an eCommerce view. It's really about you know getting to do a lot of requests and workflows without having people come in the way, without even having tickets. There's no need for tickets if you can really start to think like a self-service portal as if you're just transacting eCommerce with machines and portals themselves. Obviously the next one is networking security. You need to blur the lines between on-prem and off-prem itself. These two play a huge role. And there's going to be a ton of details that you'll see Sunil talk about. But finally, what I want to focus on the rest of the talk itself here is what governance and compliance. This is a hard problem, and it's a hard problem because things have evolved. So I'm going to take a step back. Last 30 years of computing, how have consumption models changed? So think about it. 30 years ago, we were making decisions for 10 plus years, you know? Mainframe, at least 10 years, probably 20 plus years worth of decisions. These were decisions that were extremely waterfall-ish. Make 10s of millions of dollars worth of investment for a device that we'd buy for at least 10 to 20 years. Now as we moved to client-server, that thing actually shrunk. Now you're talking about five years worth of decisions, and these things were smaller. So there's a little bit more velocity in our decisions. We were not making as waterfall-ish decision as we used to with mainframes. But still five years, talk about virtualized, three tier, maybe three to five year decisions. You know they're still relatively big decisions that we were making with computer and storage and SAN fabrics and virtualization software and systems management software and so on. And here comes Nutanix, and we said no, no. We need to make it smaller. It has to become smaller because you know we need to make more agile decisions. We need to add machines every week, every month as opposed to adding you know machines every three to five years. And we need to be able to upgrade them, you know any point in time. You can do the upgrades every month if you had to, every week if you had to and so on. So really about more agility. 
And yet, we were not complete because there's another evolution going on off-prem in the public cloud where people are going and doing reserved instances. But more than that, they were doing on demand stuff where now the decision was days to weeks. Some of these units of compute were being rented for days to weeks, not years. And if you needed something more, you'd shift a little to the left and use reserved instances. And then spot pricing, you could do spot pricing for hours, and finally lambda functions. Now you could do function as a service where things could actually be running only for minutes, not even hours. So as you can see, there's a wide spectrum where when you move to the right, you get more elasticity, and when you move to the left, you're talking about predictable decision making. And in fact, it goes from minutes on one side to tens of years on the other itself. And we hope to actually go and blur the lines between where NTNX is today, where you see Nutanix right now, to where we really want to be with reserved instances and on demand. And that's the real ask of Nutanix. How do you take care of this discontinuity? Because when you're owning things, you actually end up here, and when you're renting things, you end up here. What does it mean to really blur the lines between these two? Because people do want to make decisions that are better than reserved instances in the public cloud. We'll talk about why reserved instances, which look like a proxy for Nutanix, are still very, very wasteful; even though you might think they're delightful, they're very, very wasteful. So what does it mean for on-prem and off-prem? You know you talk about cost governance, there's security compliance. These high velocity decisions we're actually making, you know where sometimes you could be right on cost but wrong on security, but sometimes you could be right on security but wrong on cost. We need to really figure out how machines make some of these decisions for us, how software helps us decide do we have the right balance between cost, governance, and security compliance itself? And to get it right, we have introduced our first SaaS service called Beam. And to talk more about Beam, I want to introduce Vijay Rayapati, who's the general manager of Beam engineering, to come up on stage and talk about Beam itself. Thank you Vijay. (rock music) So you've been here a couple of months now? >> Yes. >> At the same time, you spent the last seven, eight years really handling AWS. Tell us more about it. >> Yeah so we spent a lot of time over the last five years at Minjar trying to understand you know how customers are really consuming in this new world for their workloads. So essentially what we tried to do is understand the consumption models, workload patterns, and also build algorithms and apply intelligence to say how can we lower this cost and you know improve compliance of their workloads? And now with Nutanix what we're trying to do is how can we converge this consumption, right? Because what happens here is most customers start with on demand kind of consumption thinking it's really easy, but the total cost of ownership is so high as the workload elasticity increases, people go towards spot or autoscaling, but then you need a lot more automation that something like Calm can help them with. But as predictability of the workload increases, then you need to move towards reserved instances, right, to lower costs.
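To make the economics Vijay is describing concrete, here is a back-of-the-envelope sketch, with purely hypothetical prices, of when a reserved commitment actually beats on-demand. This is not Beam's pricing logic; it only illustrates the break-even-utilization idea that the next exchange about 20%, 25% utilized commitments turns on.

```python
# Back-of-the-envelope comparison of on-demand vs. a reserved commitment.
# All prices and hours are hypothetical; this is only an illustration, not Beam's logic.

ON_DEMAND_RATE = 0.192   # $/hour, paid only for hours actually used
RESERVED_RATE = 0.120    # effective $/hour of a one-year commitment, paid for every hour
HOURS_PER_YEAR = 8760

def yearly_cost(utilization: float) -> tuple[float, float]:
    """Return (on_demand_cost, reserved_cost) for a given fraction of hours actually used."""
    used_hours = HOURS_PER_YEAR * utilization
    return ON_DEMAND_RATE * used_hours, RESERVED_RATE * HOURS_PER_YEAR

for u in (0.25, 0.50, 0.75, 0.95):
    on_demand, reserved = yearly_cost(u)
    winner = "reserved" if reserved < on_demand else "on-demand"
    print(f"utilization {u:>4.0%}: on-demand ${on_demand:,.0f} vs reserved ${reserved:,.0f} -> {winner}")

# The commitment only pays off once utilization exceeds RESERVED_RATE / ON_DEMAND_RATE.
print(f"break-even utilization: {RESERVED_RATE / ON_DEMAND_RATE:.1%}")
```

At the 20% to 25% utilization discussed next, the "discounted" commitment ends up costing roughly two and a half times what the same consumption would have cost on demand, which is exactly the waste being called out.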
>> And those are some of the things that you go and advise on with some of the software that you folks have actually written. >> But there's a lot of waste even in the reserved instances because what happens is, while customers make these commitments for a year or three years, what we see across, like we track a billion dollars in public cloud consumption you know with Beam, is that customers use 20%, 25% of their commitments, right? So how can you really take the data of consumption and you know apply intelligence to essentially reduce their you know overall cost of ownership? >> You said something that's very telling. You said reserved instances, even though they're supposed to save, are still only 20%, 25% utilized. >> Yes, because the workloads are very dynamic. And the next thing is you can't do hot add CPU or hot add memory because you're buying them for peak capacity. There is no convergence of scaling there apart from scaling out as another node. >> So you actually sized it for peak, but then using 20%, 30%, you're still paying for the peak. >> That's right. >> Dheeraj: That can actually add up. >> That's what we're trying to say. How can we deliver visibility across clouds? You know how can we deliver optimization across clouds and consumption models and bring the control while retaining that agility and demand elasticity? >> That's great. So you want to show us something? >> Yeah absolutely. So this is Beam, as Dheeraj just outlined, our first SaaS service. And this is my first .Next. And you know glad to be here. So what you see here is a global consumption you know for a business across different clouds. Whether that's in a public cloud like Amazon, or Azure, or Nutanix. We kind of bring the consumption together for the month, the recent month, across your accounts and services and apply intelligence to say you know what is your spend efficiency across these clouds? Essentially there's a lot of intelligence that goes in to detect your workloads and consumption model to say if you're spending $100, how efficiently are you spending? How can you increase that? >> So you have a centralized view where you're looking at multiple clouds, and you know you talk about maybe you can take an example of an account and start looking at it? >> Yes, let's go into a cloud provider like you know for this business, let's go and take a look at what's happening inside an Amazon cloud. Here we get into the deeper details of what's happening with the consumption of specific services as well as the utilization of both on demand and RI. You know what can you do to lower your cost and detect your spend efficiency of a dollar to see you know are there resources that are provisioned by teams for applications that are not being used, or are there resources that we should go and rightsize because you know we have all this monitoring data, configuration data that we crunch through to basically detect this? >> You think there's billions of events that you look at every day. You're already looking at a billion dollars' worth of AWS spend. >> Right, right. >> So billions of events, billing, metering events every year to really figure out and optimize for them. >> So what we have here is a very popular international government organization. >> Dheeraj: Wow, so it looks like Russians are everywhere, the cloud is everywhere actually. >> Yes, it's quite popular. So when you bring your master account into Beam, we kind of detect all the linked accounts you know under that.
Then you can go and take a look not just at the organization level but, within it, at an account level. >> So these are child objects, you know. >> That's right. >> You can think of them as ephemeral accounts that you create because you don't want to be on the record when you're doing spam on Facebook, for example. >> Right, let's go and take a look at what's happening inside a Facebook ad spend account. So we have you know consumption of the services. Let's go deeper into compute consumption, and you kind of see a trendline. You can do a lot of computing. As you see, looks like one campaign has ended. They started another campaign. >> Dheeraj: It looks like they're not stopping yet, man. There's a lot of money being made in Facebook right now. (Vijay laughing) >> So not only do you get visibility at you know compute as a service inside a cloud provider, you can go deeper inside compute and say you know what is the service that I'm really consuming inside compute along with the CPUs and stuff, right? What is my data transfer? You know what is my network? What are my load balancers? So essentially you get much deeper visibility you know as a service, right. Because we have three goals for Beam. How can we deliver visibility across clouds? How can we deliver visibility across services? And how can we then deliver optimization? >> Well I think one thing that I just want to point out is how this SaaS application was an extremely teachable moment for me to learn about the different resources that people could use in the public cloud. So for all of you who actually have not gone deep enough into the idea of public cloud, this could be a great app for you to learn about things, the resources, you know things that you could do to save, and security, and things of that nature. >> Yeah. And we really believe in creating the single pane view you know to manage your optimization of a public cloud. You know as Ben spoke about, as a business, you need to have freedom to use any cloud. And that's what Beam delivers. How can you make the right decision for the right workload to use any of the cloud of your choice? >> Dheeraj: How 'about databases? You talked about compute as well but are there other things we could look at? >> Vijay: Yes, let's go and take a look at database consumption. What you see here is, inside Facebook ad spending, they're using all databases except Oracle. >> Dheeraj: Wow, looks like Oracle sales folks have been active in Russia as well. (Vijay laughing) >> So what we're seeing here is a global view of you know your spend efficiency, which is kind of a scorecard for your business for the dollars that you're spending. And the great thing is Beam kind of brings it together you know through its intelligence and algorithms to detect how you can rightsize resources and how you can eliminate things that you're not using. And we deliver a one-click fix, right? Let's go and take a look at resources that are maybe provisioned for storage and not being used. We deliver the seamless one-click philosophy that Nutanix has to eliminate it. >> So one click, you can actually just pick some of these wasteful things that might be looking delightful because using public cloud, using credit cards, you can go in and just say click fix, and it takes care of things. >> Yeah, and not only remove the resources that are unused, but it can go and rightsize resources across your compute, databases, load balancers, even PaaS services, right?
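The "detect unused resources and rightsize them" step Vijay just walked through can be pictured with a minimal sketch like the one below. The thresholds, resource names, and utilization samples are all hypothetical, and real tooling such as Beam would look at far richer monitoring and configuration data than average CPU; the point is only that the recommendations fall mechanically out of consumption data, which is what makes a one-click fix possible.

```python
# Minimal sketch of idle/oversized detection from utilization samples
# (illustrative only; thresholds, names, and data are hypothetical).

from statistics import mean

# Hypothetical monitoring data: resource name -> hourly CPU utilization samples (percent).
cpu_samples = {
    "web-frontend-01": [3, 2, 4, 1, 2, 3],        # barely used
    "etl-worker-07":   [22, 18, 25, 19, 21, 24],  # consistently low
    "db-primary":      [71, 80, 65, 77, 83, 74],  # healthy
}

IDLE_THRESHOLD = 5.0        # below this, recommend reclaiming the resource
RIGHTSIZE_THRESHOLD = 30.0  # below this, recommend a smaller instance shape

def recommend(samples: dict[str, list[float]]) -> list[tuple[str, str]]:
    """Turn raw utilization samples into simple reclaim/rightsize recommendations."""
    actions = []
    for name, series in samples.items():
        avg = mean(series)
        if avg < IDLE_THRESHOLD:
            actions.append((name, "reclaim (appears unused)"))
        elif avg < RIGHTSIZE_THRESHOLD:
            actions.append((name, "rightsize to a smaller shape"))
    return actions

for name, action in recommend(cpu_samples):
    print(f"{name}: {action}")
```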
And this is where the power of it kind of comes in for a business, whether you're using on-prem or off-prem. You know how can you really converge that consumption across both? >> Dheeraj: So do you have something for Nutanix too? >> Vijay: Yes, so we have basically been working on Nutanix with something that we're going to deliver you know later this year. As you can see here, we're bringing together the consumption for Nutanix, you know the services that you're using, the licensing and capacity that is available. And how can you also go and optimize within Nutanix environments >> That's great. >> for the next workload. Now let me quickly show you what we have on the compliance side. This is an extremely powerful thing that we've been working on for many years. What we deliver here, just like in cost governance, is a global view of your compliance across cloud providers. And the most powerful thing is you can go into a cloud provider, get the next level of visibility across cloud regimes for hundreds of policies. Not just policies but those policies across different regulatory compliances like HIPAA, PCI, CAS. And that's very powerful because-- >> So you're saying a lot of what you folks have done is codified these compliance checks in software to make sure that people can sleep better at night knowing that it's PCI, and HIPAA, and all that compliance actually comes together? >> And you can build this not just by cloud accounts, you can build them across cloud accounts, which is what we call security centers. Essentially you can go and take a deeper look at you know the things. We do a whole full body scan for your cloud infrastructure, whether it's Amazon AWS or Azure, and you can go and now, again, click to fix things. You know things that had probably been provisioned that are violating the security compliance rules that should be there. Again, we have the same one-click philosophy to say how can you really remove things. >> So again, similar to the savings side, you're saying you can go and fix some of these security issues by just doing one click. >> Absolutely. So the idea is how can we give our people the freedom to get visibility and use the right cloud and take the decisions instantly through one click. That's what Beam delivers you know today. And you know get really excited, and it's available at beam.nutanix.com. >> Our first SaaS service, ladies and gentlemen. Thank you so much for doing this, Vijay. It looks like there's going to be a talk here at 10:30. You'll talk more about the midterm elections there probably? >> Yes, so you can go and write your own security compliance policies as well. You know within Beam, and a lot of powerful things you can do. >> Awesome, thank you so much, Vijay. I really appreciate it. (audience clapping) So as you see, there's a lot of work that we're doing to really make multi-cloud work, which is a hard problem. You know think about the whole body of it: what about cost governance? What about security compliance? Obviously what about hybrid networks, and security, and storage, you know compute, many of the things that you've actually heard from us, but we're taking it to a level where the business users can now understand the implications. A CFO's office can understand the implications of waste and delight. So what does customer success mean to us? You know again, my favorite word in a long, long time is really go and figure out how do you make you, the customer, become operationally efficient.
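"Codified compliance checks" can sound abstract, so here is a deliberately tiny sketch of what one such check might look like in code. The rule, the resource descriptions, and the allowed ports are hypothetical and far simpler than what a policy engine like the one described above would ship; the point is only that a written policy becomes a function you can run against configuration data, with each finding a candidate for a one-click fix.

```python
# Sketch of a codified compliance check (illustrative; not Beam's policy engine).
# Resource descriptions below are hypothetical.

firewall_rules = [
    {"resource": "sg-web",   "port": 443,  "source": "0.0.0.0/0"},
    {"resource": "sg-db",    "port": 5432, "source": "0.0.0.0/0"},   # violation: database open to the world
    {"resource": "sg-admin", "port": 22,   "source": "10.0.0.0/8"},
]

ALLOWED_PUBLIC_PORTS = {80, 443}

def check_open_to_world(rules):
    """Flag rules that expose non-web ports to any source address."""
    findings = []
    for rule in rules:
        if rule["source"] == "0.0.0.0/0" and rule["port"] not in ALLOWED_PUBLIC_PORTS:
            findings.append(f"{rule['resource']}: port {rule['port']} is reachable from anywhere")
    return findings

for finding in check_open_to_world(firewall_rules):
    print("VIOLATION:", finding)
```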
You know there's a lot of stuff that we deliver through software that's completely uncovered. It's so latent, you don't even know you have it, but you've paid for it. So you've got to figure out what does it mean for you to really become operationally efficient, organizationally proficient. And it's really important for training, education, stuff that you know you're people might think it's so awkward to do in Nutanix, but it could've been way simpler if you just told you a place where you can go and read about it. Of course, I can just use one click here as opposed to doing things the old way. But most importantly to make it financially accountable. So the end in all this is, again, one of the things that I think about all the time in building this company because obviously there's a lot of stuff that we want to do to create orphans, you know things above the line and top line and everything else. There's also a bottom line. Delight and waste are two sides of the same coin. You know when we're talking about developers who seek delight with public cloud at the same time you're looking at IT folks who're trying to figure out governance. They're like look you know the CFOs office, the CIOs office, they're trying to figure out how to curb waste. These two things have to go hand in hand in this era of multi-cloud where we're talking about frictionless consumption but also governance that looks invisible. So I think, at the end of the day, this company will do a lot of stuff around one-click delight but also go and figure out how do you reduce waste because there's so much waste including folks there who actually own Nutanix. There's so much software entitlement. There's so much waste in the public cloud itself that if we don't go and put our arms around, it will not lead to customer success. So to talk more about this, the idea of delight and the idea of waste, I'd like to bring on board a person who I think you know many of you actually have talked about it have delightful hair but probably wasted jokes. But I think has wasted hair and delightful jokes. So ladies and gentlemen, you make the call. You're the jury. Sunil R.M.J. Potti. ("Free" by Broods) >> So that was the first time I came out from the bottom of a screen on a stage. I actually now know what it feels to be like a gopher. Who's that laughing loudly at the back? Okay, do we have the... Let's see. Okay, great. We're about 15 minutes late, so that means we're running right on time. That's normally how we roll at this conference. And we have about three customers and four demos. Like I think there's about three plus six, about nine folks coming onstage. So we'll have our own version of the parade as well on the main stage for the next 70 minutes. So let's just jump right into it. I think we've been pretty consistent in terms of our longterm plans since we started the company. And it's become a lot more clearer over the last few years about our plans to essentially make computing invisible as Dheeraj mentioned. We're doing this across multiple acts. We started with HCI. We call it making infrastructure invisible. We extended that to making data centers invisible. And then now we're in this mode of essentially extending it to converging clouds so that you can actually converge your consumption models. 
And so today's conference, and essentially the theme that you're going to be seeing throughout the breakout sessions, is about a journey towards invisible clouds, but make sure that you internalize the fact that we're investing heavily in each of the three phases. It's not just about the hybrid cloud with Nutanix, it's about actually finishing the job of making infrastructure invisible, expanding that to kind of go after the full data center, and then of course embarking on some real meaningful things around invisible clouds, okay? And to start the session, I think you know the part that I wanted to make sure that we are all on the same page on, because most of us in the room are still probably in this phase of the journey, which is about invisible infrastructure. And there the three key products, and especially two of them that most of you guys know, are Acropolis and Prism. And they're sort of like the bedrock of our company. You know especially Acropolis, which is about the web scale architecture. Prism is about consumer grade design. And Acropolis is now really mature. It's in its seventh year of innovation. We still have more than half of our company in terms of R and D spend still on Acropolis and Prism. So our core product is still sort of where we think we have significant differentiation. We're not going to let our foot off the pedal there. You know every time somebody comes to me and says look there's a new HCI vendor popping up or an existing HCI vendor out there, I ask a simple question to our customers saying show me 100 customers with 100 node deployments, and it will be very hard to find any other vendor out there that does the same thing. And that's the power of Acropolis, the core platform. And then it's you know the fact that the velocity associated with Acropolis continues at a fast pace. We came out with various new capabilities in 5.5 and 5.6, and one of the most complicated things to get right was to shrink our three node cluster to a one node, two node deployment. Most of you actually had requirements on remote office, branch office, or the edge that actually gave us you know sort of like the impetus to kind of go design some new capabilities into our core OS to get this out. And associated with Acropolis and expanding into Prism, as you will see, the first couple of years of Prism was all about refactoring the user interface, doing a good job with automation. But more and more of the investments around Prism are going to be based on machine learning. And you've seen some variants of that over the last 12 months, and I can tell you that in the next 12 to 24 months, most of our investments around infrastructure operations are going to be driven by AI techniques, starting with most of our R and D spend also going into machine-learning algorithms. So when you talk about all the enhancements that have come on with Prism, whether it be you know the management console changing to become much more automated, whether now we give you automatic rightsizing, anomaly detection, or a series of functionality that has gone into it, the real core sort of capabilities that we're putting into Prism and Acropolis are probably best served by looking at the quality of the product. You probably have seen this slide before. We started showing the number of nodes shipped by Nutanix two years ago at this conference. It was about 35,000 plus nodes at that time. And since then, obviously we've you know continued to grow.
And we would draw this line, which was about enterprise class quality. That for the number of bugs found as a percentage of nodes shipped, there's a certain line that's drawn. World class companies do probably about 2% to 3% in terms of the number of CFDs per node shipped. And we had just broken that number two years ago. And to give you guys an idea of how that curve has shown up, it's now currently at 0.95%. And so along with velocity, you know this focus on being true to our roots of reliability and stability continues to be, you know it's an internal challenge, but it's also one of the things that we keep a real focus on. And so between Acropolis and Prism, those are sort of like our core focus areas to sort of give us the confidence that look, we have this really high bar that we're sort of keeping ourselves accountable to, which is about being the most advanced enterprise cloud OS on the planet. And we will keep it this way for the next 10 years. And to complement that, over a period of time of course, we've added a series of services. So these are services not just for VMs but also for files, blocks, containers, but all being delivered in that single one-click operations fashion. And to really talk more about it, and actually probably to show you the real deal there, it's my great pleasure to call our own version of Moses inside the company, most of you guys know him as Steve Poitras. Come on up, Steve. (audience clapping) (rock music) >> Thanks Sunil. >> You barely fit in that door, man. Okay, so what are we going to talk about today, Steve? >> Absolutely. So when we think about when Nutanix first got started, it was really focused around VDI deployments, smaller workloads. However over time as we've evolved the product, added additional capabilities and features, that's grown from VDI to business critical applications as well as cloud native apps. So let's go ahead and take a look. >> Sunil: And we'll start with like Oracle? >> Yeah, that's one of the key ones. So here we can see our Prism Central user interface, and we can see our Thor cluster, obviously speaking to the Avengers theme here. We can see this is doing right around 400,000 IOPS at around 360 microseconds latency. Now obviously Prism Central allows you to manage all of your Nutanix deployments, but this is just running on one single Nutanix cluster. So if we hop over here to our explore tab, we can see we have a few categories. We have some Kubernetes, some AFS, some XenDesktop as well as Oracle RAC. Now if we hop over to Oracle RAC, we're running a SLOB workload here. So obviously with Oracle enterprise applications, performance, consistency, and extremely low latency are very critical. So with this SLOB workload, we're running right around 300 microseconds of latency. >> Sunil: So this is what, how many node Oracle RAC cluster is this? >> Steve: This is a six node Oracle RAC deployment. >> Sunil: Got it. And so what has gone into the product in recent releases to kind of make this happen? >> Yeah so obviously on the hardware front, there's been a lot of evolution in storage mediums. So with the introduction of NVMe and persistent memory technologies like 3D XPoint, that's meant storage media has become a lot faster. Now to allow you to fully take advantage of that, that's where we've had to do a lot of optimizations within the storage stack. So with AHV, we have what we call AHV turbo mode which allows you to fully take advantage of those faster storage mediums at that much lower latency.
And then obviously on the networking front, technologies such as RDMA can be leveraged to optimize that network stack. >> Got it. So that was Oracle RAC running on a you know Nutanix cluster. It used to be a big deal a couple of years ago. Now we've got many customers doing that. On the same environment though, what we're going to show you is the advent of actually putting file services in the same scale out environment. And you know many of you in the audience probably know about AFS. We released it about 12 to 14 months ago. It's been one of our most popular new products of all time within Nutanix's history. And we had SMB support, which was for user file shares and VDI deployments, and it took a while to bake, to get to scale and reliability. And then in the last release, the recent release that we just shipped, we now added NFS support so that we can now go after the full scale file server consolidation. So let's take a look at some of that stuff. >> Yep, let's do it. So hopping back over to Prism, we can see our Thor cluster here. Overall cluster-wide latency right around 360 microseconds. Now we'll hop down to our file server section. So here we can see we have our NextAFS file server hosting right about 16.2 million files. Now if you look at our shares and exports, we can see we have a mix of different shares. So one of the shares that you see there is home directories. This is an SMB share which is actually mapped and being leveraged by our VDI desktops for home folders, user profiles, things of that nature. We can also see this Oracle backup share here which is exposed to our RAC hosts via NFS. So RMAN is actually leveraging this to provide native database backups. >> Got it. So Oracle VMs, backups using files, or any other file share requirements with AFS. Do we have the cluster also showing, I know, so I saw some Kubernetes as well on it. Let's talk about what we're thinking of doing there. >> Yep, let's do it. So if we think about cloud, cloud's obviously a big buzzword, so are containers and Kubernetes. So with ACS 1.0 what we did is we introduced native support for Docker integration. >> And pause there. And we screwed up. (laughing) So just like the market took a left turn on Kubernetes, obviously we realized that, and now we're working on ACS 2.0 which is what we're going to talk about, right? >> Exactly. So with ACS 2.0, we've introduced native Kubernetes support. Now when I think about Kubernetes, there's really two core areas that come to mind. The first one is around native integration. So with that, we have our Kubernetes volume integration, we're obviously doing a lot of work on the networking front, and we'll continue to push there from an integration point of view. Now the other piece is around the actual deployment of Kubernetes. When we think about a lot of Nutanix administrators or IT admins, they may have never deployed Kubernetes before, so this could be a very daunting task. And true to the Nutanix nature, we not only want to make our platform simple and intuitive, we also want to do this for any ecosystem products. So with ACS 2.0, we've simplified the full Kubernetes deployment, and switching over to our ACS 2.0 interface, we can see this create cluster button. Now this actually pops up a full wizard.
This wizard will actually walk you through the full deployment process, gather the necessary inputs for you, and in a matter of a few clicks and a few minutes, we have a full Kubernetes deployment fully provisioned, the masters, the workers, all the networking fully done for you, very simple and intuitive. Now if we hop back over to Prism, we can see we have this ACS2 Kubernetes category. Clicking on that, we can see we have eight instances of virtual machines. And here are Kubernetes virtual machines which have actually been deployed as part of this ACS2 installer. Now one of the nice things is it makes the IT administrator's job very simple and easy to do. The deployment straightforward monitoring and management very straightforward and simple. Now for the developer, the application architect, or engineers, they interface and interact with Kubernetes just like they would traditionally on any platform. >> Got it. So the goal of ACS is to ensure that the developer ecosystem still uses whatever tools that they are you know preferring while at that same time allowing this consolidation of containers along with VMs all on that same, single runtime, right? So that's ACS. And then if you think about where the OS is going, there's still some open space at the end. And open space has always been look if you just look at a public cloud, you look at blocks, files, containers, the most obvious sort of storage function that's left is objects. And that's the last horizon for us in completing the storage stack. And we're going to show you for the first time a preview of an upcoming product called the Acropolis Object Storage Services Stack. So let's talk a little bit about it and then maybe show the demo. >> Yeah, so just like we provided file services with AFS, block services with ABS, with OSS or Object Storage Services, we provide native object storage, compatibility and capability within the Nutanix platform. Now this provides a very simply common S3 API. So any integrations you've done with S3 especially Kubernetes, you can actually leverage that out of the box when you've deployed this. Now if we hop back over to Prism, I'll go here to my object stores menu. And here we can see we have two existing object storage instances which are running. So you can deploy however many of these as you wanted to. Now just like the Kubernetes deployment, deploying a new object instance is very simple and easy to do. So here I'll actually name this instance Thor's Hammer. >> You do know he loses it, right? He hasn't seen the movies yet. >> Yeah, I don't want any spoilers yet. So once we specified the name, we can choose our capacity. So here we'll just specify a large instance or type. Obviously this could be any amount or storage. So if you have a 200 node Nutanix cluster with petabytes worth of data, you could do that as well. Once we've selected that, we'll select our expected performance. And this is going to be the number of concurrent gets and puts. So essentially how many operations per second we want this instance to be able to facilitate. Once we've done that, the platform will actually automatically determine how many virtual machines it needs to deploy as well as the resources and specs for those. And once we've done that, we'll go ahead and click save. 
Now here we can see it's actually going through, doing the deployment of the virtual machines, applying any necessary configuration, and in a matter of a few clicks and a few seconds, we actually have this Thor's Hammer object storage instance which is up and running. Now if we hop over to one of our existing object storage instances, we can see this has three buckets. So one for Kafka-queue; I'm actually using this for my Kafka cluster where I have right around 62 million objects, all storing protobufs. The second one there is Spark. So I actually have a Spark cluster running on our Kubernetes instance deployed via ACS 2.0. Now this is doing analytics on top of this data using S3 as a storage backend. Now for these objects, we support native versioning, native object encryption as well as WORM compliance. So if you want to have expiry periods, retention intervals, that sort of thing, we can do all that. >> Got it. So essentially what we've just shown you is, with upcoming objects as well, that the same OS can now support VMs, files, objects, containers, all on the same one-click operational fabric. And so that's in some way the real power of Nutanix, to still keep that consistency and scalability in place as we're covering each and every workload inside the enterprise. So before Steve gets off stage though, I wanted to talk to you guys a little bit about something. You know how many of you have been to our Nutanix headquarters in San Jose, California? A few. I know there's like, I don't know, 4,000 or 5,000 people here. If you do come to the office, you know when you land in San Jose Airport, on the way to long-term parking, you'll pass our office. It's that close. And if you come to the fourth floor, you know one of the cubes, that's where I sit. In the cube beside me is Steve. Steve sits in the cube beside me. And when I first joined the company, three or four years ago, Steve's cube, if you go to his cube, it no longer looks like this, but it used to have a lot of this stuff. It was like big containers of this. I remember the first time. Since I started joking about it, he started reducing it. And then Steve eventually got married, much to our surprise. (audience laughing) Much to his wife's surprise. And then he also had a baby as a bigger surprise. And if you come over to our office, and we welcome you, and you come to the fourth floor, find my cube or you'll find Steve's cube, it now looks like this. Okay, so thanks a lot, my man. >> Cool, thank you. >> Thanks so much. (audience clapping)
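Since the object store demoed above exposes a common S3 API, existing S3 tooling should work against it largely unchanged. Below is a minimal sketch using boto3's standard custom-endpoint support; the endpoint URL, credentials, and bucket and key names are placeholders, and whether a particular capability such as versioning is enabled on a given store is not something this sketch asserts.

```python
# Minimal sketch of talking to an S3-compatible object store with boto3
# (endpoint, credentials, and bucket/key names below are placeholders).

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9440",  # hypothetical object-store endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="kafka-archive")

# Enable versioning, mirroring the native object versioning mentioned in the demo.
s3.put_bucket_versioning(
    Bucket="kafka-archive",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_object(Bucket="kafka-archive", Key="topic-a/000001.pb", Body=b"serialized protobuf payload")
obj = s3.get_object(Bucket="kafka-archive", Key="topic-a/000001.pb")
print(obj["Body"].read())
```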
So when we made the decision to move to hyperconverged infrastructure and chose Nutanix as our partner, we rapidly started to deploy. And what I mean by that is Sunil and some of the Nutanix executives had come out to visit with us and talk about their product on a Tuesday. And on a Wednesday after making the decision, I picked up the phone and said you know what I've got to deploy for my VDI cluster. So four nodes showed up on Thursday. And from the time it was plugged in to moving over 300 VDIs and 50 terabytes of storage and turning it over for the business for use was less than three days. So it was really excellent testament to how simple it is to start, and deploy, and utilize the Nutanix infrastructure. Now part of that was the delight that we experienced from our customers after that deployment. So we got phone calls where people were saying this report it used to take so long that I'd got out and get a cup of coffee and come back, and read an article, and do some email, and then finally it would finish. Those reports are running in milliseconds now. It's one click. It's very, very simple, and we've delighted our customers. Now across that journey, we have gone from the simple workloads like VDIs to the much more complex workloads around Splunk and Hadoop. And what's really interesting about our Splunk deployment is we're handling over a billion events being logged everyday. And the deployment is smaller than what we had with a three tiered infrastructure. So when you hear people talk about waste and getting that out and getting to an invisible environment where you're just able to run it, that's what we were able to achieve both with everything that we're running from our public facing websites to the back office operations that we're using which include Splunk and even most recently our Cloudera and Hadoop infrastructure. What it does is it's got 30 crawlers that go out on the internet and start bringing data back. So it comes back with over two terabytes of data everyday. And then that environment, ingests that data, does work against it, and responds to the business. And that again is something that's smaller than what we had on traditional infrastructure, and it's faster and more stable. >> Got it. And it covers a lot of use cases as well. You want to speak a few words on that? >> So the use cases, we're 90%, 95% deployed on Nutanix, and we're covering all of our use cases. So whether that's a customer facing app or a back office application. And what are business is doing is it's handling large portfolios of data for fortune 500 companies and law firms. And these applications are all running with improved stability, reliability, and performance on the Nutanix infrastructure. >> And the plan going forward? >> So the plan going forward, you actually asked me that in Miami, and it's go global. So when we started in Miami and that first deployment, we had four nodes. We now have 283 nodes around the world, and we started with about 50 terabytes of data. We've now got 3.8 petabytes of data. And we're deployed across four data centers and six remote offices. And people ask me often what is the value that we achieved? So simplification. It's all just easier, and it's all less expensive. Being able to scale with the business. So our Cloudera environment ended up with one day where it spiked to 1,000 times more load, 1,000 times, and it just responded. We had rally cries around improved productivity by six times. 
So 600% improved productivity, and we were able to actually achieve that. The numbers you just saw on the slide, the ones that went by very, very fast, were that we calculated a 40% reduction in total cost of ownership. We've exceeded that. And when we talk about waste, that other number on the board is this: every time I save the company one hour of maintenance activity or unplanned downtime in a month, and we're now able to do the majority of our maintenance activities without disrupting any of our business solutions, I'm saving $750,000. >> Wow. All right, Karen from CSE. Thank you so much. That was great. Thank you. I mean, some of these data points, frankly, as I started talking to Karen as well as some other customers, are pretty amazing in terms of the genuine value beyond financial value. Kind of like the emotional sort of benefits that good products deliver to some of our customers. And I think that's one of the core things that we take back into engineering, to keep ourselves honest on velocity, on quality, even on hiring people and so forth. The more we touch customers' lives, the more we touch our partners' lives, the more it allows us to put ourselves in their shoes and make sure that we're doing the right thing in terms of the product. So that was the first part, invisible infrastructure. And our goal, as we've always talked about, our true north, is to make sure that this single OS can be an exact replica, a truly modern, thoughtful but original design, that brings the power of public cloud, these AWS or GCP like architectures, into your mainstream enterprises. And so when we take that to the next level, which is about expanding the scope to go beyond invisible infrastructure to invisible data centers, it starts with a few things. Obviously, it starts with virtualization and a level of intelligent management, extends to automation, and then, as we'll talk about, we have to embark on encompassing the network. And that's what we'll talk about with Flow. But to start this, let me again go back to one of our core products, which is the bedrock of our opinionated design inside this company, which is Prism and Acropolis. And Prism, as I mentioned, comes with a ton of machine-learning based intelligence built into the product, and in 5.6 we've done a ton of work. In fact, a lot of features are coming out now because PC, Prism Central, has been decoupled from our mainstream release train and will continue to release on its own cadence. And the same thing applies when you flip over to AHV, which is on its own train. Now AHV, two years ago it was all about can I use AHV for VDI? Can I use AHV for ROBO? Now I'm pretty clear about where you cannot use AHV. If you need memory overcommit, stay with VMware or something. If you need, you know, Metro, stay with another technology, else it's game on, right? And if you really look at the adoption of AHV in the mainstream enterprise, the customers now speak for themselves. These are all examples of large global enterprises with multimillion dollar ELAs in play that have now been switched over. I'll give you a simple example here, and there are lots of these, I'm sure many of you in the audience are in this camp, and when you look at the breakout sessions in the pods, you'll get a sense of this. But I'll give you one simple example. If you look at the online payment company, I'm pretty sure everybody's used it at one time or the other.
They had the world's largest private cloud on OpenStack, 21,000 nodes. And they were actually public about it three or four years ago. And in the last year and a half, they put us through a rigorous POC, testing, scale, hardening, and it's a full blown AHV only stack. And they've started cutting over. Obviously they're not there yet completely, but they're now literally in hundreds of nodes of deployment of Nutanix with AHV as their primary operating system. So it is primetime from a deployment perspective. And with that as the base, no cloud is complete without actually having self-service provisioning that truly drives one-click automation, and can you do that in this consumer grade design? And Calm was acquired, as you guys know, in 2016. We had a choice of taking Calm. It was reasonably feature complete. It supported multiple clouds. It supported ESX, it supported brownfield, it supported AHV. I mean they'd already done the integration with Nutanix even before the acquisition. And we had a choice. The choice was to go down the path of DynamicOps or some other products, where you take it for revenue or for acceleration, plop it into the ecosystem, and sell it as this power-sucking alien on top of our stack, right? Or we took a step back, re-engineered the product, kept some of the core essence like the workflow engine, which was good, the automation, the object model and all, but refactored it to make it look like a natural extension of our operating system. And that's what we did with Calm. And we just launched it in December, and it's been one of our most popular new products, now flying off the shelves. If you look at the number of registrants, and I got a notification of this for the breakout sessions, the number one session that has been preregistered, with over 500 people, in fact the first two sessions, are around Calm. And justifiably so, because it lives up to its promise, even though it'll take its time to get to all the bells and whistles, all the capabilities that have come through with AHV or Acropolis in the past. But the feature functionality, the product-market fit associated with Calm, is dead on based on the feedback we've received. And so Calm itself is on its own rapid cadence. We had AWS and AHV in the first release. Three or four months later, we added ESX support. We added GCP support and a whole bunch of other capabilities, and I think the essence of Calm is that if you can combine private cloud automation and also extend it to multi-cloud automation, it really sets Nutanix on its first genuine path towards multi-cloud. But then, as I said, if you really fixate on a software defined data center message, we're not complete as a full blown AWS or GCP like IaaS stack until we do the last horizon of networking. And you probably heard me say this before. You've heard Dheeraj and others talk about it before: our problem in networking isn't the same as in storage. Because the data plane in networking works. There are good L2 switches from Cisco, Arista, and so forth, but the real problem in networking is in the control plane. When something goes wrong at a VM level in Nutanix, you're able to identify whether it's a storage problem or a compute problem, but we don't know whether it's a VLAN that's misconfigured, or there've been some packets dropped at the top of the rack. Well that all ends now with Flow.
And with Flow, essentially what we've now done is take the work that we've been working on to create built-in visibility, put some network automation so that you can actually provision VLANs when you provision VMs. And then augment it with micro segmentation policies all built in this easy to use, consume fashion. But we didn't stop there because we've been talking about Flow, at least the capabilities, over the last year. We spent significant resources building it. But we realized that we needed an additional thing to augment its value because the world of applications especially discovering application topologies is a heady problem. And if we didn't address that, we wouldn't be fulfilling on this ambition of providing one-click network segmentation. And so that's where Netsil comes in. Netsil might seem on the surface yet another next generation application performance management tool. But the innovations that came from Netsil started off at the research project at the University of Pennsylvania. And in fact, most of the team right now that's at Nutanix is from the U Penn research group. And they took a really original, fresh look at how do you sit in a network in a scale out fashion but still reverse engineer the packets, the flow through you, and then recreate this application topology. And recreate this not just on Nutanix, but do it seamlessly across multiple clouds. And to talk about the power of Flow augmented with Netsil, let's bring Rajiv back on stage, Rajiv. >> How you doing? >> Okay so we're going to start with some Netsil stuff, right? >> Yeah, let's talk about Netsil and some of the amazing capabilities this acquisition's bringing to Nutanix. First of all as you mentioned, Netsil's completely non invasive. So it installs on the network, it does all its magic from there. There're no host agents, non of the complexity and compatibility issues that entails. It's also monitoring the network at layer seven. So it's actually doing a deep packet inspection on all your application data, and can give you insights into services and APIs which is very important for modern applications and the way they behave. To do all this of course performance is key. So Netsil's built around a completely distributed architecture scaled to really large workloads. Very exciting technology. We're going to use it in many different ways at Nutanix. And to give you a flavor of that, let me show you how we're thinking of integrating Flow and Nestil together, so micro segmentation and Netsil. So to do that, we install Netsil in one of our Google accounts. And that's what's up here now. It went out there. It discovered all the VMs we're running on that account. It created a map essentially of all their interactions, and you can see it's like a Google Maps view. I can zoom into it. I can look at various things running. I can see lots of HTTP servers over here, some databases. >> Sunil: And it also has stats, right? You can go, it actually-- >> It does. We can take a look at that for a second. There are some stats you can look at right away here. Things like transactions per second and latencies and so on. But if I wanted to micro segment this application, it's not really clear how to do so. There's no real pattern over here. Taking the Google Maps analogy a little further, this kind of looks like the backstreets of Cairo or something. So let's do this step by step. Let me first filter down to one application. Right now I'm looking at about three or four different applications. 
And Netsil integrates with the metadata. So this is metadata that the clouds provide. So I can search all the tags that I have. So by doing that, I can zoom in on just the financial application. And when I do this, the view gets a little bit simpler, but there's still no real pattern. It's not clear how to micro segment this, right? And this is where the power of Netsil comes in. This is a fairly naive view. This is what a tool operating at layer four, just looking at ports and TCP traffic, would give you. But by doing deep packet inspection, Netsil can get into the services layer. So instead of grouping these interactions by hostname, let's group them by service. So we group by service tier. And now you can see this is a much simpler picture. Now I have some patterns. I have a couple of load balancers, an HA proxy and an Nginx. I have a web application front end. I have some application servers running authentication services, search services, et cetera, a database, and a database replica. I could go ahead and micro segment at this point. It's quite possible to do it at this point. But this is almost too granular a view. We actually don't usually want to micro segment at the individual service level. You think more in terms of application tiers, the tiers that different services belong to. So let me go ahead and group this differently. Let me group this by app tier. And when I do that, a really simple picture emerges. I have a load balancing tier talking to a web application front end tier, an API tier, and a database tier. Four tiers in my application. And this is something I can work with. This is something that I can micro segment fairly easily. So let's switch over to-- >> Before we do that though, do you guys see how he gave himself the pseudonym called Dom Toretto? >> Focus Sunil, focus. >> Yeah, for those guys, you know that's not the Avengers theme, man, that's the Fast and Furious theme. >> Rajiv: I think we're a year ahead. This is next year's theme. >> Got it, okay. So before we cut over from Netsil to Flow, do we want to talk a few words about the power of Flow, and what's available in 5.6? >> Sure, so Flow's been around since the 5.6 release. Actually some of the functionality came in before that. So it's got visibility into the network. It helps you debug problems with VLANs and so on. We have a lot of orchestration with other third party vendors, with load balancers, with switches, to make publishing much simpler. And then of course with our most recent release, we GA'ed our micro segmentation capabilities. And that of course is the most important feature we have in Flow right now. And if you look at how Flow policy is set up, it looks very similar to what we just saw with Netsil. So we have a load balancer talking to a web app, API, database. It's almost identical to what we saw just a moment ago. So while this policy was created manually, it is something that we can automate, and it is something that we will do in future releases. Right now, it's of course not been integrated at that level yet. So this was created manually. So one thing you'll notice over here is that the database tier doesn't get any direct traffic from the internet. All internet traffic goes to the load balancer, and only specific services then talk to the database. So this policy right now is in monitoring mode. It's not actually being enforced. So let's see what happens if I try to attack the database. I'll start a hack against the database. And I have my trusty brute force password script over here.
It's trying the most common passwords against the database. And if I happen to choose a dictionary word or left the default passwords on, eventually it will log into the database. And when I go back over here in Flow what happens is it actually detects there's now an ongoing a flow, a flow that's outside of policy that's shown up. And it shows this in yellow. So right alongside the policy, I can visualize all the noncompliant flows. This makes it really easy for me now to make decisions, does this flow should it be part of the policy, should it not? In this particular case, obviously it should not be part of the policy. So let me just switch from monitoring mode to enforcement mode. I'll apply the policy, give it a second to propagate. The flow goes away. And if I go back to my script, you can see now the socket's timing out. I can no longer connect to the database. >> Sunil: Got it. So that's like one click segmentation and play right now? >> Absolutely. It's really, really simple. You can compare it to other products in the space. You can't get simpler than this. >> Got it. Why don't we got back and talk a little bit more about, so that's Flow. It's shipping now in 5.6 obviously. It'll come integrated with Netsil functionality as well as a variety of other enhancements in that next few releases. But Netsil does more than just simple topology discovery, right? >> Absolutely. So Netsil's actually gathering a lot of metrics from your network, from your host, all this goes through a data pipeline. It gets processed over there and then gets captured in a time series database. And then we can slice and dice that in various different ways. It can be used for all kinds of insights. So let's see how our application's behaving. So let me say I want to go into the API layer over here. And I instantly get a variety of metrics on how the application's behaving. I get the most requested endpoints. I get the average latency. It looks reasonably good. I get the average latency of the slowest endpoints. If I was having a performance problem, I would know exactly where to go focus on. Right now, things look very good, so we won't focus on that. But scrolling back up, I notice that we have a fairly high error rate happening. We have like 11.35% of our HTTP requests are generating errors, and that deserves some attention. And if I scroll down again, and I see the top five status codes I'm getting, almost 10% of my requests are generating 500 errors, HTTP 500 errors which are internal server errors. So there's something going on that's wrong with this application. So let's dig a little bit deeper into that. Let me go into my analytics workbench over here. And what I've plotted over here is how my HTTP requests are behaving over time. Let me filter down to just the 500 ones. That will make it easier. And I want the 500s. And I'll also group this by the service tier so that I can see which services are causing the problem. And the better view for this would be a bar graph. Yes, so once I do this, you can see that all the errors, all the 500 errors that we're seeing have been caused by the authentication service. So something's obviously wrong with that part of my application. I can go look at whether Active Directory is misbehaving and so on. So very quickly from a broad problem that I was getting a high HTTP error rate. In fact, usually you will discover there's this customer complaining about a lot of errors happening in your application. You can quickly narrow down to exactly what the cause was. >> Got it. 
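The drill-down Rajiv just walked through, from an overall error rate to the one tier responsible, is at its core a group-by over per-request records. A rough, generic sketch of that aggregation, with assumed field names and toy data rather than Netsil's actual query engine:

```python
from collections import Counter

# One record per observed HTTP request, tagged with the serving tier --
# roughly what layer-7 inspection yields. Toy data, assumed field names.
requests = [
    {"tier": "load-balancer", "status": 200},
    {"tier": "web-frontend", "status": 200},
    {"tier": "api", "status": 200},
    {"tier": "auth-service", "status": 500},
    {"tier": "auth-service", "status": 500},
    {"tier": "database", "status": 200},
]

# Overall error rate: share of responses with a 5xx status code.
errors = [r for r in requests if r["status"] >= 500]
print(f"error rate: {len(errors) / len(requests):.1%}")

# Group the errors by tier to see which tier is actually failing.
for tier, count in Counter(r["tier"] for r in errors).most_common():
    print(tier, count)
```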
This is what we mean by hyperconvergence of the network which is if you can truly isolate network related problems and associate them with the rest of the hyperconvergence infrastructure, then we've essentially started making real progress towards the next level of hyperconvergence. Anyway, thanks a lot, man. Great job. >> Thanks, man. (audience clapping) >> So to talk about this evolution from invisible infrastructure to invisible data centers is another customer of ours that has embarked on this journey. And you know it's not just using Nutanix but a variety of other tools to actually fulfill sort of like the ambition of a full blown cloud stack within a financial organization. And to talk more about that, let me call Vijay onstage. Come on up, Vijay. (rock music) >> Hey. >> Thank you, sir. So Vijay looks way better in real life than in a picture by the way. >> Except a little bit of gray. >> Unlike me. So tell me a little bit about this cloud initiative. >> Yeah. So we've won the best cloud initiative twice now hosted by Incisive media a large magazine. It's basically they host a bunch of you know various buy side, sell side, and you can submit projects in various categories. So we've won the best cloud twice now, 2015 and 2017. The 2017 award is when you know as part of our private cloud journey we were laying the foundation for our private cloud which is 100% based on hyperconverged infrastructure. So that was that award. And then 2017, we've kind of built on that foundation and built more developer-centric next gen app services like PAS, CAS, SDN, SDS, CICD, et cetera. So we've built a lot of those services on, and the second award was really related to that. >> Got it. And a lot of this was obviously based on an infrastructure strategy with some guiding principles that you guys had about three or four years ago if I remember. >> Yeah, this is a great slide. I use it very often. At the core of our infrastructure strategy is how do we run IT as a business? I talk about this with my teams, they were very familiar with this. That's the mindset that I instill within the teams. The mission, the challenge is the same which is how do we scale infrastructure while reducing total cost of ownership, improving time to market, improving client experience and while we're doing that not lose sight of reliability, stability, and security? That's the mission. Those are some of our guiding principles. Whenever we take on some large technology investments, we take 'em through those lenses. Obviously Nutanix went through those lenses when we invested in you guys many, many years ago. And you guys checked all the boxes. And you know initiatives change year on year, the mission remains the same. And more recently, the last few years, we've been focused on converged platforms, converged teams. We've actually reorganized our teams and aligned them closer to the platforms moving closer to an SRE like concept. >> And then you've built out a full stack now across computer storage, networking, all the way with various use cases in play? >> Yeah, and we're aggressively moving towards PAS, CAS as our method of either developing brand new cloud native applications or even containerizing existing applications. So the stack you know obviously built on Nutanix, SDS for software fine storage, compute and networking we've got SDN turned on. We've got, again, PAS and CAS built on this platform. And then finally, we've hooked our CICD tooling onto this. 
And again, the big picture was always frictionless infrastructure which we're very close to now. You know 100% of our code deployments into this environment are automated. >> Got it. And so what's the net, net in terms of obviously the business takeaway here? >> Yeah so at Northern we don't do tech for tech. It has to be some business benefits, client benefits. There has to be some outcomes that we measure ourselves against, and these are some great metrics or great ways to look at if we're getting the outcomes from the investments we're making. So for example, infrastructure scale while reducing total cost of ownership. We're very focused on total cost of ownership. We, for example, there was a build team that was very focus on building servers, deploying applications. That team's gone down from I think 40, 45 people to about 15 people as one example, one metric. Another metric for reducing TCO is we've been able to absorb additional capacity without increasing operating expenses. So you're actually building capacity in scale within your operating model. So that's another example. Another example, right here you see on the screen. Faster time to market. We've got various types of applications at any given point that we're deploying. There's a next gen cloud native which go directly on PAS. But then a majority of the applications still need the traditional IS components. The time to market to deploy a complex multi environment, multi data center application, we've taken that down by 60%. So we can deliver server same day, but we can deliver entire environments, you know add it to backup, add it to DNS, and fully compliant within a couple of weeks which is you know something we measure very closely. >> Great job, man. I mean that's a compelling I think results. And in the journey obviously you got promoted a few times. >> Yep. >> All right, congratulations again. >> Thank you. >> Thanks Vijay. >> Hey Vijay, come back here. Actually we forgot our joke. So razzled by his data points there. So you're supposed to wear some shoes, right? >> I know my inner glitch. I was going to wear those sneakers, but I forgot them at the office maybe for the right reasons. But the story behind those florescent sneakers, I see they're focused on my shoes. But I picked those up two years ago at a Next event, and not my style. I took 'em to my office. They've been sitting in my office for the last couple years. >> Who's received shoes like these by the way? I'm sure you guys have received shoes like these. There's some real fans there. >> So again, I'm sure many of you liked them. I had 'em in my office. I've offered it to so many of my engineers. Are you size 11? Do you want these? And they're unclaimed? >> So that's the only feature of Nutanix that you-- >> That's the only thing that hasn't worked, other than that things are going extremely well. >> Good job, man. Thanks a lot. >> Thanks. >> Thanks Vijay. So as we get to the final phase which is obviously as we embark on this multi-cloud journey and the complexity that comes with it which Dheeraj hinted towards in his session. You know we have to take a cautious, thoughtful approach here because we don't want to over set expectations because this will take us five, 10 years to really do a good job like we've done in the first act. And the good news is that the market is also really, really early here. It's just a fact. 
And so we've taken a tiered approach to it as we'll start the discussion with multi-cloud operations, and we've talked about the stack in the prior session which is about look across new clouds. So it's no longer Nutanix, Dell, Lenova, HP, Cisco as the new quote, unquote platforms. It's Nutanix, Xi, GCP, AWS, Azure as the new platforms. That's how we're designing the fabric going forward. On top of that, you obviously have the hybrid OS both on the data plane side and control plane side. Then what you're seeing with the advent of Calm doing a marketplace and automation as well as Beam doing governance and compliance is the fact that you'll see more and more such capabilities of multi-cloud operations burnt into the platform. And example of that is Calm with the new 5.7 release that they had. Launch supports multiple clouds both inside and outside, but the fundamental premise of Calm in the multi-cloud use case is to enable you to choose the right cloud for the right workload. That's the automation part. On the governance part, and this we kind of went through in the last half an hour with Dheeraj and Vijay on stage is something that's even more, if I can call it, you know first order because you get the provisioning and operations second. The first order is to say look whatever my developers have consumed off public cloud, I just need to first get our arm around to make sure that you know what am I spending, am I secure, and then when I get comfortable, then I am able to actually expand on it. And that's the power of Beam. And both Beam and Calm will be the yin and yang for us in our multi-cloud portfolio. And we'll have new products to complement that down the road, right? But along the way, that's the whole private cloud, public cloud. They're the two ends of the barbell, and over time, and we've been working on Xi for awhile, is this conviction that we've built talking to many customers that there needs to be another type of cloud. And this type of a cloud has to feel like a public cloud. It has to be architected like a public cloud, be consumed like a public cloud, but it needs to be an extension of my data center. It should not require any changes to my tooling. It should not require and changes to my operational infrastructure, and it should not require lift and shift, and that's a super hard problem. And this problem is something that a chunk of our R and D team has been burning the midnight wick on for the last year and a half. Because look this is not about taking our current OS which does a good job of scaling and plopping it into a Equinix or a third party data center and calling it a hybrid cloud. This is about rebuilding things in the OS so that we can deliver a true hybrid cloud, but at the same time, give those functionality back on premises so that even if you don't have a hybrid cloud, if you just have your own data centers, you'll still need new services like DR. And if you think about it, what are we doing? We're building a full blown multi-tenant virtual network designed in a modern way. Think about this SDN 2.0 because we have 10 years worth of looking backwards on how GCP has done it, or how Amazon has done it, and now sort of embodying some of that so that we can actually give it as part of this cloud, but do it in a way that's a seamless extension of the data center, and then at the same time, provide new services that have never been delivered before. Everyone obviously does failover and failback in DR it just takes months to do it. 
Our goal is to do it in hours or minutes. But even things such as test. Imagine doing a DR test on demand for you business needs in the middle of the day. And that's the real bar that we've set for Xi that we are working towards in early access later this summer with GA later in the year. And to talk more about this, let me invite some of our core architects working on it, Melina and Rajiv. (rock music) Good to see you guys. >> You're messing up the names again. >> Oh Rajiv, Vinny, same thing, man. >> You need to back up your memory from Xi. >> Yeah, we should. Okay, so what are we going to talk about, Vinny? >> Yeah, exactly. So today we're going to talk about how Xi is pushing the envelope and beyond the state of the art as you were saying in the industry. As part of that, there's a whole bunch of things that we have done starting with taking a private cloud, seamlessly extending it to the public cloud, and then creating a hybrid cloud experience with one-click delight. We're going to show that. We've done a whole bunch of engineering work on making sure the operations and the tooling is identical on both sides. When you graduate from a private cloud to a hybrid cloud environment, you don't want the environments to be different. So we've copied the environment for you with zero manual intervention. And finally, building on top of that, we are delivering DR as a service with unprecedented simplicity with one-click failover, one-click failback. We're going to show you one click test today. So Melina, why don't we start with showing how you go from a private cloud, seamlessly extend it to consume Xi. >> Sounds good, thanks Vinny. Right now, you're looking at my Prism interface for my on premises cluster. In one-click, I'm going to be able to extend that to my Xi cloud services account. I'm doing this using my my Nutanix credential and a password manager. >> Vinny: So here as you notice all the Nutanix customers we have today, we have created an account for them in Xi by default. So you don't have to log in somewhere and create an account. It's there by default. >> Melina: And just like that we've gone ahead and extended my data center. But let's go take a look at the Xi side and log in again with my my Nutanix credentials. We'll see what we have over here. We're going to be able to see two availability zones, one for on premises and one for Xi right here. >> Vinny: Yeah as you see, using a log in account that you already knew mynutanix.com and 30 seconds in, you can see that you have a hybrid cloud view already. You have a private cloud availability zone that's your own Prism central data center view, and then a Xi availability zone. >> Sunil: Got it. >> Melina: Exactly. But of course we want to extend my network connection from on premises to my Xi networks as well. So let's take a look at our options there. We have two ways of doing this. Both are one-click experience. With direct connect, you can create a dedicated network connection between both environments, or VPN you can use a public internet and a VPN service. Let's go ahead and enable VPN in this environment. Here we have two options for how we want to enable our VPN. We can bring our own VPN and connect it, or we will deploy a VPN for you on premises. We'll do the option where we deploy the VPN in one-click. 
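For a sense of what that one-click VPN option is collecting under the covers, here is a hedged sketch of the kind of request a portal or API client might assemble; the field names and structure are illustrative assumptions, not the actual Xi API.

```python
import json

# Hypothetical payload for deploying an on-prem VPN gateway and pairing it
# with a cloud availability zone. Every field name here is illustrative.
vpn_request = {
    "name": "onprem-to-xi",
    "connection_type": "vpn",       # the other choice shown is direct connect
    "deploy_gateway": True,         # the "we deploy a VPN for you" option
    "gateway": {
        "vlan": 120,
        "address_prefix": "10.20.30.0/24",
        "routing": {"protocol": "ebgp", "asn": 65010},
    },
}

# A client would submit this to the cloud side and then poll for connection
# status, gateway health, and bandwidth, as the demo UI shows.
print(json.dumps(vpn_request, indent=2))
```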
>> And this is another small sign or feature that we're building net new as part of Xi, but will be burned into our core Acropolis OS so that we can also be delivering this as a stand alone product for on premises deployment as well, right? So that's one of the other things to note as you guys look at the Xi functionality. The goal is to keep the OS capabilities the same on both sides. So even if I'm building a quote, unquote multi data center cloud, but it's just a private cloud, you'll still get all the benefits of Xi but in house. >> Exactly. And on this second step of the wizard, there's a few inputs around how you want the gateway configured, your VLAN information and routing and protocol configuration details. Let's go ahead and save it. >> Vinny: So right now, you know what's happening is we're taking the private network that our customers have on premises and extending it to a multi-tenant public cloud such that our customers can use their IP addresses, the subnets, and bring their own IP. And that is another step towards making sure the operation and tooling is kept consistent on both sides. >> Melina: Exactly. And just while you guys were talking, the VPN was successfully created on premises. And we can see the details right here. You can track details like the status of the connection, the gateway, as well as bandwidth information right in the same UI. >> Vinny: And networking is just tip of the iceberg of what we've had to work on to make sure that you get a consistent experience on both sides. So Melina, why don't we show some of the other things we've done? >> Melina: Sure, to talk about how we preserve entities from my on-premises to Xi, it's better to use my production environment. And first thing you might notice is the log in screen's a little bit different. But that's because I'm logging in using my ADFS credentials. The first thing we preserved was our users. In production, I'm running AD obviously on-prem. And now we can log in here with the same set of credentials. Let me just refresh this. >> And this is the Active Directory credential that our customers would have. They use it on-premises. And we allow the setting to be set on the Xi cloud services as well, so it's the same set of users that can access both sides. >> Got it. There's always going to be some networking problem onstage. It's meant to happen. >> There you go. >> Just launching it again here. I think it maybe timed out. This is a good sign that we're running on time with this presentation. >> Yeah, yeah, we're running ahead of time. >> Move the demos quicker, then we'll time out. So essentially when you log into Xi, you'll be able to see what are the environment capabilities that we have copied to the Xi environment. So for example, you just saw that the same user is being used to log in. But after the use logs in, you'll be able to see their images, for example, copied to the Xi side. You'll be able to see their policies and categories. You know when you define these policies on premises, you spend a lot of effort and create them. And now when you're extending to the public cloud, you don't want to do it again, right? So we've done a whole lot of syncing mechanisms making sure that the two sides are consistent. >> Got it. And on top of these policies, the next step is to also show capabilities to actually do failover and failback, but also do integrated testing as part of this compatibility. 
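Conceptually, keeping users, images, categories, and policies identical across the two availability zones is a reconciliation problem. A minimal, generic sketch of that idea, under the assumption that each side exposes its entities as a versioned map; this is an illustration, not the actual sync engine.

```python
def sync_entities(on_prem: dict, cloud: dict) -> dict:
    """Copy any entity present on-prem but missing (or stale) in the cloud AZ.

    Both sides are modeled as {entity_id: {"version": int, ...}} maps;
    this illustrates the reconciliation idea, not a real API.
    """
    for entity_id, entity in on_prem.items():
        remote = cloud.get(entity_id)
        if remote is None or remote["version"] < entity["version"]:
            cloud[entity_id] = dict(entity)  # push the newer definition
    return cloud


on_prem_policies = {
    "allow-port-80": {"version": 3, "action": "permit", "port": 80},
    "finance-tier":  {"version": 1, "category": "AppTier:Finance"},
}
xi_policies = {
    "allow-port-80": {"version": 2, "action": "permit", "port": 80},
}

print(sync_entities(on_prem_policies, xi_policies))
```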
>> So one is you know just the basic job of making the environments consistent on two sides, but then it's also now talking about the data part, and that's what DR is about. So if you have a workload running on premises, we can take the data and replicate it using your policies that we've already synced. Once the data is available on the Xi side, at that point, you have to define a run book. And the run book essentially it's a recovery plan. And that says okay I already have the backups of my VMs in case of disaster. I can take my recovery plan and hit you know either failover or maybe a test. And then my application comes up. First of all, you'll talk about the boot order for your VMs to come up. You'll talk about networking mapping. Like when I'm running on-prem, you're using a particular subnet. You have an option of using the same subnet on the Xi side. >> Melina: There you go. >> What happened? >> Sunil: It's finally working.? >> Melina: Yeah. >> Vinny, you can stop talking. (audience clapping) By the way, this is logging into a live Xi data center. We have two regions West Coat, two data centers East Coast, two data centers. So everything that you're seeing is essentially coming off the mainstream Xi profile. >> Vinny: Melina, why don't we show the recovery plan. That's the most interesting piece here. >> Sure. The recovery plan is set up to help you specify how you want to recover your applications in the event of a failover or a test failover. And it specifies all sorts of details like the boot sequence for the VMs as well as network mappings. Some of the network mappings are things like the production network I have running on premises and how it maps to my production network on Xi or the test network to the test network. What's really cool here though is we're actually automatically creating your subnets on Xi from your on premises subnets. All that's part of the recovery plan. While we're on the screen, take a note of the .100 IP address. That's a floating IP address that I have set up to ensure that I'm going to be able to access my three tier web app that I have protected with this plan after a failover. So I'll be able to access it from the public internet really easily from my phone or check that it's all running. >> Right, so given how we make the environment consistent on both sides, now we're able to create a very simple DR experience including failover in one-click, failback. But we're going to show you test now. So Melina, let's talk about test because that's one of the most common operations you would do. Like some of our customers do it every month. But usually it's very hard. So let's see how the experience looks like in what we built. >> Sure. Test and failover are both one-click experiences as you know and come to expect from Nutanix. You can see it's failing over from my primary location to my recovery location. Now what we're doing right now is we're running a series of validation checks because we want to make sure that you have your network configured properly, and there's other configuration details in place for the test to be successful. Looks like the failover was initiated successfully. Now while that failover's happening though, let's make sure that I'm going to be able to access my three tier web app once it fails over. We'll do that by looking at my network policies that I've configured on my test network. Because I want to access the application from the public internet but only port 80. 
And if we look here under our policies, you can see I have port 80 open to permit. So that's good. And if I needed to create a new one, I could in one click. But it looks like we're good to go. Let's go back and check the status of my recovery plan. We click in, and what's really cool here is you can actually see the individual tasks as they're being completed, from that initial validation test to individual VMs being powered on as part of the recovery plan. >> And to give you guys an idea behind the scenes, the entire recovery plan is actually a set of workflows that are built on Calm's automation engine. So this is an example of where we're taking some of the power of the workflow and automation that Calm has come to be really strong at and burning that into how we actually operationalize many of these workflows for Xi. >> And so great, while you were explaining that, my three tier web app has restarted here on Xi right in front of you. And you can see here there's a floating IP that I mentioned earlier, that .100 IP address. But let's go ahead and launch the console and make sure the application started up correctly. >> Vinny: Yeah, so that .100 IP address is a floating IP that's a publicly visible IP. So it's listed here, 206.80.146.100. And essentially anybody in the audience here can use your laptop or your cell phone, hit that, and start to work with it. >> Yeah, so by the way, just to give you guys an idea while you maybe use the IP to hit it, this is a real set of VMs that we've just failed over from Nutanix's corporate data center into our West region. >> And this is running live on the Xi cloud. >> Yeah, you guys should all go and vote. I'm a little biased towards Xi, so vote for Xi. But all of them are really good features. >> Scroll up a little bit. Let's see where Xi is. >> Oh Xi's here. I'll scroll down a little bit, but keep the... >> Vinny: Yes. >> Sunil: You guys written a block or something? >> Melina: Oh good, it looks like Xi's winning. >> Sunil: Okay, great job, Melina. Thank you so much. >> Thank you, Melina. >> Melina: Thanks. >> Thank you, great job. Cool and calm under pressure. That's good. So that was Xi. So what else have we been doing, in addition to taking, say, our own extended enterprise public cloud with Xi? We do recognize that there are a ton of workloads that are going to be residing on AWS, GCP, Azure. And we want to really assist in, call it, the transformation of enterprises to choose the right cloud for the right workload. If you guys remember, we actually invested in a tool over the last year which became one of those products that took off based on a groundswell movement. Most of you started using it. It's essentially Xtract for VMs. And it was this product that's obviously free. It's a tool. But it enables customers to really save tons of time to actually migrate from legacy environments to Nutanix. So we took that same framework, obviously re-platformed it for the multi-cloud world to solve the problem of migrating from AWS or GCP to Nutanix or vice versa. >> Right, so you know, Sunil, as you said, moving from a private cloud to the public cloud is a lift and shift, and it's a hard operation. But moving back is not only expensive, it's a very hard problem. None of the cloud vendors provide change block tracking capability.
And what that means is when you have to move back from the cloud, you have an extended period of downtime because there's no way of figuring out what's changing while you're moving. So you have to keep it down. So what we've done with our app mobility product is we have made sure that, one, it's extremely simple to move back. Two, that the downtime that you'll have is as small as possible. So let me show you what we've done. >> Got it. >> So here is our app mobility capability. As you can see, on the left hand side we have a source environment and target environment. So I'm calling my AWS environment Asgard. And I can add more environments. It's very simple. I can select AWS and then put in my credentials for AWS. It essentially goes and discovers all the VMs that are running and all the regions that they're running in. Target environment, this is my Nutanix environment. I call it Earth. And I can add a target environment similarly, IP address and credentials, and we do the rest. Right, okay. Now migration plans. I have Bifrost 1 as my migration plan, and this is how migration works. First you create a plan and then say start seeding. And what it does is takes a snapshot of what's running in the cloud and starts migrating it to on-prem. Once it is on-prem and the difference between the two sides is minimal, it says I'm ready to cut over. At that time, you move it. But let me show you how you'd create a new migration plan. So let me name it, Bifrost 2. Okay, so what I have to do is select a region, so US West 1, and target Earth as my cluster. This is my storage container there. And very quickly you can see these are the VMs that are running in US West 1 in AWS. I can select SQL server one and two, go to next. Right now it's looking at the target Nutanix environment and seeing whether it has enough space or not. Once that's good, it gives me an option. And this is the step where it enables the Nutanix change block tracking service overlaid on top of the cloud. There are two options. One is automatic, where you'll give us the credentials for your VMs, and we'll inject our capability there. Or you can do it manually: you copy the command into either a Windows VM or a Linux VM and run it once on the VM. And change block tracking is then enabled. Everything is seamless after that. Hit next. >> And while Vinny's setting it up, he said a few things there. I don't know if you guys caught it. One of the hardest problems in enabling seamless migration from public cloud to on-prem, which makes it harder than the other way around, is the fact that public cloud doesn't have things like change block tracking. You can't get delta copies. So one of the core innovations being built in this app mobility product is to provide that overlay capability across multiple clouds.
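What change block tracking boils down to is knowing which blocks changed since the seeding snapshot, so only those have to be shipped at cutover time. A toy sketch of that delta computation, under obvious simplifying assumptions (fixed-size blocks, whole disk images held in memory):

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per tracked block in this toy example


def changed_blocks(previous: bytes, current: bytes):
    """Return the indices of blocks that differ between two disk images."""
    changed = []
    for i in range(0, max(len(previous), len(current)), BLOCK_SIZE):
        old = previous[i:i + BLOCK_SIZE]
        new = current[i:i + BLOCK_SIZE]
        if hashlib.sha256(old).digest() != hashlib.sha256(new).digest():
            changed.append(i // BLOCK_SIZE)
    return changed


base = bytes(BLOCK_SIZE * 4)                 # the seeded copy
live = bytearray(base)
live[BLOCK_SIZE * 2] = 0xFF                  # one block modified since seeding

# Only block 2 needs to be shipped at cutover time.
print(changed_blocks(base, bytes(live)))
```

In practice the tracking happens as writes occur, which is what the injected agent enables; scanning and hashing after the fact, as above, is only for illustration.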
Okay, so we go back to this, and we can hit cutover. So this is essentially telling our system, okay now it the time. Quiesce the VM running in AWS, take the last bit of changes that you have to the database, ship it to on-prem, and in on-prem now start you know configure the target VM and start bringing it up. So let's go and look at AWS and refresh that screen. And you should see, okay so the SQL server is now stopping. So that means it has quiesced and stopping the VM there. If you go back and look at the migration plan that we had, it says it's completed. So it has actually migrated all the data to the on-prem side. Go here on-prem, you see the production SQL server is running already. I can click launch console, and let's see. The Windows VM is already booting up. >> So essentially what Vinny just showed was a live cutover of an AWS VM to Nutanix on-premises. >> Yeah, and what we have done. (audience clapping) So essentially, this is about making two things possible, making it simple to migrate from cloud to on-prem, and making it painless so that the downtime you have is very minimal. >> Got it, great job, Vinny. I won't forget your name again. So last step. So to really talk about this, one of our favorite partners and customers has been in the cloud environment for a long time. And you know Jason who's the CTO of Cyxtera. And he'll introduce who Cyxtera is. Most of you guys are probably either using their assets or not without knowing their you know the new name. But is someone that was in the cloud before it was called cloud as one of the original founders and technologists behind Terremark, and then later as one of the chief architects of VMware's cloud. And then they started this new company about a year or so ago which I'll let Jason talk about. This journey that he's going to talk about is how a partner, slash customer is working with us to deliver net new transformations around the traditional industry of colo. Okay, to talk more about it, Jason, why don't you come up on stage, man? (rock music) Thank you, sir. All right so Cyxtera obviously a lot of people don't know the name. Maybe just give a 10 second summary of why you're so big already. >> Sure, so Cyxtera was formed, as you said, about a year ago through the acquisition of the CenturyLink data centers. >> Sunil: Which includes Savvis and a whole bunch of other assets. >> Yeah, there's a long history of those data centers, but we have all of them now as well as the software companies owned by Medina capital. So we're like the world's biggest startup now. So we have over 50 data centers around the world, about 3,500 customers, and a portfolio of security and analytics software. >> Sunil: Got it, and so you have this strategy of what we're calling revolutionizing colo deliver a cloud based-- >> Yeah so, colo hasn't really changed a lot in the last 20 years. And to be fair, a lot of what happens in data centers has to have a person physically go and do it. But there are some things that we can simplify and automate. So we want to make things more software driven, so that's what we're doing with the Cyxtera extensible data center or CXD. And to do that, we're deploying software defined networks in our facilities and developing automations so customers can go and provision data center services and the network connectivity through a portal or through REST APIs. >> Got it, and what's different now? 
I know there's a whole bunch of benefits with the integrated platform that one would not get in the traditional kind of on demand data center environment. >> Sure. So one of the first services we're launching on CXD is compute on demand, and it's powered by Nutanix. And we had to pick an HCI partner to launch with. And we looked at players in the space. And as you mentioned, there's actually a lot of them, more than I thought. And we had a lot of conversations, did a lot of testing in the lab, and Nutanix really stood out as the best choice. You know Nutanix has a lot of focus on things like ease of deployment. So it's very simple for us to automate deploying compute for customers. So we can use Foundation APIs to go configure the servers, and then we turn those over to the customer, who can then manage them through Prism. And something important to keep in mind here is that this isn't a managed service. This isn't infrastructure as a service. The customer has complete control over the Nutanix platform. So we're turning that over to them. It's connected to their network. They're using their IP addresses, their tools and processes to operate this. So it was really important for the platform we picked to have a really good self-service story for things like lifecycle management. So with one-click upgrade, customers have total control over patches and upgrades. They don't have to call us to do it. They can drive that themselves. >> Got it. Any other final words on where you see the partnership going forward? >> Well you know I think this would be a great platform for Xi, so I think we should probably talk about that. >> Yeah, yeah, we should talk about that separately. Thanks a lot, Jason. >> Thanks. >> All right, man. (audience clapping) So as we look at the full journey now, from invisible infrastructure to invisible clouds, there is one thing to take away beyond the many updates that we've had so far. And the fact is that everything that I've talked about so far is about completing a full blown, true IaaS stack, all the way from compute to storage, to virtualization, containers, to network services, and so forth. But every public cloud, a true cloud in that sense, has a full blown layer of services that sits on top, either for traditional workloads or for new workloads, whether it be machine learning, whether it be big data, you name it, right? And in the enterprise, if you think about it, many of these services are being provisioned or provided through a bunch of our partners. Like we have partnerships with Cloudera for big data and so forth. But based on customer feedback and a lot of what we've seen go on in the industry, just like AWS, and GCP, and Azure, it's time for Nutanix to have an opinionated view of the PaaS stack. It's time for us to move up the stack with our own offering that obviously adds value but also brings some of our core competencies in data and takes them to the next level. And it's in that sense that we're actually launching Nutanix Era to simplify one of the hardest problems in enterprise IT. Short of saving you from Oracle licensing itself, it solves various other Oracle problems by truly simplifying databases, much like what RDS did on AWS. Imagine enterprise RDS on demand, where you can provision and lifecycle manage your database with one click.
And to talk about this powerful new functionality, let me invite Bala and John on stage to give you one final demo. (rock music) Good to see you guys. >> Yep, thank you. >> All right, so we've got lots of folks here. They're all anxious to get to the next level. So this demo, really rock it. So what are we going to talk about? We're going to start with say maybe some database provisioning? Do you want to set it up? >> We have one dream, Sunil, one single dream to pass on to you, and that is: what Nutanix is today for IT apps, we want to recreate that magic for devops and give those weekends and that freedom back to DBAs. >> Got it. Let's start with, what, provisioning? >> Bala: Yep, John. >> Yeah, we're going to get into provisioning. So provisioning databases inside the enterprise is a significant undertaking that usually involves a myriad of resources and could take days. It doesn't get any easier after that for the long-term maintenance, with things like upgrades and environment refreshes and so on. Bala and team have been working on this challenge for quite a while now. So we've architected Nutanix Era to cater to these enterprise use cases and make it one-click like you said. And Bala and I are so excited to finally show this to the world. We think it's actually one of Nutanix's best kept secrets. >> Got it, all right man, let's take a look at it. >> So we're going to be provisioning a sales database today. It's a four-step workflow. The first part is choosing our database engine. And since it's our sales database, we want it to be highly available. So we'll do a two node RAC configuration. From there, it asks us where we want to land this service. We can either land it on an existing service that's already been provisioned, or if we're starting net new or for whatever reason, we can create a new service for it. The key thing here is we're not asking anybody how to do the work, we're asking what work you want done. And the other key thing here is we've architected this concept called profiles. So you tell us how many resources you need as well as what network type you want and what software revision you want. This is actually controlled by the DBAs, the compute administrators, and the network administrators, so they can set their standards up front without a DBA having to be involved in every request. >> Sunil: Got it, okay, let's take a look. >> John: So if we go to the next piece here, it's going to personalize the database. The key thing here, again, is that we're not asking you how many data files you want or anything in that regard. So we're going to be provisioning this to Nutanix's best practices. And the key thing there is, just like these PaaS services, you don't have to read dozens of pages of best practice guides; it just does what's best for the platform. >> Sunil: Got it. And so these are a multitude of provisioning steps that would normally take, I guess, hours if not days to provision an Oracle RAC database. >> John: Yeah, across multiple teams too. So if you think about the lifecycle, especially if you have onshore and offshore resources, I mean this might even be longer than days. >> Sunil: Got it. And then there are a few steps here, and we'll lead into potentially the Time Machine construct too? >> John: Yeah, so since this is a critical database, we want data protection. So we're going to be delivering that through a feature called Time Machines.
We'll leave this at the defaults for now, but the key thing to note here is we've got SLAs that deliver both continuous data protection as well as telescoping checkpoints for historical recovery. >> Sunil: Got it. So that's provisioning. We've kicked off Oracle, what, a two-node database and so forth? >> John: Yep, a two-node database. So we've got a handful of tasks that this is going to automate. We'll check back in in a few minutes. >> Got it. Why don't we talk about the other aspects then, Bala. One of the things, and I know many of you guys have seen this, is that if you look at databases, especially Oracle but in general even SQL and so forth, if you really simplified it for a developer, it should be as simple as: I copy my production database, and I paste it to create my own dev instance. And whenever I need to, I obviously do it the opposite way, right? So that was the goal we set for ourselves to actually deliver this new PaaS service around Era for our customers. So you want to talk a little bit more about it? >> Sure, Sunil. If you look at most of the data management functionality, it's pretty much flavors of copy-paste operations on database entities. But the trouble is that the seemingly simple, innocuous operations of our daily lives become the most dreaded, complex, long-running, error-prone operations in the data center. So we set out to tame this complexity and bring consumer-grade simplicity to these operations, and also make these clones extremely efficient without compromising quality of service. And the best part is, customers can enjoy these services not only for databases running on Nutanix, but also for databases running on third-party systems. >> Got it. So let's take a look at this functionality of, I guess, snapshotting, clone, and recovery that you've now built into the product. >> Right. So the core feature of this whole product is something we call Time Machine. Time Machine lets database administrators capture the database state to the granularity of seconds, and also lets them create clones, refresh them to any point in time, and also recover the databases if the databases are running on the same Nutanix platform. Let's take a look at the demo with the Time Machine. So here is our customer relationship management database, which is about 2.3 terabytes. If you see, the Time Machine has been active for about four months, and the SLA has been set for continuous data protection of 30 days, and then it slowly tapers off to 30 days of daily backups and weekly backups and so on, so forth. On the right hand side, you will see different colors. The green color is pretty much your continuous data protection, as we call it. That lets you go back to any point in time, to the granularity of seconds, within those 30 days. And then the discrete snapshots let you go back to any backup snapshot that is maintained there. In a way, you see, this Time Machine is pretty much like your modern-day car with self-driving ability. All you need to do is set the goals, and the Time Machine will do whatever is needed to reach that goal. >> Sunil: So why don't we quickly do a snapshot? >> Bala: Yeah, sometimes you need to create a snapshot for backup purposes, and Time Machine has manual controls for that. All you need to do is give it a snapshot name.
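As a small illustration of the tapering SLA Bala describes, the sketch below models roughly 30 days of continuous, second-level recovery, then daily and weekly snapshots further back. The field names and the "gold" policy are hypothetical; this mirrors the concept on stage, not the actual Era SLA schema.

```python
# Toy model of a telescoping retention policy: continuous recovery for the
# most recent window, discrete daily/weekly snapshots for older points.
# Names and values are illustrative assumptions.
gold_sla = {
    "name": "gold",
    "continuous_retention_days": 30,   # point-in-time recovery, second granularity
    "daily_retention_days": 30,        # one discrete snapshot per day after that
    "weekly_retention_weeks": 12,      # then weekly snapshots
}


def recovery_granularity(age_days: int, sla: dict) -> str:
    """Return what kind of recovery is available for a point 'age_days' ago."""
    if age_days <= sla["continuous_retention_days"]:
        return "any second (continuous log capture)"
    if age_days <= sla["continuous_retention_days"] + sla["daily_retention_days"]:
        return "nearest daily snapshot"
    if age_days <= (sla["continuous_retention_days"]
                    + sla["daily_retention_days"]
                    + 7 * sla["weekly_retention_weeks"]):
        return "nearest weekly snapshot"
    return "outside retention"


print(recovery_granularity(12, gold_sla))   # -> any second (continuous log capture)
print(recovery_granularity(45, gold_sla))   # -> nearest daily snapshot
```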
And then you have the ability to actually persist this snapshot data into a third-party or object store so that your durability and global data access requirements are met. So we kick off a snapshot operation. Let's look at what it is doing. If you look at what the snapshot operation is going through, there is a step called quiescing the database. Basically, we're using application-centric APIs, and here it's actually Oracle RMAN. We are using Oracle RMAN to quiesce the database and perform application-consistent storage snapshots with Nutanix technology. Basically, we are fusing the application-centric APIs with the Nutanix platform to quiesce it. Just as a data point, if you had to use traditional technology to create a backup for a database of this size, it would take on the order of four to six hours, whereas on Nutanix it's going to be a matter of seconds. So it almost looks like the snapshot is done. This is a fully consistent backup. You can pretty much use it for database restore. Maybe we'll do a clone demo and see how it goes. >> John: Yeah, let's go check it out. >> Bala: So for clone, again with that copy-paste simplicity, all you need to do is pick the time of your choice, maybe around three o'clock in the morning today. >> John: Yeah, let's go with 3:02. >> Bala: 3:02, okay. >> John: Yeah, why not? >> Bala: You select the time, and all you need to do is click on the clone. Most of the inputs that are needed for the clone process will be defaulted intelligently by us, right? And you have to make two choices: do you want this clone to be created on a brand new database server VM, or do you want to place it on your existing server? So we'll go with a brand new server, and then all you need to do is just give the password for your new clone database, and then clone it. >> Sunil: And this is an example of personalizing the database, so a developer can do that. >> Bala: Right. So here is the clone kicking in. And what this is doing is actually creating a database VM, then registering the database, restoring the snapshot, and then recovering the logs up to three o'clock in the morning like we just selected, and then actually giving the database back to the requester. >> Maybe one final thing, John. Do you want to show us the provisioned database that we kicked off? >> Yeah, it looks like it just finished a few seconds ago. So you can see all the tasks that we were talking about before, from creating the virtual infrastructure, to provisioning the database infrastructure, to configuring data protection. So I can go access this database now. >> Again, just to highlight this, guys. What we just showed you is an Oracle two-node instance provisioned live in a few minutes on Nutanix. And this is something that even in a public cloud, when you go to RDS on AWS or anything like that, you still can't provision Oracle RAC, by the way, right? But that's what you've seen now, and that's what the power of Nutanix Era is. Okay, all right? >> Thank you. >> Thanks. (audience clapping) >> And one final thing: obviously, when we were building this, it was built as a PaaS service. It's not meant just for operational benefits. And so one of the core design principles has been around being API-first. You want to show that a little bit? >> Absolutely, Sunil, this whole product is built on an API-first architecture.
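To illustrate that API-first point, a point-in-time clone like the 3:02 a.m. example could, in principle, be driven entirely by a single REST call. The endpoint and fields below are hypothetical placeholders, not the documented Era API; they simply restate what the demo does in request form.

```python
# Hypothetical sketch of the clone-to-a-point-in-time operation shown on stage.
import requests

clone_request = {
    "source_database": "sales_db",
    "point_in_time": "2018-05-09T03:02:00",  # any second inside the continuous window
    "target": {"new_vm": True, "vm_name": "sales-dev-clone-01"},
    "db_password": "********",               # personalization for the developer
}

resp = requests.post(
    "https://era.example.local/api/v1/clones",  # hypothetical endpoint
    json=clone_request,
    auth=("dev-user", "secret"),
    timeout=30,
)
resp.raise_for_status()
# Behind the scenes this would create the DB VM, restore the snapshot, and
# roll the logs forward to 03:02 before handing the clone to the requester.
print("clone operation:", resp.json().get("operation_id"))
```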
Pretty much everything we have seen today, all the functionality that we've been able to show, is built on REST APIs, and you can pretty much integrate it with something like ServiceNow and give your customers that DevOps experience. We do have a plan for a full-fledged self-service portal eventually, and to make it a proper service. >> Got it, great job, Bala. >> Thank you. >> Thanks, John. Good stuff, man. >> Thanks. >> All right. (audience clapping) So with Nutanix Era being this one-click provisioning and lifecycle management powered by APIs, I think what you're going to see is that while a lot of the products we've talked about so far, things like Calm, Flow, and the AHV functionality, have all been released in 5.5 and 5.6, a bunch of the other stuff is also coming shortly. So I would strongly encourage you guys to go check them out; most of these products we've talked about, in fact all of the products we've talked about, are going to be in the breakout sessions. We're going to go deep into them in the demos as well as in the pods. So spend some quality time not just on the stuff that's been shipping but also on the stuff that's coming out. And so one takeaway to keep in mind is that we're doing all of this obviously with freedom as the goal. But from the product side, it has to be driven by choice, whether that choice is based on platforms, on hypervisors, or on consumption models; and even though we're starting with the management plane, eventually we'll get to the data plane and how to actually provide a multi-cloud choice as well. And so as we wrap things up and look at the five freedoms that Ben talked about, don't forget the sixth freedom, especially after six to seven p.m., where the whole goal, as a Nutanix family and extended family, is to make sure we mix it up. Okay, thank you so much, and we'll see you around. (audience clapping) >> PA Announcer: Ladies and gentlemen, this concludes our morning keynote session. Breakouts will begin in 15 minutes. ♪ To do what I want ♪

Published Date : May 9 2018
