

Ben Amor, Palantir, and Sam Michael, NCATS | AWS PS Partner Awards 2021


 

>>Mhm. Hello, and welcome to theCUBE's coverage of the AWS, Amazon Web Services, Global Public Sector Partner Awards program. I'm John Furrier, your host of theCUBE. Here we're going to talk about the best COVID solution with two great guests: Ben Amor, healthcare and life sciences lead at Palantir. Ben, welcome to theCUBE. And Sam Michael, Director of Automation and Compound Management at NCATS, the National Center for Advancing Translational Sciences, part of the NIH, the National Institutes of Health. Gentlemen, thank you for coming on, and congratulations on the best COVID solution. >>Thank you so much, John. >>So I've got to ask you, the best solution is: when can I get the vaccine? How fast, and how long is it going to last? But I really appreciate you guys coming on. >>I hope you're vaccinated. I would say, John, that's outside of our hands. If you've not gotten vaccinated, go get vaccinated right now, have someone stab you in the arm, do not wait. That's not on us. >>I appreciate that, and I have that done. But I've got to get on a plane, and there are all kinds of hoops to jump through. We need a better solution anyway. You guys have a great technical one, so I want to dig in. All kidding aside, you've put together a killer solution that really requires a lot of data. Let's step back and first talk about the solution that won the award. Ben, can you quickly set the table for what we're talking about? >>So the National COVID Cohort Collaborative, N3C, is a secure data enclave bringing together the EHR records from more than 60 different academic medical centers across the country, and making them available to researchers to ask many and varied questions to try to understand this disease better. >>Sam, take us through the challenges here. What was going on? What was the hard problem?
Obviously everyone had a situation with COVID, and cloud adoption drove a lot of the response, and Amazon is part of the awards, but you guys were solving something specific. What was the problem statement you were going after? What happened? >>I think the problem statement is essentially that the nation has the electronic health records, but they're very fragmented. As Ben highlighted, there are multiple systems around the country, thousands of organizations that have EHRs, but there was no way, from a research perspective, to actually have access in any unified location. So really what we were looking for is: how can we provide a centralized location to study electronic health records, but in a federated sense, because we recognize the data exists in other locations? We had to figure out, for a vast quantity of data, how to get data from those 60-plus sites Ben is referencing, from their respective locations, into one central repository, and also into a common format. That was another huge aspect of the technical challenge: there are multiple formats for electronic health records, different standards, different versions. How do you actually get all of this data harmonized into something that is usable for research? >>There are so many things jumping into my head right now; I want to unpack them one at a time. When COVID hit, the scramble and the imperative for getting answers quickly were huge. So it's a data problem at massive scale, with a public health impact. As we were saying before we came on camera, public health records are dirty; they're not clean, and a lot of things are weird. Just a massive number of strange problems. How did you guys pull this together? Take me through how this gets done. Did you just get together and say, let's do this? How does it all happen?
>>Yeah, it's a great question, John. Part of this actually started several years ago. I explain this when people ask about N3C: NCATS supports a program called the Clinical and Translational Science Award, the CTSA program, which is the largest single grant program in all of NIH, and it constitutes the bulk of the NCATS budget. These are extramural grants that go all over the country, and we wanted this group to have a common research environment. So we try to create what we call secure scientific collaborative platforms. Another example of this is what we call the Rare Diseases Clinical Research Network, which again is a consortium of 20 different sites around the nation. We really started working on this several years ago: if we want to build an environment that's collaborative for researchers around the country and around the world, the natural place to do that is with a cloud-first strategy. And we recognized this at NCATS. We're about 600 people now, but if you look at the size of our actual research community, with our grantees, we're in the thousands. So the perspective we took several years ago was to step back, and if we want a comprehensive and cohesive solution, to treat this as a mid-sized business, which means we have to treat it as a cloud-based enterprise. NCATS several years ago went down this strategy of bringing in different commercial partners, one of which is Palantir. It actually started with our intramural research program, with obviously very heavy cloud use with AWS; we use Google Workspace, essentially different cloud tools, to enable our collaborative researchers. The next step: if we want to have an environment, we have to have access.
And this is something we took early steps on years prior: there's no point building an environment if people can't get in the front door. So we invested heavily in creating an application we call our federated authentication system, Unified NCATS Auth, or UNA for short. It's an open source, in-house project that we built at NCATS, and we wanted to use it for all sorts of implementations, with acting as the front door to this collaborative environment being one of them. And then there was this interest in electronic health records that existed prior to the COVID pandemic; we had done prior work, via a mixture of internal investments and grants with collaborative partners, to look at what it would take to harmonize this data at scale. Then, like you mentioned, COVID hit, and it hit really hard; everyone was scrambling for answers. We had a number of these pieces in play, and that's when we turned to Ben and the team at Palantir and said: we have these components, we have these pieces, and what we really need is something independent that we can stand up quickly to address some of these problems, one of the biggest being that data ingestion and harmonization step. I'll let Ben really speak to that one. >>Yeah, Ben, I'd love for you to, because you're solving a lot of collaboration problems, not just the technical problem. Ingestion most people can understand; that's the data warehousing, the database piece. Take us through harmonization, because, not to throw a little shade, most people think of these kinds of research nonprofits as slow-moving: standing things up takes time, and by the time you're done, it's over. This was agile. So take us through what made it agile, because that's not normal.
I mean, that's not what you see normally. It's like, hey, we'll see you next year when we stand that up at the data center. >>Yeah. So, as Sam described, this question of data interoperability is a really essential problem for working with this kind of data. We have data coming from more than 60 different sites, and one of the reasons we were able to move quickly was that rather than saying, well, you have to provide the data in a certain format, a certain standard, N3C was able to say: actually, just give us the data how you have it, in whatever format is easiest for you, and we will take care of the process of transforming it into a single standard data model, converting all of the medical vocabularies, and doing all of the data quality assessment needed to ensure that data is actually ready for research. That was very much a collaborative endeavor. It was run out of a team based at Johns Hopkins University, but in collaboration with a broad range of researchers who were all adding their expertise. What we were able to do was provide the technical infrastructure for taking the transformation pipelines being developed, the actual logic and the code, and building very robust, centralized templates for them that could be deployed just like software is deployed, with change management, upgrades and downgrades, version control, and change logs, so we could roll that out across a large number of sites in a very robust way, very quickly. So that's one aspect of it. And then there were a bunch of really interesting challenges along the way that, again, a very broad collaborative team of researchers worked on, and an example of that would be unit harmonization and inference. Really simple things: when a lab result arrives, and we talked about data quality, you'd expect it to have a unit, right?
Like if you're reporting somebody's weight, you probably want to know if it's in kilograms or pounds. But we found that a very significant proportion of the time, the unit was actually missing in the EHR record, and unless you can get that back, the value is useless. So an approach was developed: because we had data across 60 or more different sites, you have a large number of lab tests that do have the correct units, and you can look at the data distributions and decide how likely it is that a missing unit is actually kilograms or pounds, and save a huge portion of these labs. That's just an example of something that has enabled research to happen that would not otherwise have been possible. >>Not to dig in and rat-hole on that one point, but what time savings do you think that delivers? I can imagine it's on the data cleaning side. That's just a massive time savings: okay, based on the data sampling, this is kilograms or pounds. >>Exactly. We're talking about more than 3.5 billion lab records in this database now. If you were trying to do this manually, it would take thousands of years; it just wouldn't happen. >>It would be a black hole in the dataset, essentially, because there's no way it would get done. Okay. Sam, take me through this from a research standpoint: this normalization, this harmonization process. What does it enable for the research, and who decides the standard format? Because again, I'm just thinking in my mind how hard this is. What was decided? Was it just on the base records? What standards were used? And what's the impact for researchers
You know, And so I think there's a couple of things you mentioned with this, johN is the way we execute this is, it was very nimble, it was very agile and there's something to be said on that piece from a procurement perspective, the government had many covid authorities that were granted to make very fast decisions to get things procured quickly. And we were able to turn this around with our acquisition shop, which we would otherwise, you know, be dead in the water like you said, wait a year ago through a normal acquisition process, which can take time, but that's only one half the other half. And really, you're touching on this and Ben is touching on this is when he mentions the research as we have this entire courts entire, you know, research community numbering in the thousands from a volunteer perspective. I think it's really fascinating. This is a really a great example to me of this public private partnership between the companies we use, but also the academic participants that are actually make up the community. Um again, who the amount of time they have dedicated on this is just incredible. So, so really, what's also been established with this is core governance. And so, you know, you think from assistance perspective is, you know, the Palin tear this environment, the N three C environment belongs to the government, but the N 33 the entire actually, you know, program, I would say, belongs to the community. We have co governance on this. So who decides really is just a mixture between the folks on End Cats, but not just end cast as folks at End Cats, folks that, you know, and I proper, but also folks and other government agencies, but also the, the academic communities and entire these mixed governance teams that actually set the stage for all of this. And again, you know, who's gonna decide the standard, We decide we're gonna do this in Oman 5.3 point one um is the standard we're going to utilize. 
And then once the data is there, this is what gets exciting: you have the different domain teams, where they can ask different research questions depending on what's of scientific interest to them. Really, we viewed this from the government's perspective as: how do we build a secure platform where we can enable the research without dictating the research? The one criterion we did put in is that your research has to be COVID-focused, because this was very clearly in response to COVID. And then we have data use agreements and data use requests; we have entire governance committees that decide whether research is in scope. But we don't want to dictate the research types that the domain teams are bringing to the table. >>And think about the National Institutes of Health: their mission is to serve the public health. This is a great example of how, when you enable data to be surfaced and available, you can really allow people to be empowered. Not to use the cliché "citizen analysts," but in a way that's what the community is doing. You're allowing everyone from volunteers to academics to students to be part of the research. That's citizen analysis: you've got citizen journalism, you've got citizen research, a lot of democratization happening here. Is that part of the design, or a result of it? >>It's a great question, and I think it's both, really by design. Again, we want to enable this, and there are a couple of things we push for at NCATS, and I think NIH is going in this direction too: we believe firmly in open science, we believe firmly in open standards, and in how we can enable those standards to promote open science, because it's actually nontrivial.
We've had citizen scientists raise tricky problems from a governance perspective, and we had the case of students who wanted access to the environment; they have to come in with an institution. But we've crossed some of those bridges to actually get students and researchers into this environment, very much by design, and also in the spirit enabled by the community. Again, I think those go hand in hand. >>I'm all in for open science. It's a huge wave, I'm a big fan, and I think it has a lot of headroom, because look at what open source has done to the software industry. It's amazing. And I think your federated idea comes in here. Ben, can you talk through the federated piece? I think that might remove some of the structural blockers that are out there, the "you've got to be affiliated with this" or "a friend has to invite you." But then you've got privacy and access, and federated ID is not an easy thing; it's easy to say. How do you tie that together? You want to enable a frictionless ability to come in and contribute, and at the same time have some policies around who's in and who's not. >>Yes, totally. So Sam already described the UNA system, the authentication system that NCATS has developed, and from our perspective we integrate with it using all of the standard authentication protocols. It's very easy to integrate into the Foundry platform and make it so we can authenticate people correctly. But if you go beyond authentication, you then also need the access controls in place to say: yes, I know who this person is, but now what should they actually be able to see?
And I think one of the really great things N3C has done is to be very rigorous about that. They have governance rules that say you should be using the data for a certain purpose; you must go through a procedure so that the access committee approves that purpose; and then we need to make sure you're actually doing the work you said you would. So before you can get your results back out of the system, you actually have to show that those results are in line with the original stated purpose. The infrastructure around that, having the access controls and the governance processes all working together in a seamless way so that it doesn't, as you say, increase the friction on the researcher, and they can get access to the data for an appropriate purpose, was a big component of what we've been building out with N3C. Absolutely. >>And really in line, John, with what NIH is doing with the Researcher Auth Service, which they call RAS. Those are standards we believe in, that we're starting to follow and work closely with them on. Multifactor authentication, because of the point Ben is making and you raised as well: one, you need to authenticate, okay, you are who you say you are, and we recognize that. And then there's the authorization piece: what are you authorized to see? What do you have authorization to do? They go hand in hand, and again, these are nontrivial problems. Typically a lot of what we're using is direct integrations with our packages; we use InCommon for federated access, and we're even using Login.gov, because we need to make sure people have a means of getting in. Login.gov is essentially the fallback, right?
If they don't have an organization that's in InCommon for federated access, they can generate a Login.gov account, but they're still beholden to the multifactor authentication step, and they still have to get the same authorizations. We really do believe seamless access to these environments is absolutely critical: knowing who our users are, but again, not making it restrictive, not making it a friction-filled process. >>That's nontrivial; I totally agree with you. If you think about it, in a classic enterprise this is an IT problem, like bring-your-own-device to work, which is basically what the whole world does these days. You're thinking about access where you don't know who's coming in or where they're coming from, and the churn is so high. You have to be prepared to provision and provide resources at a very lightweight access edge. >>That's right. And that's why it gets back to what we mentioned: we were taking a step back and thinking about this problem, and N3C became the use case, as an enterprise IT problem. We have users from around the world who want to access this environment, and we're trying to hit a really difficult mark: secure but collaborative. That's not easy. And the only place this environment could exist is in a cloud-based environment; let's be real. Ten years ago? Forget it; it would have been very difficult. Now it's just incredible how much things have advanced, so that these real virtual research organizations can start to exist and become real partnerships. >>Well, that's a great point.
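The authenticate-then-authorize gate Sam and Ben walk through above can be sketched in miniature. Everything here is hypothetical: the user records, the data-use-request table, and the function names are invented to show the shape of the two-step check, not the actual UNA, RAS, or N3C implementation.

```python
# Minimal sketch of the two-step gate an enclave enforces:
# 1) authentication: is this a known, MFA-verified identity?
# 2) authorization: does an approved data-use request cover this project?
# All records below are hypothetical examples.

AUTHENTICATED_USERS = {
    "alice@university.edu": {"mfa_verified": True},
    "bob@university.edu": {"mfa_verified": False},  # never completed MFA
}

# Approved data-use requests: user -> project ids the governance
# committee has signed off on.
APPROVED_DUR = {
    "alice@university.edu": {"covid-severity-study"},
}

def can_access(user, project):
    ident = AUTHENTICATED_USERS.get(user)
    if ident is None or not ident["mfa_verified"]:
        return False  # authentication failed
    # Authentication passed; now check authorization.
    return project in APPROVED_DUR.get(user, set())

print(can_access("alice@university.edu", "covid-severity-study"))  # True
print(can_access("alice@university.edu", "unrelated-project"))     # False
print(can_access("mallory@example.com", "covid-severity-study"))   # False
```

The point of the split is exactly what Ben said: knowing who someone is (authentication) is a separate question from what they should be able to see (authorization), and both gates have to pass.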
I want to highlight and call that out, because I've done a lot of these interviews with awards programs over the years, certainly in public sector and open source. One of the things open source allows is code reuse. And in these situations, you have a crisis like COVID, or other things happen; nonprofits are the same way, they lose their funding and all the code disappears. Same with COVID: when it's over, you don't want to lose the momentum. This whole idea of reuse, of re-platforming and refactoring, is two concepts the cloud enables. Sam, I'd love to get your thoughts on this, because it doesn't go away when COVID's over; research still continues. This idea of re-platforming and then refactoring is very much a new concept versus the old days of: okay, the project's over, move on to the next one.
You know, again, we just went through this with Covid, but what's gonna come next? You know, one of the research questions that we need to answer, but also open source is an incredibly important piece of this. I think Ben can speak this in a second, all the harmonization work, all that effort, you know, essentially this massive, complex GTL process Is in the N three Seagate hub. So we believe, you know, completely and the open source model a little bit of a flavor on it too though, because, you know, again, back to the sustainability, john, I believe, you know, there's a room for this, this marriage between commercial platforms and open source software and we need both. You know, as we're strong proponents of N cats are both, but especially with sustainability, especially I think Enterprise I. T. You know, you have to have professional grade products that was part of, I would say an experiment we ran out and cast our thought was we can fund academic groups and we can have them do open source projects and you'll get some decent results. But I think the nature of it and the nature of these environments become so complex. The experiment we're taking is we're going to provide commercial grade tools For the academic community and the researchers and let them use them and see how they can be enabled and actually focus on research questions. And I think, you know, N3C, which we've been very successful with that model while still really adhering to the open source spirit and >>principles as an amazing story, congratulated, you know what? That's so awesome because that's the future. And I think you're onto something huge. Great point, Ben, you want to chime in on this whole sustainability because the public private partnership idea is the now the new model innovation formula is about open and collaborative. What's your thoughts? >>Absolutely. And I mean, we uh, volunteer have been huge proponents of reproducibility and openness, um in analyses and in science. 
And so everything done within the family platform is done in open source languages like python and R. And sequel, um and is exposed via open A. P. I. S and through get repository. So that as SaM says, we've we've pushed all of that E. T. L. Code that was developed within the platform out to the cats get hub. Um and the analysis code itself being written in those various different languages can also sort of easily be pulled out um and made available for other researchers in the future. And I think what we've also seen is that within the data enclave there's been an enormous amount of re use across the different research projects. And so actually having that security in place and making it secure so that people can actually start to share with each other securely as well. And and and be very clear that although I'm sharing this, it's still within the range of the government's requirements has meant that the, the research has really been accelerated because people have been able to build and stand on the shoulders of what earlier projects have done. >>Okay. Ben. Great stuff. 1000 researchers. Open source code and get a job. Where do I sign up? I want to get involved. This is amazing. Like it sounds like a great party. >>We'll send you a link if you do a search on on N three C, you know, do do a search on that and you'll actually will come up with a website hosted by the academic side and I'll show you all the information of how you can actually connect and john you're welcome to come in. Billion by all means >>billions of rows of data being solved. Great tech he's working on again. This is a great example of large scale the modern era of solving problems is here. It's out in the open, Open Science. Sam. Congratulations on your great success. Ben Award winners. You guys doing a great job. Great story. Thanks for sharing here with us in the queue. Appreciate it. >>Thank you, john. >>Thanks for having us. >>Okay. It is. 
This has been theCUBE's coverage of the AWS Global Public Sector Partner Awards, best COVID solution: Palantir and NCATS. Great solution, great story. I'm John Furrier with theCUBE. Thanks for watching.

Published Date : Jun 30 2021



Dr. Chelle Gentemann, Farallon Institute | AWS Public Sector Online


 

>>Narrator: From around the globe, it's theCUBE, with digital coverage of AWS Public Sector Online. Brought to you by Amazon Web Services. >>Welcome back to the coverage of AWS Public Sector Summit Virtual. I'm John Furrier, host of theCUBE. We're here in theCUBE studios, quarantine crew here, talking to all our guests remotely as part of our virtual coverage of AWS Public Sector. I've got a great guest here talking about data science, weather prediction, and accurate climate modeling, really digging into how the cloud is helping science. Dr. Chelle Gentemann, a senior scientist at the Farallon Institute, is my guest. Chelle, thank you for joining me. >>Thank you. >>So tell us a little about your research. It's fascinating. I've always joked in a lot of my interviews that 10, 15, 20 years ago you needed supercomputers to do all these calculations, but now cloud computing opens up so much more on the research side, and the impact is significant. You're at an awesome institute, the Farallon Institute, doing a lot of work in the sea and the ocean. What's your focus? >>I study the ocean from space. About 71% of the Earth is covered by ocean, and 40% of the global population actually lives within 100 kilometers of the coast. The ocean influences our weather, it influences climate, and it also provides fisheries and recreational opportunities for people. So it's a really important part of the Earth system, and I've been focused on using satellites, so from space, to try to understand how the ocean influences weather and climate. >>And how new is this in terms of the state of the art? Fairly new, or has it been around for a while? What's some of the progress in the state of the art you're involved in? >>I started working on satellite data in the 90s, during school, and I liked satellite data because it's at the interface of applied math, computer science, and physics.
The state of the art is that we've really had remote sensing around for about 20 or 30 years. But things are changing, because right now we're putting more sensors and different types of instruments up there, and trying to combine that data is really challenging. To use it, our brain is really good in two and three dimensions, but once you get past that, it's really difficult for the human brain to try and interpret the data. And that's what scientists do: they try to take all these multidimensional data sets and build some understanding of the physics of what's going on. And what's really interesting is how cloud computing is impacting that. >> It sounds so exciting. The confluence of multiple disciplines kind of all right there, kind of geek out big time. So I've got to ask you, in the past you had the public data set program. Are you involved in that? Do you take advantage of that in research? How do some of the things that AWS is doing help you, and is that public data set program part of it? >> It's a big part of it now. I've helped to deploy some of the ocean temperature data sets on the cloud. And the way that AWS public data sets sort of have the potential to transform science is this: the way that I was trained in science was that you would go and download the data. And at most of these big institutions that do research, you start to create these dark repositories, where the institution or someone in your group has downloaded data sets. And then you're trying to do science with these data, but you're not sure if it's the most recent version. It makes it really hard to do reproducible science, because if you want to share your code, somebody also has to access that data and download it. And these are really big data sets, so downloading them could take quite a long time. It's not very transparent, it's not very open. So when you move to a public data set program like AWS's, you just take all of that download out of the equation.
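The approach she describes, bringing a small computation to a large shared data set instead of downloading everything first, can be sketched in a few lines. This is a hedged illustration on synthetic data: the grid, the temperature values, and the variable names are all invented, and a real workflow would lazily open a public data set on S3 (for example with xarray) rather than construct the array locally.

```python
import numpy as np

# Synthetic stand-in for a gridded sea-surface-temperature field on a
# 1-degree grid (values are invented for illustration only).
lat = np.arange(-89.5, 90.0, 1.0)          # 180 latitudes
lon = np.arange(0.5, 360.0, 1.0)           # 360 longitudes
sst = 28.0 * np.cos(np.radians(lat))[:, None] * np.ones((1, lon.size))

# Grid cells shrink toward the poles, so a plain mean over-weights high
# latitudes; weight each latitude row by cos(latitude) instead.
weights = np.cos(np.radians(lat))[:, None]
global_mean = float((sst * weights).sum() / (weights.sum() * lon.size))
```

Against a real cloud-hosted data set, only the slices the computation touches ever move over the network, which is the whole point of bringing the compute to the data.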
And instantly when I share my code now, people can run the code and just build on it and go right from there, or they can add to it or suggest changes. That's a really big advantage for trying to do open science. >> I had a dinner with Teresa Carlson who is awesome. She runs the Public Sector Summit for AWS. And I remember this was years ago and we were dreaming about a future where we would have national parks in the cloud or this concept of a Yosemite-like beautiful treasure. Physical place you could go there. And we were kind of dreaming that, wouldn't it be great to have like these data sets or supercomputer public commons. It sounds like that's kind of the vibe here where it's shareable and it's almost like a digital national park or something. Is that it's a shared resource. Is that kind of happening? First of all, what do you react to that? And what's your thoughts around that dream? And does this kind of tie to that? >> Yeah, I think it ties directly to that. When I think about how science is still being done and has been done for the past sort of 20 years, we had a real change about 20 years ago when a lot of the government agencies started requiring their data to be public. And that was a big change. So then we got, we actually had public data sets to work with. So more people started getting involved in science. Now I see it as sort of this fortress of data that in some ways have prevented scientists from really moving rapidly forward. But with moving onto the cloud and bringing your ideas and your compute to the data set, it opens up this entire Pandora's box, this beautiful world of how you can do science. You're no longer restricted to what you have downloaded or what you're able to do because you have this unlimited compute. You don't have to be at a big institution with massive supercomputers. I've been running hundreds of workers analyzing in my realm. Over two or 300 gigabytes of data on a $36 Raspberry Pi that I was playing around with my kids. 
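The "hundreds of workers" she mentions boil down to a split-apply-combine pattern: chop a large data set into chunks, analyze each chunk independently, then combine the partial results. Below is a minimal, self-contained sketch of that pattern on synthetic chunks; in practice a library such as Dask handles the chunking and the scheduling across workers, and the chunk contents here are made up.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def chunk_mean(seed: int) -> float:
    # Stand-in for analyzing one chunk (one file or one time slice) of a
    # much larger data set; here each chunk is synthetic random data.
    rng = np.random.default_rng(seed)
    return float(rng.normal(loc=15.0, scale=2.0, size=10_000).mean())

# Fan the per-chunk work out to a pool of workers, then combine.
with ThreadPoolExecutor(max_workers=8) as pool:
    partial_means = list(pool.map(chunk_mean, range(16)))
overall = sum(partial_means) / len(partial_means)
```

Because every chunk here is the same size, the mean of the per-chunk means equals the overall mean; unequal chunks would need to be weighted by their lengths.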
That's transformative. That allows anyone to access data. >> And if you think about what it would have taken to do that in the old days: first you'd get the cash, buy servers, rack them and stack them, and then face a download nightmare. So I've got to ask you now, with all this capability, first of all, you're talking to someone who loves the cloud, so I'm pretty biased. What are you doing now with the cloud that you couldn't do before? Certainly the old way from a provisioning standpoint, check, done. Innovation, bar's raised. Now you're creative, you're looking at solutions, you're building on an enabling device like a Raspberry Pi, almost like a switch or an initiation point. How has the creativity changed? What can you do now? What are some of the things that are possible that you're doing? >> I think you can point to some of the data sets that have already gone on the cloud being used in really new, different ways. Again, it points to this: when you don't have access to the data, it's often simply because you have to download it. Downloading the data, figuring out how to use it and figuring out how to store it is a big barrier for people. But when something like the HF Radar data set went online, within a couple of months there was a paper where people were using it to monitor bird migration in ways they'd never been able to do before, because they simply hadn't been able to get the data. There's other research being done where they've put whale recordings on the cloud and they're using AI to actually identify different whales. That's using one data set, but it's also the ability to combine all these different data sets and have access to them at the same time, and not be limited by your computer anymore. For a lot of science, we've been limited by our access to compute.
And when you take that away, it opens all these new doors into doing different types of research with new types of data. >> You could probably correlate the whale sounds with the temperature and probably say, hey, it's cold. >> Chelle: Exactly. >> I'm making that up. But that's the kind of thing that wouldn't be possible before, because you'd have to get the data set, do some math. I mean, this is cool stuff with the ocean. I mean, can you just take a minute to give people an insight into some of the cool projects that are being either thought up or dreamed up or initiated or done or in process or in flight? Because actually there's so much data in the ocean. So many things to do, it's very dynamic. There's a lot of data obviously. Share, for the folks that might not have knowledge of what goes on. What are you guys thinking about? >> A lot of what we're thinking about is how to have societal impact. So as a scientist, you want your work to be relevant. And one of the things that we found is that the ocean really impacts weather at scales that we simply can't measure right now. So we're really trying to push forward with space instrumentation so that we can monitor the ocean in new ways at new resolutions. And the reason that we want to do that is because the ocean impacts long-term predictability in the weather forecast. So a lot of weather forecasts now, if you look out, you can go on to Weather Underground or whatever weather site you want. And you'll see the forecast goes out 10 days, and that's because there's not a lot of accuracy after that. So a lot of research is going into how do we extend into seasonal forecasts? I'm from Santa Rosa, California. We've been massively impacted by wildfires. And being able to understand how to prepare for the coming season is incredibly important. And surprisingly, I think to a lot of people, the ocean plays a big role in that.
The ocean can impact storm systems: how they grow, how they evolve, how much water, how much moisture they pick up from the ocean and then transport over land. So it's really interesting to talk about how the ocean impacts our weather and our seasonal weather. That's an area where people are doing a lot of research. And again, you're talking about different data sets, and being able to work together in a collaborative environment on the cloud is really what's starting to transform how people are working together, how they're communicating and how they're sharing their science. >> I just hope it opens up someone's possibilities. I want to get your vision of what breakthroughs you think might be possible with cloud for research and computing. Because you have kind of old school and new school. AWS CEO Andy Jassy calls it old guard, new guard. The new guard is really more looking for self-provisioning, auto-scaling, all that. Supercomputer on demand, all that stuff at your fingertips. Great, love that. But is there any opportunity for institutional change within the scientific community? What's your vision around the impact? It's not just scientific. It can also go to government for societal impact. So you start to see this modernization trend. What's your vision on the impact of cloud on the scientific community? >> I think the way the scientific community has been organized for a long time is that scientists are at an institute, and a lot of the research has been siloed. And it's siloed in part because of the way the funding mechanism works. But that inhibits creativity and inhibits collaboration. And it inhibits the advancement of science. Because if you hold onto data, you hold onto code. You're not allowing other people to work on it and to build on what you do.
The traditional way that scientists have moved forward is you make a discovery, you write up a paper, you describe it in a journal article, and then you publish that. Then if someone wants to build on your research, they get your journal article, they read it. Then they try to understand what you did. They maybe recode all of your analysis. So they're redoing the work that you did, which is simply not efficient. Then they have to download the data sets that you access. This slows down all of science. And it also inhibits bringing in new data sets again because you don't have access to them. So one of the things I'm really excited about with cloud computing is that by bringing our scientific ideas and our compute to the data, it allows us to break out of these silos and collaborate with people outside of our institution, outside of our country, and bring new ideas and new voices and elevate everyone's ideas to another level. >> It brings the talent and the ideas together. And now you have digital and virtual worlds, cause we've been virtualized with COVID-19. You can create content as a community building capability or your work can create a network effect with other peers. And is a flash mobbing effect of potential collaboration. So work, work forces, workplaces, work loads, work flows, kind of are interesting or kind of being changed in real time. You were just talking about speed, agility. These are technical concepts being applied to kind of real world scenarios. I mean your thoughts on that. >> I now work with people like right now, I'm working with students in Denmark, Oman, India, France, and the US. That just wasn't possible 10 years ago. And we're able to bring all these different voices together, which it really frees up science and it frees up who can participate in science, which is really fun. I mean, I'm a scientist. I do it because it's really, really fun. And I love working with other people. 
So this new ability that I've gained in the last couple of years by moving onto the cloud has really accelerated all the different types of collaborations I'm involved with. And hopefully it's accelerating science as a whole. >> I love this topic. It's one of my passion areas, an itch I've been scratching for over a decade too: content and your work are an enabler for community engagement, because you don't need to publish it to a journal. That's a waterfall mentality. But if you can publish something or create something and show it, demo it or illustrate it, that's better than a paper. If you're on video, you can talk about it. It's going to attract other people; like-minded peers can come together. That's going to create more collaboration data. That's going to create more solidarity around topics and accelerate the breakthroughs. >> For our last paper, we actually published all the software with it. We got a digital object identifier (DOI) for the software, published the software and then containerized it, so that when you read our paper, at the bottom of the paper you get a link. You go to that link, you click on a button and you're instantly in our compute environment; you can reproduce all of our results. Do the error propagation analysis that we did. And then if you don't like something, go ahead and change it or add onto it or ask us some questions. That's just magical. >> Yeah, it really is. And Amazon has been a real investor, and I've got to give props to Teresa Carlson and her team and Andy Jassy, the CEO, because they've been investing in credits and collaborating with groups like Jet Propulsion Lab, you guys, everyone else. Space has been a big part of that. I see Bezos loves space. So they've been investing in that and bringing that resource to the table. So you've got to give Amazon some props for that. But great work that you're doing. I'm fascinated. I think it's one of those examples where it's a moonshot, but it's doable.
It's like you can get there. >> Yeah, and it's just so exciting. I'm the lead on a proposal for a new science mission to NASA. And we are going all in with cloud computing. So we're going to do all the processing on the cloud. We want to put the entire science team on the cloud and create a science data platform where we're all working together. That's just never happened before. And I think that by doing this, we multiply the benefits of all of our analysis. We make it faster and we make it better and we make it more collaborative. So everyone wins. >> Sure, you're an inspiration to many. I'm so excited to do this interview with you. I love what you said earlier at the beginning about your focus being in computer science, physics, space. That confluence of multiple disciplines, not everyone can have that. Some people just get a computer science degree. Some people say, I'm premed, or I'm going to do biology, I'm going to do this. This notion of multiple disciplines coming together is really what society needs now, as we're converging, virtualizing, becoming a global society. And that brings up my final question. It's something I know that you're passionate about: creating a more inclusive scientific community, because you don't have to be just the computer science major. Now, if you have all three, it's a multi-tool; you're a multiple-skill player. But you don't have to be something to get into this new world. Because if you have certain disciplines, whether it's math, maybe you don't have computer science, but it's quick to learn. There are frameworks out there, no code, low code. So cloud computing supports this. What's your vision and what's your opinion of how more inclusivity can come into the scientific community? >> I think that, when you're at an institution or at a commercial company or a nonprofit, if you're at some sort of organized institution, you have access to things that not everyone has access to.
And in a lot of the world, there's trouble with internet connectivity. There is trouble downloading data. They simply don't have the ability to download large data sets. So I'm passionate about inclusivity because I think that, until we include global voices in science, we're not going to see these global results that we need to. We need to be more interdisciplinary. And that means working with different scientists in different fields. And if we can all work together on the same platform, that really helps explode interdisciplinary science and what can be done. A lot of science has been quite siloed, because you work at an institution. So you talk to the people one door down, or two doors down, or on the same floor. But when you start working in this international community, people don't have to be online all the time; they can write code and then just jump on and upload it. You don't need to have these big, powerful resources or institutions behind you. And that gives a platform for all types of scientists, at all levels, to start working with everyone. >> This is why I love the idea of the content and the community being horizontally scalable. Because if you're stuck around a physical institution or space, you kind of have groupthink, or maybe you have the same kind of ideas being talked about. But here, when you pull back to remote work with COVID-19, as an example, it highlights it. The remote scientist could be anywhere. So that's going to increase access. What can we do to accept those voices? Is there a way or an idea or a formula you see that people could use, assuming there's access, which I would say, yes? What do we do? What do you do?
So when you open up this box and you allow other voices to participate in science, you're going to get new and different answers. And as a scientist, you need to be open to allowing those voices to be heard, to acting on them, to including them in your research results, and to thinking about how they may change what you think and bring you to new conclusions. >> Machine learning has been a part of this too. I know your work in the past; obviously you're a big fan of cloud, I can tell, a proponent of it. Machine learning and AI can be a big part of this too, not only in sourcing new voices and identifying what's contextually relevant at any given time, but also on the science side. Can you take a minute to give your thoughts on the role and relevance of machine learning and AI? Because you've still got the humans and you've got machines augmenting each other, and that relationship is going to be a constant conversation point going forward. Is there data about the data, and what are the machines doing? What are your thoughts on all of this? Machine learning and AI as an impact. >> It's funny you say impact. So I work with this NASA IMPACT project, which is this interdisciplinary team that tries to advance science, and it's really into machine learning and AI. One of the difficulties when you start to do science is you have an idea like, okay, I want to study tropical storms. And then you have to go and wade through all these different types of data to identify when events happened, then gather all the data from those different events and start to try and do some analysis. They've been really successful in using AI to actually do this sort of event identification. So what's interesting is how we can use AI and machine learning to identify those interesting events and gather everything together for scientists to then analyze. So AI is being used in a lot of different ways in science.
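The event-identification idea can be illustrated with the simplest possible detector: compare each observation against a seasonal expectation and flag large positive anomalies. Everything below is synthetic and hypothetical (the series, the climatology, and the 2-degree threshold are invented for the sketch); operational detectors, such as marine-heatwave algorithms, build percentile climatologies from many years of data and often use learned models rather than a fixed threshold.

```python
import numpy as np

# Synthetic daily SST series: a seasonal cycle plus noise, with a warm
# "event" injected around day 195 that a scientist would want flagged.
rng = np.random.default_rng(0)
days = np.arange(365)
climatology = 20.0 + 5.0 * np.sin(2 * np.pi * days / 365)
sst = climatology + rng.normal(0.0, 0.3, size=365)
sst[195:210] += 3.0  # the injected warm event

# Flag days whose anomaly exceeds a fixed margin over the climatology.
event_days = np.flatnonzero(sst - climatology > 2.0)
```

With these numbers, the injected warm spell around days 195-209 is what gets flagged, while ordinary seasonal variation is not.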
It's being used to look at these multi-dimensional problems that are just a little bit too big for our brains to try and understand. But if we can use AI and machine learning to gather insights into certain aspects of them, it starts to lead to new conclusions and it starts to allow us to see new connections. AI and machine learning have this potential to transform how we do science. Cloud computing is part of that, because we have access to so much more data now. >> It's a real enabling technology. And when you have enabling technology, the power is in the hands of the creative minds. It's really what you can think up and what you can dream up, and that's going to come from people. Phenomenal. Final question for you, to kind of end on a light note. Dr. Chelle Gentemann here, senior scientist at the Farallon Institute. You're doing a lot of work on the ocean, space, ocean interaction. What's the coolest thing you're working on right now? Or that you've worked on that you think would be worth sharing. >> There's a couple of things. I have to think about what's the most fun. Right now, I'm working on doing some analysis with data. We had a big, huge international field campaign this winter off of Barbados; there were research vessels and aircraft. There were Saildrones involved, which are these autonomous robotic vehicles that go along the ocean surface and measure air-sea interactions. Right now we're working on analyzing that data. So we have all of this ground truth data. We're bringing in all the satellite observations to see how we can better understand the earth system in that region, with a specific focus on air-sea interactions over the ocean, where, when it rains, you get salinity stratification, and when there's strong solar heating, you get diurnal stratification. So you have upper-ocean stratification in both heat and salinity.
And how those impact the fluxes, and how the ocean impacts the heat and moisture transport into the atmosphere, which then affects weather. So again, this is a multidimensional data set with all these different types of both ground truth data and satellite data that we're trying to bring together, and it's really exciting. >> It could shape policy, it could shape society. Maybe have a real input into global warming, our behaviors in the world. Sounds awesome. Plus, I love the ground truth and the observational data. It sounds like our media business algorithm: we've got to get the observation, get the truth, report it. Sounds like there's something in there that we could learn from. (both giggling) >> Yeah, it's very interesting, 'cause you often find what you see from a distance is not quite true up close. >> I can tell you that we in media do a lot of investigative journalism, so we appreciate that. Dr. Chelle Gentemann, senior scientist at the Farallon Institute, here as part of AWS Public Sector Summit. Thank you so much for your time. What a great story. We'll keep in touch. Love the Saildrones. Great innovation. And continue the good work, I'm looking forward to checking in later. Thanks for joining. >> Thanks so much. It was nice talking to you. >> I'm John Furrier with theCUBE. We're here in our studios covering the Amazon Web Services Public Sector Summit virtual. This is theCUBE virtual, bringing you all the coverage with Amazon and theCUBE. Thanks for watching. (upbeat music)

Published Date : Jun 30 2020



Nutanix .NEXT Morning Keynote Day1


 

Section 1 of 13 [00:00:00 - 00:10:04] (NOTE: speaker names may be different in each section) Speaker 1: Ladies and gentlemen, our program will begin momentarily. Thank you. (singing) This presentation and the accompanying oral commentary may include forward-looking statements that are subject to risks, uncertainties and other factors beyond our control. Our actual results, performance or achievements may differ materially and adversely from those anticipated or implied by such statements because of various risk factors, including those detailed in our annual report on form 10-K for the fiscal year ended July 31, 2017, filed with the SEC. Any future product or roadmap information presented is intended to outline general product direction and is not a commitment to deliver any functionality, and should not be used when making any purchasing decision. (singing) Ladies and gentlemen, please welcome the Vice President of Corporate Marketing at Nutanix, Julie O'Brien. Julie O'Brien: All right. How about those Nutanix .NEXT dancers, were they amazing or what? Did you see how I blended right in? You didn't even notice I was there. [French 00:07:23] to .NEXT 2017 Europe. We're so glad that you could make it today. We have such a great agenda for you. First off, do not miss tomorrow morning. We're going to share the outtakes video of the handclap video you just saw. Where are the customers, the partners, the Nutanix employees who starred in our handclap video? Please stand up and take a bow. You are not going to want to miss tomorrow morning, let me tell you. That is going to be truly entertaining, just like the next two days we have in store for you: a content-rich, highly interactive number of sessions throughout our agenda. Wow! Look around, it is amazing to see how many cloud builders we have with us today. Side by side, you're more than 2,200 people who have traveled from all corners of the globe to be here.
That's double the attendance from last year at our first .NEXT Conference in Europe. Now perhaps some of you are here to learn the basics of hyperconverged infrastructure. Others of you might be here to build your enterprise cloud strategy. And maybe some of you are here just to network with the best and brightest in the industry, in this beautiful French Riviera setting. Well, wherever you are in your journey, you'll find customers just like you throughout all our sessions over the next two days. From Sligro to Schroders to Societe Generale, you'll hear from cloud builders sharing their best practices and their lessons learned, and how they're going all in with Nutanix for all of their workloads and applications, whether it's SAP or Splunk, Microsoft Exchange, unified communications, Cloud Foundry or Oracle. You'll also hear how customers just like you are saving millions of euros by moving from legacy hypervisors to Nutanix AHV. And you'll have a chance to pose some of your most challenging technical questions to the Nutanix experts that we have on hand: our Nutanix Technology Champions, our NPXs, our NPSs. Where are all the people out there with an N in front of their certification and an X, an R, an S, an E or a C at the end? Can you wave hello? You might be surprised to know that in Europe and the Middle East alone, we have more than 2,600 certified Nutanix experts. Those are customers, partners, and also employees. I'd also like to say thank you to our growing ecosystem of partners and sponsors who are here with us over the next two days. The companies that you meet here are the ones who are committed to driving innovation in the enterprise cloud. Over the next few days you can look forward to hearing from them and seeing some fantastic technology integration that you can take home to your data center come Monday morning.
Together with our partners, and you our customers, Nutanix has had such an exciting year since we gathered this time last year. We were named a leader in the Gartner Magic Quadrant for integrated systems two years in a row. Just recently Gartner named us the revenue market share leader in their recent market analysis report on hyper-converged systems. We now enjoy more than 35% revenue share. Thanks to you, our customers, we received a net promoter score of more than 90 points. Not one, not two, not three, but four years in a row. A feat that, I'm sure you'll agree, is not so easy to accomplish, so thank you for your trust and your partnership with us. We went public on NASDAQ last September. We've grown to more than 2,800 employees and more than 7,000 customers in 125 countries, and in our Q4 results, we added more than 250 customers in EMEA alone. That's about a third of all of our new customer additions. Today, we're at a pivotal point in our journey. We're just barely scratching the surface of something big, and Goldman Sachs thinks so too. What you'll hear from us over the next two days is this: Nutanix is on its way to becoming an iconic enterprise software company, by helping you transform your data center and your business with enterprise cloud software that gives you the power of freedom of choice and flexibility in the hardware, the hypervisor and the cloud. The power of one click, one OS, any cloud. And now, to tell you more about the digital transformation that's possible in your business and your industry, to share a little bit around the disruption that Nutanix has undergone and how we've continued to reinvent ourselves, and maybe, if we're lucky, share a few handclap dance moves, please welcome to the stage Nutanix Founder, CEO and Chairman, Dheeraj Pandey. Ready? Alright, take it away [inaudible 00:13:06]. >> Dheeraj P: Thank you. Thank you, Julie, and thank you, everyone.
It looks like people are still trickling in. Welcome to Acropolis. I just hope that we can move your applications to Acropolis faster than we've been able to move people into this room, actually. (laughs) But thank you, ladies and gentlemen. Thank you to our customers, to our partners, to our employees, to our sponsors, to our board members, to our performers, to everybody, for their precious time. 'Cause that's the most precious thing you actually have: time. I want to spend a little bit of time today, not a whole lot of time, but a little bit of time talking about the why of Nutanix. Like why do we exist? Why have we survived? Why will we continue to survive and thrive? And it's simpler than an acronym or a category name, the word hyper-convergence; I think we make it all complicated. Just thinking about what is it that we need to talk about today that really makes it relevant, that makes you take back something from this conference: that Nutanix is an obvious innovation. It's very obvious; what we do is not very complicated. Because the more things change, the more they remain the same, so can we draw some parallels from life, from what's going on around us in our own personal lives, that make this whole thing very natural, as opposed to "Oh, it's hyper-converged, it's a category, it's analysts and pundits and media." I actually think it's something new. It's not that different, so I want to start with some of that today. And if you look at our personal lives, everything that we had has been digitized. If anything, a lot of these gadgets became apps; they got digitized into a phone itself, you know. What's Nutanix? What have we done in the last seven, eight years? We digitized a lot of hardware. We made everything that used to be single-purpose hardware look like pure software. We digitized storage, we digitized the systems manager role, an operations manager role.
We are digitizing scripting; people don't need to write scripts anymore when they automate, because we can visually design automation with Calm. And we're also trying to make a case that the cloud itself is not just a physical destination. That it can be digitized and must be digitized as well. So we learn that from our personal lives too, but it goes on. Look at music. It used to be tons of things; you used to go to [inaudible 00:15:55] Records, and I'm sure there were European versions of [inaudible 00:15:57] Records as well, the physical things around us that then got digitized as well. And it goes on and on. We look at entertainment, it's very similar. The idea that you go to a movie hall, the idea that you buy these tickets, the idea that we'd have these DVD players and DVDs, they all got digitized. Or as [inaudible 00:16:20] want to call it, virtualized, actually. That is basically happening in pretty much new things that we never thought would look this different. One of the most exciting things happening around us is the car industry. It's getting digitized faster than we know. And in many ways that we'd not even imagined 10 years ago. The driver will get digitized. Autonomous cars. The engine is definitely gone; it's a different kind of an engine. In fact, we'll re-skill a lot of automotive engineers who actually used to work on mechanical things to look at real chemical things like battery technologies and so on. A lot of those things that used to be physical are now in software in the car itself. Media itself got digitized. Think about a physical newspaper, or physical ads in newspapers. Now we talk about virtual ads, the digital ads; they're all over on websites and so on, it's our digital experience now. Education is no different, you know; we look back at the kind of things we used to do physically with physical things. They're now all digital. The experience has become that digital. And I can go on and on.
You look at retail, you look at healthcare, look at a lot of these industries; they are all at the cusp of a digital disruption. And in fact, if you look at the data, everybody wants it. We all want a digital transformation for industries, for companies around us. In fact, the whole idea of a cloud is a highly digitized data center, basically. It's not just about digitizing servers and storage and networks and security; it's about virtualizing, digitizing the entire data center itself. That's what cloud is all about. So we all know that it's a very natural phenomenon, because it's happening around us, and that's the obviousness of Nutanix, actually. Why is it actually a good thing? Because obviously anything that we digitize, anything we work with in the digital world, brings 10X more productivity and decision-making efficiencies as well. And there are challenges, obviously there are challenges, but before I talk about the challenges of digitization, think about why are things moving this fast? Why are things becoming digitally disrupted quicker than we ever imagined? There are some reasons for it. One of the big reasons is obviously we all know about Moore's Law. The fact that a lot of hardware's been commoditized, and we have really miniaturized hardware. Nutanix today runs on a palm-sized server. Obviously it runs on the other end of the spectrum with high-end IBM Power systems, but it also runs on palm-sized servers. Moore's Law has made a tremendous difference in the way we actually think about consuming software itself. Of course, the internet is also a big part of this. The fact that there's a bandwidth glut, there's Trans-Pacific cables and Trans-Atlantic cables and so on, has really connected us a lot faster than we ever imagined, actually, and a lot of this was also the telecom revolution of the '90s, where we really produced a ton of glut for the internet itself. There's obviously a more subtle reason as well, because software development is democratizing.
There's consumer-grade programming languages that we never imagined 10, 15, 20 years ago, that's making it so much faster to write code, with this crowdsourcing that never existed before with GitHubs and things like that, open source. There's a lot more stuff that's happening that's outside the boundary of a corporation itself, which is making things so much faster in terms of getting disrupted and writing things at 10x the speed it used to be 20 years ago. There is obviously this technology at the tip of our fingers, and we all want it in our mobile experience while we're driving, while we're in a coffee shop, and so on; and there's a tremendous focus on design, on consumer-grade simplicity, that's making digital disruption that much more compressed. In some sense this whole cycle of creative disruption that we talk about is compressed because of mobility, because of design, because of APIs, the fact that machines are talking to machines, developers are talking to developers. We are going and miniaturizing the experience of organizations because we talk about micro-services and small two-pizza teams, and they all want to talk to each other using APIs and so on. Massive influence on this digital disruption itself. Of course, one of the reasons why this is also happening is because we want it faster, we want to consume it faster than ever before. And our attention spans are reducing. I like the fact that not many people are watching their cell phones right now, but you can imagine the multi-tasking mode that we are all in today in our lives makes us want to consume things at a faster pace, which is one of the big drivers of digital disruption. But most importantly, and this is a very dear slide to me, a lot of this is happening because of infrastructure. And I can't overemphasize the importance of infrastructure.
If you look at why Google succeeded: it was the ninth search engine, after eight of them before. And if you take a step back at why Facebook succeeded over MySpace and so on, a big reason was infrastructure. They believed in scale, they believed in low latency, they believed in being able to crunch information at 10x, 100x bigger scale than anyone else before. Even in our geopolitical lives, look at why China is succeeding. Because they've made infrastructure seamless. They've basically said, look, governance is about making infrastructure seamless and invisible, and then letting the businesses flourish. So for all you CIOs out there who actually believe in governance, you have to think about: what's my first role? What's my primary responsibility? It's to provide such a seamless infrastructure that lines of business can flourish with their applications, with their developers that can write code 10x faster than ever before. And a lot of these tenets of infrastructure: the fact of the matter is you need to have this always-on philosophy. The fact that it's a breach-safe culture. Or the fact that operating systems are hardware agnostic. A lot of these tenets basically embody what Nutanix really stands for. And that's the core of what we really have achieved in the last eight years and want to achieve in the coming five to ten years as well. There's a nuance; obviously we talk about digital, we talk about cloud, we talk about everything actually going to the cloud and so on. What are the things that could slow us down? What are the things that challenge us today? Which is the reason for Nutanix. Again, I go back to this very important point: the reason why we think enterprise cloud is a nuanced term is because the word "cloud" itself doesn't solve for a lot of the problems. The public cloud itself doesn't solve for a lot of the problems. One of the big ones, and obviously we face it here in Europe as well, is laws of the land.
We have bureaucracy, which we need to deal with and respect; we have data sovereignty and computing sovereignty needs that we need to actually fulfill as well, while we think about going at breakneck speed in terms of disrupting our competitors and so on. So there's laws of the land, there's laws of physics. This is probably one of the big ones for what the architecture of cloud itself will look like over the coming five to ten years. Our take is that clouds will need to be more dispersed than we have ever imagined, because computing has to be local to business operations. Computing has to be in hospitals and factories and shop floors and power plants and on and on and on... That's where you really can have operations and computing really co-exist together, 'cause speed is important there as well. Data locality is one of our favorite things; the fact that computing and data have to be local, at least the most relevant data has to be local as well. And the fact that electrons travel way faster when it's actually local, versus when you have to have them go over a Wide Area Network itself, is one of the big reasons why we think that the cloud will actually be more nuanced than just some large data centers. You need to disperse them, you need to actually think about software (cloud is about software). The data plane itself could be dispersed and even miniaturized in small factories and shop floors and hospitals. But the control plane of the cloud is centralized. And that's the way you can have the best of both worlds; the control plane is centralized. You think as if you're managing one massive data center, but it's not, because you're really managing hundreds or thousands of these sites. Especially if you think about edge-based computing and IoT, where you really have your tentacles in tens of thousands of smaller devices and so on.
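The architecture described here, a centralized control plane over many dispersed, local data planes, can be sketched in a few lines. This is a purely illustrative toy, not Nutanix code; the `Site` and `ControlPlane` names and their methods are invented for the sketch.

```python
# Illustrative sketch (all names hypothetical): a centralized control plane
# managing many dispersed sites. One logical command fans out to every edge
# location, while data and compute stay local to each site.

class Site:
    """A dispersed data plane: a small cluster at a hospital, factory, rig..."""
    def __init__(self, name):
        self.name = name
        self.version = "5.0"

    def upgrade(self, version):
        # Data never leaves the site; only the small control operation arrives.
        self.version = version

class ControlPlane:
    """The centralized control plane: one view over hundreds of sites."""
    def __init__(self):
        self.sites = []

    def register(self, site):
        self.sites.append(site)

    def one_click_upgrade(self, version):
        # One logical operation, fanned out to every registered site.
        for site in self.sites:
            site.upgrade(version)
        return [(s.name, s.version) for s in self.sites]

cp = ControlPlane()
for name in ["hospital-01", "factory-07", "rig-12"]:
    cp.register(Site(name))
print(cp.one_click_upgrade("5.5"))
# [('hospital-01', '5.5'), ('factory-07', '5.5'), ('rig-12', '5.5')]
```

The point of the sketch is the fan-out: you operate as if it were one data center, while the heavy data stays local to each of the hundreds or thousands of sites.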
We've talked about laws of the land, which is going to really make this digital transformation nuanced; laws of physics; and the third one, which is really laws of entropy. These are hackers that do this for adrenaline. These are parochial rogue states. These are parochial geo-politicians; you know, good thing I actually left the torture sign there, because apparently for our creative designer, geo-politics is equal to torture as well. So imagine, one bad tweet can actually result in big changes to the way we actually live in this world today. And it's important. Geo-politics itself is digitized to a point where you don't need a ton of media people to go and talk about your principles and what you stand for and what your strategy for running a country itself is, and so on. And these are all human reasons, political reasons, bureaucratic reasons, compliance and regulation reasons, and of course, laws of physics is yet another one. So laws of physics, laws of the land, and laws of entropy really make us take a step back and say, "What does cloud really mean, then?" 'Cause obviously we want to digitize everything, and it all should appear like it's invisible, but then you have to nuance it for the Global 5000, the Global 10000. There's lots of companies out there that need to really think about GDPR and Brexit and a lot of the things that you all deal with on an everyday basis, actually. And that's what Nutanix is all about. Balancing what we think is all about technology, and balancing that with things that are more real and practical. To deal with, grapple with, these laws of the land and laws of physics and laws of entropy. And that's where we believe we need to go and balance the private and the public. That's the architecture, that's the why of Nutanix. To be able to really think about frictionless control.
You want things to be frictionless, but you also realize that you are a responsible citizen of this continent, of your countries, and you need to actually do governance of things around you, which is computing governance, and data governance, and so on. So this idea of melding the public and the private is really about melding control and frictionless together. I know these are paradoxical things to talk about, like how do you really have frictionless control, but that's the life you all lead, and as leaders we have to think about this series of paradoxes itself. And that's what the Nutanix strategy, the roadmap, the definition of enterprise cloud is really thinking about: frictionless control. And in fact, one of the things that's also very interesting: think about what's disrupting Nutanix as a company? We will be getting disrupted along the way as well. It's this idea of true invisibility, the public cloud itself. I'd like to actually bring on board somebody who I have a ton of respect for, this leader of a massive company, which itself is undergoing disruption. Which is helping a lot of its customers undergo disruption as well, and which is thinking about how the life of a business analyst is getting digitized. And what about the laws of the land, the laws of physics, and laws of entropy, and so on. And we're learning a lot from this partner, a massively giant company, called IBM. So without further ado, Bob Picciano. >> Bob Picciano: Thanks. >> Speaker 1: Thank you so much, Bob, for being here. I really appreciate your presence here- >> Bob Picciano: My pleasure! >> Speaker 1: And for those of you who actually don't know Bob, Bob is a Senior VP and General Manager at IBM, all things cognitive. Obviously, I learn a lot from a lot of leaders that have spent decades really looking at digital disruption. >> Bob: Did you just call me old? >> Speaker 1: No.
(laughing) I want to talk about experience and talk about the meaning of history, because I love history, actually, you know, and I don't want to make you look old actually; you're too young right now. When we talk about digital disruption, we look at ourselves and say, "Look, we are invisible, but we have not made something as invisible as the public cloud itself." But what does digital disruption mean for IBM itself? Now, obviously a lot of hardware is being digitized into software and cloud services. >> Bob: Yep. >> Speaker 1: What does it mean for IBM itself? >> Bob: Yeah, if you allow me to take a step back for a moment, I think there is some good foundational understanding that'll come from a particular point of view. And you talked about a number of these dimensions that are affecting the way businesses need to consider their competitiveness. How they offer their capabilities into the marketplace. And as you reflected upon IBM, you know, we've had decades of involvement in information technology. And there's a big disruption going on in the information technology space. But it's what I call an accretive disruption. It's a disruption that can add value. If you were to take a step back and look at that digital trajectory at IBM, you'd see our involvement with information technology in a space where it was all oriented around adding value and capability to how organizations managed at-scale processes. Thinking about the way they were going to represent their businesses in a digital form. We came to call them applications. But it was how do you open an account, how do you process a claim, how do you transfer money, how do you hire an employee? All the policies of a company, the way the people used to do it mechanically, became digital representations. And that foundation of the digital business process is something that IBM helped define.
We invented the role of the CIO to help really sponsor and usher in this notion that businesses could re-represent themselves in a digital way, and that allowed them to scale predictably with the qualities of their brand, from local operations, to regional operations, to international operations, and show up the same way. And that added a lot of value to business for many decades. And we thrived. Many companies, SAP among them, thrived during that span. But now we're in a new space where the value of information technology is hitting a new inflection point. Which is not about how you scale process, but how you scale insight, and how you scale wisdom, and how you scale knowledge and learning from those operational systems and the data that's in those operational systems. >> Speaker 1: How's it different from 1993? We're talking about disruption. There was a time when IBM reinvented itself, 20-25 years ago. >> Bob: Right. >> Speaker 1: And you said it's bigger than 25 years ago. Tell us more. >> Bob: You know, it goes right down to the foundation. Everything we know about that process space, right down to the very foundation, the very architecture of the CPU itself and the computer architecture, the von Neumann architecture, was all optimized for those relatively static, scaled business processes. When you move into the notion where you're going to scale insight, scale knowledge, you enter the era that we call the cognitive era, or the era of intelligence. The algorithms are very different. You know, the data semantically doesn't integrate well across those traditional process-based pools of information. So new capabilities like deep learning, machine learning, the whole field of artificial intelligence, allow us to reach into that data. Much of it unstructured, much of it dark, because it hasn't been indexed and brought into the space where it is directly affecting decision making processes in a business. And you have to be able to apply that capability to those business processes.
You have to rethink the computer, the circuitry itself. You have to think about how the infrastructure is designed and organized, the network that is required to do that; the experience of the applications, as you talked about, has to be very natural, very engaging. So IBM does all of those things. So as a function of our transformation that we're on now, we've had to reach all the way back to rethinking the CPU and what we dedicate our time and attention to. To our services organization, which is over 130,000 people on the consulting side helping organizations add digital intelligence to this notion of a digital business. Because the two things are really a confluence of what will make this vision successful. >> Speaker 1: It looks like massive amounts of change for half a million people who work with the company. >> Bob: That's right. >> Speaker 1: I'm sure there are a lot of large customers out here who will also read into this and say, "If IBM feels disrupted ... >> Bob: Uh hm >> Speaker 1: How can we actually not be vulnerable?" There are massive amounts of change around their own competitive landscape as well. >> Bob: Look, I think every company should feel vulnerable, right. If you're in this age, this cognitive era, the age of digital intelligence, and you're not making a move into being able to exploit the capabilities of cognition in the business process, you are vulnerable. If you're at that intersection, and your competitor is passing through it, and you're not taking action to be able to deploy cognitive infrastructure in conjunction with the business processes, you're going to have a hard time keeping up. Because it's about using the machines to do the training to augment the intelligence of our employees, of our professionals. Whether that's a lawyer, or a doctor, an educator, or whether that's somebody in a business function who's trying to make a critical business decision about risk or about opportunity.
>> Speaker 1: Interesting, very interesting. You used the word cognitive infrastructure. >> Bob: Uh hm >> Speaker 1: There's obviously compute infrastructure, data infrastructure, storage infrastructure, network infrastructure, security infrastructure, and the core of cognition has to be infrastructure as well. >> Bob: Right >> Speaker 1: Which is one of the things that the two companies are working together on. Tell us more about the collaboration that we are actually doing. >> Bob: We are so excited about our opportunity to add value in this space, so we do think very differently about the cognitive infrastructure that's required for this next generation of computing. You know, I mentioned the original CPU was built for very deterministic, very finite operations; large precision floating point capabilities to be able to accurately calculate the exact balance, the exact amount of transfer. When you're working in the field of AI, in cognition, you actually want variable precision. Right. The data is very sparse, as opposed to the way that deterministic or stochastic operations work, which is very dense or very structured. So the algorithms are redefining the processes that the circuitry actually has to run. About five years ago, we dedicated a huge effort to rethink everything about the chip and what we made, to facilitate an orchestra of participation to solve that problem. We all know the GPU has a great benefit for deep learning. But the GPU in many cases, in many architectures, specifically Intel architectures, is dramatically confined by a very small amount of IO bandwidth that Intel allows to go on and off the chip. At IBM, we looked at all roughly 686 square millimeters of our chip and said, how do we use that square area to open up that IO bandwidth? So the innovation of a GPU or an FPGA could really be utilized to its maximum extent.
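Bob's contrast between wide, exact floating point for transactions and variable, reduced precision for AI can be made concrete with a toy example. This is purely illustrative of the precision trade-off he describes, not IBM's actual chip design; it uses Python's standard `array` module.

```python
# A toy illustration of the precision point, not IBM's design: transactional
# computing wants wide, exact floating point ("the exact balance"), while AI
# workloads tolerate reduced precision, which cuts the volume of data that
# has to move across the chip's IO bandwidth.
from array import array

n = 1_000_000
weights64 = array('d', [0.0]) * n   # 'd' = 64-bit C double per element
weights32 = array('f', [0.0]) * n   # 'f' = 32-bit C float per element

print(weights64.itemsize * n)  # 8000000 bytes of traffic for a million values
print(weights32.itemsize * n)  # 4000000 bytes: half the IO for the same count
```

Deep learning hardware pushes this further still (16-bit and lower formats), which is why opening up on-chip IO bandwidth matters so much for those workloads.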
And we could be an orchestrator of all of the diverse compute that's going to be necessary for AI to really compel these new capabilities. >> Speaker 1: It's interesting that you mentioned the fact that, you know, Power chips have been redefined for the cognitive era. >> Bob: Right, for Linux, for the cognitive era. >> Speaker 1: Exactly, and now the question is, how do you make it simple to use as well? How do you bring simplicity, which is where ... >> Bob: That's why we're so thrilled with our partnership. Because you talked about the why of Nutanix. And it really is about that empowerment. Doing what's natural. You talked about the benefits of Calm and being able to really create that liberation of an information technology professional, whether it's in operations or in development. Having the freedom of action to make good decisions about defining the infrastructure and deploying that infrastructure, and not having to second-guess the physical limitations of what they're going to have to be dealing with. >> Speaker 1: That's why I feel really excited about the fact that you have the power of software to really meld the two platforms together. The Intel platform and the Power platform come together. And we have some interesting use cases that our CIO Randy Phiffer is also really exploring: how can a Power platform serve as a storage platform for our Intel platform? >> Bob: Sure. >> Speaker 1: It can serve files and blocks and things like that. >> Bob: Any data intensive application, where we have seen massive growth in our Linux business. Now for our business, Linux is 20% of the revenue of our Power systems. You know, we started enabling native Linux distributions, little-endian ones, on top of the Power capabilities just a few years ago, and it's rocketed.
And the reason for that is, for any data intensive application like a database, a NoSQL database or a structured database, Hadoop in the unstructured space, they typically run about three to four times better price performance on top of Linux on Power than they will on top of an Intel alternative. >> Speaker 1: Fascinating. >> Bob: So all of these applications that we're talking about either create or consume a lot of data, have to manage a lot of flexibility in that space, and Power is a tremendous architecture for that. And you mentioned also the cohabitation, if you will, between Intel and Power. What we want is that optionality: for you to utilize those benefits of the 3X better price performance where they apply, and utilize the commodity base where it applies. So you get the cost benefits in that space, and the depth and capability in the space for Power. >> Speaker 1: Your tongue-in-cheek remark about commodity Intel is not lost on people, actually. But tell us about ... Obviously we digitized Linux 10, 15 years ago with [inaudible 00:40:07]. Have you thought about digitizing AIX? That is the core of IBM's business for the last 20, 25, 30 years. >> Bob: Again, it's about this ability to complement and extend the investments that businesses have made during their previous generations of decision making. This industry loves to talk about shifts. We talked about this earlier. That was old, this is new. That was hard, this is easy. It's not about shift; it's about using the inflection point, the new capability, to extend what you already have to make it better. And that's one thing that I must compliment you, and the entire Nutanix organization, on. It's really empowering those applications as a catalog to be deployed, managed, and integrated in a new way, and to have seamless interoperability into the cloud. We see the AIX workload having that same benefit for those businesses.
And there are many, many tens of thousands around the world that are critically dependent on that operating platform in every element of their daily operations and productivity. But to introduce that into that network effect as well. >> Speaker 1: Yeah. I think we're looking forward to how we bring the same cloud experience to AIX as well, because as a company it keeps us honest when we don't scoff at legacy. We look at these applications from the last 10, 15, 20 years and say, "Can we bring them into the new world as well?" >> Bob: Right. >> Speaker 1: That's what design is all about. >> Bob: Right. >> Speaker 1: That's what Apple did with music. We'll take an old world thing and make it really new world. >> Bob: Right. >> Speaker 1: The way we consume things. >> Bob: That governance. The capability to help protect against the bad actors, the nefarious entropy players, if you will. That's what it's all about. That's really what it takes to do this for the enterprise. It's okay, and possibly easier, to do it in smaller islands of containment, but when you think about bringing this class of capabilities into an enterprise, and really helping an organization drive both the flexibility and empowerment benefits of that, but really be able to depend upon it for international operations, you need that level of support. You need that level of capability. >> Speaker 1: Awesome. Thank you so much, Bob. Really appreciate you coming. [crosstalk 00:42:14] Look forward to your [crosstalk 00:42:14]. >> Bob: Cheers. Thank you. >> Speaker 1: Thanks again to all of you. I know that people are sitting all the way up there as well, which is remarkable. I hope you can actually see some of the things that Sunil and the team will actually bring about, talk about live demos. We do real stuff here, which is truly live.
I think one of the requests that I have is: help us help you navigate the digital disruption that's upon you, and the competitive landscape that's around you that's really creating that disruption. Thank you again for being here, and welcome again to Acropolis. >> Speaker 3: Ladies and gentlemen, please welcome Chief Product and Development Officer, Nutanix, Sunil Potti. >> Sunil Potti: Okay, so I'm going to just jump right in because I know a bunch of you guys are here to see the product as well. We have a lot of demos lined up for you guys, and we'll try to mix in the slides and the demos as well. Here's just an example of the things I always bring up at these conferences, to look around and say, in the last few months, are we making progress in simplifying infrastructure? You guys have heard this again and again; this has been our mantra from the beginning, that the hotter things get, the more differentiated a company like Nutanix can be if we can make things simple, or keep things simple. Even though I like this a lot, we found something a little bit more interesting, I thought, by our European marketing team. If you guys need these tea bags, which you will need pretty soon. It's a new tagline for the company. Not really. I thought it was apropos. But before I get into the product and the demos, to give you an idea: every time I go to an event, you find ways to memorialize the event. You meet people, you build relationships, you see something new. Last night, nothing to do with the product, I sat beside someone. It was a customer event. I had no idea who I was sitting beside. He was a speaker. How many of you guys know him, by the way? Sir Ranulph Fiennes. Few hands. Good for you. I had no idea who I was sitting beside. I said, "Oh, somebody called Sir. I should be respectful." It's kind of hard for me to be respectful, but I tried. He says, "No, I didn't do anything in that sense.
My grandfather was knighted about 100 years ago because he was the governor of Antigua. And when he dies, his son becomes." And apparently Sir Ranulph's dad also died in the war, and so that's how he is a sir. But then I started looking him up, because he's obviously getting ready to present. And the background for him is, in my opinion, even though the term goes he's the World's Greatest Living Explorer, I would have actually called him the World's Number One Stag, and I'll tell you why. Really, you should go look it up. So this guy, at the age of 21, gets admitted to Special Forces. If you're from the UK, this is as good as it gets: SAS. Six, seven years into it, he rebels, helps out his local partner, because he doesn't like a movie crew that's building a dam inside this pretty village. And he goes and blows up a dam, and he's thrown out of the Special Forces. Obviously he's in demolitions. Goes all the way. This is the '60s, by the way. Remember, he's 74 right now. In the '60s he goes to Oman, all by himself, as the only guy, only white guy there. And then around the '70s, he starts truly exploring, truly exploring. And this is where he becomes really, really famous. You have to go see this in real life, when he shows these videos, to really appreciate the impact of this guy. All by himself, he's gone across the world. He's actually gone across Antarctica. Now, he tells me that Antarctica is the size of China and India put together, and he was prepared for -50 to -60 degrees, and obviously he got -130 degrees. Again, you have to see the videos, see his frostbite. Two of his fingers are cut off, by the way. He hacksawed them himself. True story. And then as he, obviously, aged, his body couldn't keep up with him, but his will kept up with him. So after a recent heart attack, he actually ran seven marathons. But most importantly, he was telling me this story: at 65 he wanted to do something different, because his body was letting him down. He said, "Let me do something easy."
So he climbed Mount Everest. My point being, what is this related to Nutanix? It's that if Nutanix is a company that, with our technology, allows you to spend more time on life, then we've accomplished a piece of our vision. So keep that in mind. Keep that in mind. Now comes the boring part, which is the product. The why, what, how of Nutanix. Dheeraj talked about this. We have two acts in this company. Invisible Infrastructure was what we started off with. You heard us talk about it. How did we do it? Using one-click technologies, by converging infrastructure: compute, storage, virtualization, et cetera, et cetera. What we are now about is changing the game. Saying that just like we replicated what powers Google and Amazon inside the data center, could we now make them all invisible? Whether it be inside or outside, could we now make clouds invisible? Clouds could be made invisible by a new level of convergence, not about compute and storage, but converging public and private, converging CAPEX and OPEX, converging consumption models. And there, beyond our core products, Acropolis and Prism, are these new products. As you know, we have this core thesis, right? The core thesis says what? Predictable workloads will stay inside the data center, elastic workloads will go outside, as long as the experience on both sides is the same. So if you can genuinely have a cloud-like experience delivered inside a data center, then that's the right answer for predictable workloads. And for elastic workloads, it doesn't matter whether it's security or compliance; eventually a public cloud will have a data center right beside your region, whether through a local partner or a top three cloud partner. And you should use it as your public cloud of choice. And so, our goal is to ensure that those two worlds are converged. And that's what Calm does, and we'll talk about that.
But at the same time, what we found in late 2015 was we had a bunch of customers come to us and say, "Look, I love this. I love the fact that you're going to converge public and private and all that good stuff. But I have these environments and these apps that I want to be delivered as a service, but with the same operational tooling. I don't want to have two different environments, but I don't want to manage my data centers. Especially my secondary data centers, DR data centers." And that's why we created Xi, right? And you'll hear a lot more about this; obviously it's going to start off in the U.S. but very rapidly launch in Europe and APJ globally in the next 9-12 months. And so we'll spend some quality time on those products as well today. So, from the journey that we're at, we're starting with this core cloud that essentially says, "Look, your public and private need to be the same." We call that the first instantiation of your cloud architecture, and we essentially, as a company, want to build this enterprise cloud operating system as a fabric across public and private. But that's just the starting point. The starting point evolves to this core architecture where we believe that the cloud is being dispersed. Just like you have public and private clouds in the core data centers and so forth, you'll need a similar experience inside your remote office/branch office, inside your DR data centers, inside your branches, and it won't stop there. It'll go all the way to the edge. And we're already seeing this, right? Not just in the army, where you have forward operating bases in Afghanistan with a three-node cluster sitting inside a tent. We're seeing this in a variety of enterprise scenarios. And here's an example.
So, here's a customer, a global oil and gas company. It has a couple of primary data centers running Nutanix, uses GCP as its core public cloud platform, has a whole bunch of remote offices, but it also has these interesting new edge locations in the form of these small, medium, and large size rigs. And today, they're in the process of building a next-generation cloud architecture that's completely dispersed. They're using one node, coming out in version 5.5 with Nutanix. They're going to use two nodes, they're going to use three nodes, multi-cluster architectures. Day one, they're going to centrally manage it using Prism, with one-click upgrades, right? And then on top of that, they're also now provisioning, using Calm, purpose-built apps for the various locations. So, for example, there will be a rig control app at the edge, there's an exploration data lake in Google, and so forth. My point being that increasingly this architecture that we're talking about is happening in real time. It's no longer just an existing server virtualization data center that's being re-platformed to look like a private cloud and so forth, or a hybrid cloud. The fact is that you're going into this multi-cloud era, and it's getting accelerated: the more someone consumes AWS, GCP, or any public cloud, the more they're accelerating their internal transformation to this multi-cloud architecture. And so that's what we're going to talk about today, this construct of ONE OS and ONE Click. And when you think about it, every company has a standard stack. So, this is the only slide you're going to see from me today that's a stack, okay? And if you look at the new release coming out, version 5.5, it's coming out imminently, the easiest way to say it is that it's got a ton of functionality. We've jammed as much as we can onto one slide and then built a product, basically, okay? But I would encourage you guys to check out the release; it's coming out shortly.
And we could go into each and every feature here; we'd be spending a lot of time. But the way that we look at building Nutanix products, as many of you know, is not a feature at a time. It's an experience at a time. And so, when you really look at Nutanix using a lateral view, and that's how we approach problems with our customers and partners, we think about it as a life cycle, all the way from learning to using, operating, and then getting support and experiences. And today, we're going to go through each of these stages with you. And who better to talk about it than our local version of an architect. Steven Poitras, please come up on stage. I don't know where you are, Steven, come on up. You tucked your shirt in? Steven: Just for you guys today. Sunil: Okay. Alright. He's sort of been putting on weight. I know you use a couple of tight buckles there. But, okay, so Steven, I know we're looking for the demo here. So, what we're going to do is, the first step, most of you guys know this, is we've been quite successful with CE; it's been a great product. How many of you guys like CE? Come on. Alright. I know you had a hard time downloading it yesterday, apparently; a bunch of guys had a hard time downloading it. But it's been a great way for us not just to get you guys to experience it, there's more than 25,000 downloads and so forth, but it's also a great way for us to see new features like IEME and so forth. So, keep an eye on CE, because we're going to, if anything, expand the way that we actually use it as a way to get new features out in the next 12 months. Now, one thing beyond CE that we did, and this was something that took us about 12 months to get out: while people were using CE to learn a lot, a lot of customers were actually getting into full-blown competitive evals, right? Especially with HCI being so popular and so forth. So, we came up with our own version called X-Ray. Steven: Yup.
Sunil: What does X-Ray do, before we show it? Steven: Yeah, absolutely. So, if we think about back in the day, we were really the only HCI platform out there on the market. Now there are a few others. So, to basically enable the customer to objectively test these, we came out with X-Ray. And rather than talking about the slide, let's go ahead and take a look. Okay, I think it's ready. Perfect. So, here's our X-Ray user interface. And essentially what you do is you specify your targets. So, in this case we have a Nutanix NX-8150 as well as some of our competitors' products, which we've actually tested. Now, we can see on the left-hand side here a series of tests. So, what we do is we go through and specify certain workloads, like OLTP workloads or database colocation, and while we do that we actually inject certain test cases or scenarios. So, this can be snapshots or component failures. Now, one of the key things is having the ability to test these against each other. So, what we see here is we're actually taking an OLTP workload where we're running two virtual machines, and then we can see the IOPS the OLTP VMs are actually performing here on the left-hand side. Now, as we go through this test, we perform a series of snapshots, which are identified by these red lines here. Now, as you can see, the Nutanix platform, which is shown by this blue line, is purely consistent as we go through this test. However, our competitor's product actually degrades in performance over time as these snapshots are taken. Sunil: Gotcha. And some of these tests, by the way, are not just about failure or benchmarking, right? It's a variety of tests that we have that mimic real-life production workloads. So, every couple of months we actually look at our production workloads out there, distill those into test cases, and put them into X-Ray. So, X-Ray's one of those that has been more recently announced to the public. But it's already gotten a lot of uptake.
I would strongly encourage you, even if you're an existing Nutanix customer: it's a great way to keep us honest, it's a great way for you to actually expand your usage of Nutanix by putting a lot of these real-life tests into production, and as and when you look at new alternatives as well, there'll be certain situations that we don't do as well, and that's a great way to give us feedback on it. And so, X-Ray is there. The other one, which is more recent by the way, is the fact that most of you have spent many days if not weeks, after you've chosen Nutanix, moving non-Nutanix workloads, i.e. VMware on three-tier architectures, to AHV on Nutanix. And to do that, we took a hard look and came out with a new product called Xtract. Steven: Yeah. So essentially, if we think about what Nutanix has done for the data center, it really enables that iPhone-like experience, really bringing simplicity and intuitiveness to the data center. Now, what we wanted to do is provide that same experience for migrating existing workloads to us. So, with Xtract, essentially what we've done is we scan your existing environment, we create a design spec, and we handle the migration process as well as the cutover. Now, let's go ahead and take a look at our Xtract user interface here. What we can see is we have a source environment. In this case, this is a vCenter environment. This can be any vCenter, whether it's traditional three-tier or hyperconverged. We also see our Nutanix target environments. Essentially, these are our AHV target clusters where we're going to be migrating the data and performing the cutover to. Sunil: Gotcha. Steven: The first thing that we do here is go ahead and create a new migration plan. Here, I'm just going to specify this as DB Wave 2. I'll click okay. What I'm doing here is selecting my target Nutanix cluster, as well as my target Nutanix container.
Once I do that, I'll click next. Now, in this case, we actually like to do it big. We're actually going to migrate some production virtual machines over to this target environment. Here, I'm going to select a few Windows instances, which are in our database cluster. I'll click next. At this point, essentially what's occurring is it's going through, taking a look at these virtual machines as well as at the target environment. It takes a look at the resources to ensure that we actually have enough, ample capacity to facilitate the workload. The next thing we'll do is go ahead and type in our credentials here. This is actually going to be used for logging into the virtual machines to do new device driver installation, as well as to get any static IP configuration. We'll specify our network mapping. Then from there, we'll click next. What we'll do is we'll actually save and start. This will go through, create the migration plan, and do some analysis on these virtual machines to ensure that we can actually log in before we start migrating data. Here we have a migration which has been in progress. We can see we have a few virtual machines, obviously some Linux, some Windows here. We've cut over a few. What we do to actually cut over these VMs is go ahead and select the VMs. Sunil: This is the actual task of doing the final stage of cutover. Steven: Yeah, exactly. That's one of the nice things. Essentially, we can migrate the data whenever we want. We actually hook into the VADP APIs to do this. Then every 10 minutes, we send over a delta to sync the data. Sunil: Gotcha, gotcha. That's how one-click migration can now be possible. This is something that, if you guys haven't used it, has been out in the wild just for a month or so. It's been probably one of our bestselling, because it's free, bestselling features of the recent product release.
I've had customers come to me and say, "Look, there are situations where it's taken us weeks to move data." That is now minutes from the operator perspective. Forget the director or the VP; it's the line architect and operator that really loves these tools, which is essentially the core of Nutanix. That's one of our core things: to make sure that if we can keep the engineer and the architect truly happy, then everything else will be fine for us, right? That's Xtract. Then we have a lot of things, right? We've done the usual things; there's a ton of functionality on day zero, day one, day two kinds of capabilities. Why don't we start with something around Prism Central, now that we can do one-click PC installs? We can do PC scale-outs, we can go from managing thousands of VMs to tens of thousands of VMs, while doing all the one-click operations, right? Steven: Yep. Sunil: Why don't we take a quick look at what's new in Prism Central? Steven: Yep, absolutely. Here, we can see our Prism Element interface. As you mentioned, one of the key things we added here was the ability to deploy Prism Central very simply, just with a few clicks. We'll actually go through a distributed PC scale-out deployment here. Here, we're actually going to deploy, as this is a new instance. We're going to select our 5.5 version. In this case, we're going to deploy a scale-out Prism Central cluster. Obviously, availability and uptime are very critical for us, as we're mainly distributed systems. In this case we're going to deploy a scale-out PC cluster. Here we'll select our number of PC virtual machines. Based upon the number of VMs, we can actually select the size of VM that we'll deploy. If we want 25K-VM support, we can do that as well. Sunil: Basically a thousand to tens of thousands of VMs are possible now. Steven: Yep. The nice thing is you can start small, and then scale out as necessary. We'll select our PC network, and go ahead and input our IP address.
Now, we'll go and deploy. Now, here we can see it's actually kicked off the deployment, so it'll go provision these virtual machines and apply the configuration. In a few minutes, we'll be up and running. Sunil: Right. While Steven's doing that: one of the things that we've obviously invested in is a ton of making VM operations invisible. Now with Calm, what we've done is up-level that abstraction to applications. At the end of the day, more and more, when you go to AWS, when you go to GCP, you go to [inaudible 01:04:56], right? The level of abstraction is now at an app level; it's CloudFormation and so forth. Essentially, what Calm is able to do is give you this marketplace that you can go in and self-service [inaudible 01:05:05], create this internal cloud-like environment for your end users, whether they be business owners or technology users, to self-serve themselves. The process is pretty straightforward. You, as an operator, or an architect, or [inaudible 01:05:16], create these blueprints. Consumers within the enterprise, whether they be self-service users or end business users, are able to consume them from a simple marketplace and deploy them, whether it be on a private cloud using Nutanix, or on public clouds with any of the public choices. Then, in a single pane of glass, as operators, you're doing converged operations at an application-centric level between [inaudible 01:05:41] across any of these clouds. It's this combination of producer, consumer, operator in a curated sense, much like an iPhone with an app store. That's the core construct that we're trying to get with Calm: to up-level the abstraction interface across multiple clouds. Maybe we'll do a quick demo of this, and then get into the rest of the stuff, right? Steven: Sure. Let's check it out. Here we have our Prism Central user interface. We can see we have two Nutanix clusters, our cloudy04 as well as our Power8 cluster.
One of the key things here that we've added is this apps tab. Clicking on this apps tab, we can see that we have a few [inaudible 01:06:19] solutions, we have a TensorFlow solution, a [inaudible 01:06:22], et cetera. The nice thing about this is, this is essentially a marketplace where vendors as well as developers can produce these blueprints for consumption by the public. Now, let's actually go ahead and deploy one of these blueprints. Here we have an HR employee engagement app. We can see we have three different tiers of services as part of this. Sunil: You need a lot of engagement in HR, you know that. Okay, keep going. Steven: The next thing we'll do here is click on it. Based upon this, we'll specify our blueprint name, HR app. The nice thing when I'm deploying is I can actually put in back doors. We'll click clone. Now what we can see here is our blueprint editor. As a developer, I could actually go make modifications, or even as an end user, given the simple, intuitive user interface. Sunil: This is the consumer side right here, but it's also the [inaudible 01:07:11]. Steven: Yep, absolutely. Yeah, if I wanted to make any modifications, I could select a tier, I could scale out the number of instances, I could modify the packages. Then to actually deploy, all I do is click launch, specify HR app, and click create. Sunil: Awesome. Again, this is coming in 5.5. There's one other feature, by the way, that is coming in 5.5 surrounding Calm, and Prism Pro, and everything else, that seems to be a much-awaited feature for us. What was that? Steven: Yeah. Obviously when we think about multi-tenant, multi-cloud, role-based access control is a very critical piece of that. Obviously within the organization, we're going to have multiple business groups, multiple units. RBAC's a very critical piece. Now, if we go over here to our projects, we can see in this scenario we just have a single project.
What we've added is, if you want to specify certain roles, in this case we're going to add our good friend John Doe. We can add them, it could be a user or a group, and then we specify their role. We can give a developer the ability to edit and create these blueprints, or a consumer the ability to actually provision based upon them. Sunil: Gotcha. Basically in 5.5, you'll have role-based access control now in Prism and Calm baked into that, and I believe it'll support custom roles shortly after. Steven: Yep, okay. Sunil: Good stuff, good stuff. I think this is where the Nutanix guys are supposed to clap, by the way, so that the rest of the guys can clap. Steven: Thank you, thank you. Okay. What do we have? Sunil: We have day one stuff. Obviously there's a ton of stuff that's coming in core data path capabilities that most of you guys use. One of the most popular things is synchronous replication, especially in Europe. Everybody wants to do Metro for whatever reason. But we've got something new, something even more enhanced than Metro, right? Steven: Yep. Sunil: Do you want to talk a little bit about it? Steven: Yeah, let's talk about it. If we think about what we had previously, we started out with asynchronous replication. This is essentially going to be your higher RPO. Then we moved into Metro cluster, which was RPO zero. Those are the two ends of the gamut. What we did is we introduced near-synchronous replication, which really gives you the best of both worlds, where you have very, very low RPOs but zero impact on mainstream performance. Sunil: That's it. Let's show something. Steven: Yeah, yeah. Let's do it. Here, we're back at our Prism Element interface. We'll go over here. At this point, we've provisioned our HR app; the next thing we need to do is protect that data. Let's go here to protection domain. We'll create a new PD for our HR app. Sunil: You clearly love HR. Steven: Spent a lot of time there.
Sunil: Yeah, yeah, yeah. Steven: Here, you can see we have our production LAMP DB VM. We'll go ahead and protect that entity. We can see that's protected. The next thing we'll do is create a schedule. Now, what would you say would be a good schedule we should actually shoot for? Sunil: I don't know, 15 minutes? Steven: 15 minutes is not bad, but I think the people here deserve much better than that, so I say let's shoot for ... what about 15 seconds? Sunil: Yeah. They definitely need a bathroom break, so let's do 15 seconds. Steven: Alright, let's do 15 seconds. Sunil: Okay, sounds good. Steven: K. Then we'll select our retention policy and the remote cluster to replicate to, which in this case is wedge. And we'll go ahead and create the schedule here. Now at this point we can see our protection domain. Let's go ahead and look at our entities. We can see our database virtual machine. We can see our 15-second schedule, our local snapshots, as well as we'll start seeing our remote snapshots. Now, essentially what occurs is we take two very quick snapshots to essentially seed the initial data, and then based upon that, we'll start taking our continuous 15-second snaps. Sunil: 15-second snaps, and obviously near-sync has less of an impact than synchronous, right? From an architectural perspective. Steven: Yeah, and the nice thing is essentially within the cluster it's truly pure synchronous, but externally it's just a lagged async. Sunil: Gotcha. So there you see some 15-second snapshots. So near-sync is also built into five-five; it's a long-awaited feature. So then, we expand into the rest of the capabilities, I would say, operations. A lot of you guys obviously have started using Prism Pro. Okay, okay, you can clap.
You can clap. It's okay. It was a lot of work, by the way, by the core data path team; it took a lot of time. So Prism Pro ... I don't know if you guys know this, Prism Central has now gone from zero percent to more than 50 percent attach on the install base within 18 months. And normally that's a sign of true usage, and true value being delivered. And so, many things are new in five-five on Prism Pro, starting with the fact that you can do data [inaudible 01:11:49] baselining and alerting, so that you're not capturing a ton of false positives and tons of alerts. We go beyond that, because we have this core machine-learning technology power; we call it X-Fit. And what we've done is we've used that as a foundation now for pretty much all kinds of operations benefits, such as auto-RCA, where you're able to actually map a particular [inaudible 01:12:12] cross back to what's actually causing it, whether it's the network, the compute, and so forth. But then the last thing that we've also done in five-five now that's quite differentiating is the fact that you can now have a lot of these one-click recommendations and remediations, such as right-sizing, the fact that you can actually move around [inaudible 01:12:28] VMs, constrained VMs, and so forth. So, now we've packed a lot of functionality into Prism Pro, so why don't we spend a couple of minutes quickly giving a sneak peek into a few of those things. Steven: Yep, definitely. So here we're back at our Prism Central interface, and one of the things we've added here, if we take a look at one of our clusters, is this new anomalies portion here. So, let's go ahead and select that and hop into this. Now let's click on one of these anomaly events. Now, essentially what the system does is monitor all the entities and everything running within the system, and then based upon that, we can actually determine what we expect the band of values for these metrics to be.
So in this scenario, we can see we have a CPU usage anomaly event. So, normally, we'd expect this to be right around 86 to 100 percent utilization, but at this point we can see this has drastically dropped from 99 percent to near zero. So, this might be a point where, as an administrator, I want to go check out this virtual machine and ensure that certain services and applications are still up and running. Sunil: Gotcha, and then also it changes the baseline based on- Steven: Yep. Yeah, so essentially we apply machine-learning techniques to this, so the system will dynamically adjust the baseline based upon the values it observes. Sunil: Gotcha. What else? Steven: Yep. So the other thing here that we mentioned was capacity planning. So if we go over here, we can take a look at our runway. So in this scenario we have about 30 days' worth of runway, which is most constrained by memory. Now, obviously, more nodes is all good for everyone, but we also want to ensure that you get the maximum value on your investment. So here we can actually see a few recommendations. We have 11 overprovisioned virtual machines. These are essentially VMs which have more resources than are necessary. As well as 19 inactives; these are essentially dead VMs that haven't been powered on and aren't utilized. We can also see we have six constrained, as well as one bully. So, constrained VMs are essentially VMs which are requesting more resources than they actually have access to. This could be running at 100 percent CPU utilization, or 100 percent memory or storage utilization. So we could actually go in and modify these. Sunil: Gotcha. So these are all part of the auto-remediation capabilities that are now possible? Steven: Yeah. Sunil: What else, do you want to take reporting? Steven: Yeah. Yeah, so I know reporting is a very big thing. So if we think about it, we can't rely on an administrator to constantly go into Prism.
We need to provide some mechanism to allow them to get emailed reports. So what we've done is we actually autogenerate reports which can be sent via email. So we'll go ahead and add one of these sample reports, which was created today. And here we can actually get specific, detailed information about our cluster without actually having to go into Prism to get it. Sunil: And you can customize these reports and all? Steven: Yep. Yeah, if we hop over here and click on our new report, we can actually see a list of views we can add to these reports, and we can mix and match and customize as needed. Sunil: Yeah, so that's the operational side. Now we also have new services like AFS, which has been quite popular with many of you folks. We've had hundreds of customers already live on it with SMB functionality. You want to show a couple of things that are new in five-five? Steven: Yeah. Yep, definitely. So ... let's wait for my screen here. So one of the key things is, if we looked at that runway tab, what we saw is we had over a year's worth of storage capacity. So, what we saw is customers had the requirement for filers, they had some excess storage, so why not actually build a software filer natively into the cluster. And that's essentially what we've done with AFS. So here we can see we have our AFS cluster, and one of the key things is the ability to scale. So, this particular cluster has around 3.1, or 3.16 billion files running on this AFS cluster, as well as around 3,000 active concurrent sessions. Sunil: So basically thousands of concurrent sessions with billions of files? Steven: Yeah, and the nice thing is this is actually only a four-node Nutanix cluster, so as the cluster scales, these numbers will actually scale linearly as a function of those nodes. Sunil: Gotcha, gotcha. There's got to be one more bullet here on this slide, so what's it about?
Steven: Yeah so, obviously the initial use case was realistically for home folders as well as user profiles. That was a good start, but it wasn't the only thing. So what we've done is we've actually also introduced, in an important and upcoming release, NFS. So now you can use NFS to also interface with our [crosstalk 01:16:44]. Sunil: NFS coming soon with AFS, by the way; it's a big deal. Big deal. So one last thing, obviously, as you go operationalize it. We've talked a lot about features and functions, but one of the cool things that's always been seminal to this company is the fact that we aim for a really good customer service and support experience. Right now a lot of it is around the product, the people, the support guys, and so forth. So fundamental to the product is that we have found ways, using Pulse, to instrument everything. With Pulse HD, which has been around for a little bit longer now, we have fine-grained [inaudible 01:17:20] around everything that's being done. So if you turn on this functionality, you get a lot of information now that we've built and used when you make a phone call, or send an email, and so forth. There's a ton of context now available to support you guys. What we've now done is taken that and are now externalizing it for your own consumption, so that you don't have to necessarily call support. You can log in, look at your entire profile across your own alerts, your own advisories, your own recommendations. You can look at collective intelligence, coming soon, which is the fact that, look, here are 50 other customers just like you. These are the kinds of customers that are using workloads like you; what are their configuration profiles? Through this centralized customer insights portal you're going to get a lot more insight, not just about your own operations, but also about how everybody else is using it. So let's take a quick look at that upcoming functionality. Steven: Yep. Absolutely.
So this is our customer 360 portal. So as Sunil mentioned, as a customer I can actually log in here, I can get a high-level overview of my existing environment, my cases, the status of those cases, as well as any relevant announcements. So, here, based upon my cluster version, if there are any updates which are available, I can see that here immediately. And then one of the other things that we've added here is this insights page. So essentially this is information that previously support would leverage to proactively look at the cluster, but now we've exposed this to you as the customer. So, clicking on this insights tab, we can see an overview of our environment; in this case we have three Nutanix clusters, right around 550 virtual machines, and over here, what's critical is we can actually see our cases. And one of the nice things about this is these are all autogenerated by the cluster itself, so no human interaction, no manual intervention was required to actually create these alerts. The cluster itself will actually facilitate that, send it over to support, and then support can get back out to you automatically. Sunil: K, so look for customer insights coming soon. And obviously that's the full life cycle. One cool thing, though, that's always been unique to Nutanix was the fact that we had [inaudible 01:19:28] security from day one built in. And there's a [inaudible 01:19:31] chunk of functionality coming in five-five just around this, because every release we try to insert more and more security capabilities. And the first one is around data. What are we doing? Steven: Yeah, absolutely. So previously we had support for data-at-rest encryption, but this did have the requirement to leverage self-encrypting drives. These can be very expensive, so what we've done, typical to our fashion, is we've actually built this in natively via software.
So, here within Prism Element, I can go to data-at-rest encryption, and then I can go and edit this configuration here. From here I can add my CSRs, I can specify a KMS server, and leverage native software-based encryption without the requirement of SEDs. Sunil: Awesome. So data-at-rest encryption [inaudible 01:20:15] coming soon in five-five. Now, data security is only one element; the other element was around network security, obviously. We've always had this request about what we are doing about networking, and our philosophy has always been simple and clear, right? It is that the problem in networking is not the data plane. The problem in networking is the control plane. As in, if packet loss happens at the top-of-rack switch, what do we do? If there's a misconfigured port, what do we do? So we've invested a lot in a full-blown new network visualization that we'll show you a preview of, that's all new in five-five. But then once you can visualize, you can take action. So, using our network APIs now in five-five, you can auto-provision VLANs on the switch, you can update VIPs on your load-balancing pools, you can update, obviously, rules on your firewall. And then we've taken that to the next level, which is beyond all that: just go to AWS right now, what do you do? You take 100 VMs, you put them in an AWS security group, boom. That's how you get micro-segmentation. You don't need to buy expensive products, you don't need to virtualize your network to get micro-segmentation. That's what we're doing with five-five: built-in, one-click micro-segmentation. That's part of the core product, so why don't we just quickly show that. Okay? Steven: Yeah, let's take a look.
So if we think about where we've been so far: we've done the comparison test, we've done a migration over to Nutanix, we've deployed our new HR app, we've protected its data; now we need to protect the network. So one of the things you'll see that's new here is this security policies view. What we'll do is actually go ahead and create a new security policy, and we'll just say this is the HR security policy. We'll specify the application type, which in this case is HR. Sunil: HR, of course. Steve: Yep, and we can see our app instance is automatically populated, so based upon the number of running instances of that blueprint, that would populate that drop-down. Now we'll go ahead and click next here, and what we can see in the middle is essentially those three tiers that compose that app blueprint. Now one of the important things is actually figuring out what's trying to communicate with this within my existing environment. So if I take a look over here on my left-hand side, I can essentially see a few things. I can see an HAProxy load balancer trying to communicate with my app here; that's all good, I want to allow that. I can see some sort of monitoring service trying to communicate with all three of the tiers. That's good as well. Now the last thing I can see here is this IP address trying to access my database. Now, that's not by design and that's not supposed to happen, so what we'll do is actually take a look and see what it's doing. Now, hopping over to this virtual machine, the hack VM, what we can see is it's trying to perform a brute-force login attempt against my MySQL database. This is not good. We can see it can obviously connect on the socket; however, it hasn't guessed the right password. In order to lock that down, we'll go back to our policies here and we're going to click deny. Once we've done that, we'll click next and now we'll go to Apply Now.
Now we can see our newly created security policy, and if we hop back over to this VM, we can now see it's actually timing out, and what this means is that it's not able to communicate with that database virtual machine due to micro-segmentation actively blocking that request. Sunil: Gotcha. And when you go back to the Prism side, essentially what we're saying now is, it's as simple as that to set up micro-segmentation inside your existing clusters. So that's one-click micro-segmentation, right. Good stuff. One other thing before we let Steve walk off the stage and go to the bathroom: as you guys know, Steve spends a lot of time in the gym. You do. Right. He and I share cubes right beside each other, by the way, so if you ever come to San Jose, Nutanix corporate headquarters, you're always welcome; come to the fourth floor and you'll see Steve and Sunil beside each other. Most of the time I'm not in the cube; most of the time he's in the gym. If you go to his cube, you'll see all kinds of stuff. Okay. It's true, it's true. But the reason why I brought this up was that Steve recently became a father, his first kid. Oh, by the way, clicker, this is what his cube looks like. But he left his wife and his newborn kid to come over here to show us a demo, so give him a round of applause. Thank you, sir. Steve: Cool, thanks, Sunil. That was fun. Sunil: Thank you. Okay, so lots of good stuff. Please try out 5.5, and give us feedback as you always do. A lot of sessions, a lot of details; have fun, hopefully, for the rest of the day. To talk about how they're using Nutanix, here's one of our favorite customers and partners. He normally comes with sunglasses; I've asked him, since I have to be the best-looking guy on stage in my keynotes, to try to reduce his charm a little bit. Please come on up, Alessandro. Thank you. Alessandro R.: I'm delighted to be here, thank you so much.
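The policy built in the demo boils down to an allow-list of flows with a default deny, which is why the attacker's connection simply times out. A minimal sketch of that evaluation, with illustrative tier and VM names (not the actual Nutanix policy model):

```python
# Allowed (source, destination) flows from the demo: the HAProxy load
# balancer, the monitoring service, and the app's own tier-to-tier traffic.
ALLOW = {
    ("haproxy-lb", "web-tier"),
    ("monitoring", "web-tier"),
    ("monitoring", "app-tier"),
    ("monitoring", "db-tier"),
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
}

def evaluate(src: str, dst: str) -> str:
    # Anything not explicitly allowed is dropped, so the hack VM's
    # brute-force attempts against the database just time out.
    return "ALLOW" if (src, dst) in ALLOW else "DENY"

print(evaluate("haproxy-lb", "web-tier"))   # → ALLOW
print(evaluate("attacker-vm", "db-tier"))   # → DENY
```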
Sunil: Maybe we can stand here. Tell us a little bit about Leonardo. Alessandro R.: About Leonardo: Leonardo is a key actor in aerospace, defense and security systems. Helicopters, aircraft, defense systems, defense electronics, weapons, unfortunately, but it's also a global actor in the high-technology field. The security information systems division, which is the division I belong to, has 3,000 people located in Italy and the UK and several other countries in Europe and the U.S., with $1 billion of revenue. It has a long and deep experience in information technology, communications, automation, and logical and physical security, so we have quite a lot of experience to build on. I'm in charge of the security infrastructure business side, which is devoted to designing, delivering and managing secure infrastructure services and secure-by-design solutions and platforms. Sunil: Gotcha. Alessandro R.: That is it. Sunil: Gotcha. Some of your focus, obviously, in recent times has been delivering secure cloud services. Alessandro R.: Yeah, obviously. Sunil: Versus traditional infrastructure, right. How did Nutanix help you with some of that? Alessandro R.: I can tell you something about our recent experience with that. At the end of two thousand ... well, not so recent. Sunil: Yeah, yeah. Alessandro R.: At the end of 2014, we realized and understood that we had to move a step forward, a big step and a fast step, otherwise we would drown. At that time, our newly appointed CEO confirmed that IT would be a core business for Leonardo and had to be developed and grown. So we decided to start our digital transformation journey, and decided to do it in a structured and organized way, having our targets clear in mind. We launched two programs, one analysis program and one deployment program, that were essentially transformation programs.
We had to renew ourselves in terms of service models, in terms of organization, in terms of skills to invest in, and in terms of technologies to adopt. We were stuck with a stratification of technologies adopted by companies merged in the years before, and we had to move forward and rationalize all these things. So we spent a lot of time analyzing, comparing technologies, and evaluating what would fit us. We had two main targets. The first one: to consolidate and centralize the huge amount of services and infrastructure that were spread over 52 data centers in Italy, for Leonardo itself. The second one: to update our service catalog with a bunch of cloud services. So we decided to update our data centers, and one of the building blocks of our new data center architecture was Nutanix. We evaluated a lot, we had spent a lot of time in analysis, so it wasn't a bet, but you were quite the pioneers at that time. Sunil: Yeah, you took a lot of risk, right, as an Italian company- Alessandro R.: At that time, my colleagues used to say, "Hey, Alessandro, think it over; remember that no CEO has ever been fired for having chosen IBM." I apologize, Bob, but at that time Nutanix didn't run on [inaudible 01:29:27]. We still have a good bunch of [inaudible 01:29:31] in our data center, so that will be the chance to ... Audience Member: [inaudible 01:29:37] Alessandro R.: So much you must [inaudible 01:29:37] what you announced. Sunil: So you took a risk and you got into it. Alessandro R.: Yes, we got into it, and we are very satisfied with the results we have reached. Sunil: Gotcha. Alessandro R.: Most of the targets we expected to fulfill have been met, and so we are satisfied, but that doesn't mean that we won't go on asking you for a big discount ... Sunil: Sure, sure, sure, sure. Alessandro R.: On the price list. Sunil: Sure, sure. So what's next? I know there's some interesting stuff that you're thinking about.
Alessandro R.: The next? We have to move forward, obviously. The name Leonardo is inspired by Leonardo da Vinci; he was a man who, in terms of innovation and technological innovation, had some good ideas. And so I think that Leonardo, with Nutanix, can go on pursuing an innovation target and pursuing a really mutual ... Sunil: Partnership. Alessandro R.: Useful partnership, yes. We surely want to investigate the micro-segmentation technologies you showed a minute ago, because we are looking at them particularly from the economic point of view ... Sunil: Yeah, the costs and expenses. Alessandro R.: And we have to provide an alternative to the technology we are using. We want to use AHV more intensively, again as an alternative to the solution we are using; we are selecting a couple of services, a couple of quite big projects, to build using AHV. Talking of Calm, we are very eager to understand the announcements that you are going to show all of us, because the solution we are currently using is quite [crosstalk 01:31:30] Sunil: Complicated. Alessandro R.: Complicated, yeah. To move a step of automation, to elaborate and implement [inaudible 01:31:36], you spend 500 hours of manual activities; that's nonsense, so ... Sunil: Manual automation. Alessandro R.: (laughs) Yes. And in the end, we are also very interested in the Prism features, mostly the new features that you ... Sunil: Talked about. Alessandro R.: You showed yesterday in the preview, because any bit of benefit that we receive from the solution in the operations field means a plus, a distinctive plus, to our customers, so we are very interested in that ... Sunil: Gotcha, gotcha. Thanks for taking the risk, thanks for being a customer and partner.
Alessandro R.: It has been a pleasure. Sunil: Appreciate it. Alessandro R.: Bless you, bless you. Sunil: Thank you. So, you know, obviously one OS, one click was one of our core things. As you can see, the tagline doesn't stop there; it also says "any cloud". So that's the rest of the presentation right now: what are we doing to fulfill that mission of one OS, one cloud, one click, with one support experience, across any cloud, right? And there, you know, we talked about Calm. Calm is not only just an operational experience for your private cloud; as you can see, it's a one-click experience where you can actually up-level your apps, set up blueprints, put SLAs and policies on them, and push them down to your AWS, GCP, all your [inaudible 01:33:00] environments. And then, while on day one you can do one-click provisioning, on day two and beyond you will see newer and newer capabilities, such as one-click migration and mobility, seeping into the product. Because that's the end game for Calm: to actually be your cloud autonomy platform, right? So you can choose the right cloud for the right workload. To talk about how they're building a multi-cloud architecture using Nutanix, in partnership, it's a great pleasure to introduce my other good Italian friend Daniele; come up on stage please. From Telecom Italia Sparkle. How are you, sir? Daniele: Not too bad, thank you. Sunil: You want an espresso, cappuccino? Daniele: No, no, later. Sunil: You all good? Okay, tell us a little about Sparkle. Daniele: Yeah, Sparkle is a fully owned subsidiary of the Telecom Italia group. Sunil: Mm-hmm (affirmative) Daniele: Spun off in 2003 with the mission to develop the wholesale and multinational corporate and enterprise business abroad. A huge network, as you can see: hundreds of thousands of kilometers of fiber optics spread between Southeast Asia, Europe and the U.S. Most of it proprietary, part of it realized on submarine cables.
Part of them proprietary, part of them bilateral, part of them [inaudible 01:34:21] with other operators. 37 countries in which we have offices around the world, 700 employees; a lean and clean company ... Sunil: Wow, just 700 employees for all of this. Daniele: Yep, 1.4 billion in revenues per year, more or less. Sunil: Wow. Are you a public company? Daniele: No, fully owned by TIM so far. Sunil: So, what is your experience with Nutanix so far? Daniele: Well, in a way, similar to what Alessandro was describing. To operate such a huge network, as you saw before, and to keep bringing in revenues from the wholesale market, while trying to turn the bow toward the enterprise in a serious way, a couple of years ago the management team realized that we had to go through a serious transformation, not just technological but in terms of the way we build services for our customers, in terms of how we let our customers feel the Sparkle experience. So, we are moving toward cloud, but we are moving toward cloud with connectivity attached to it, because that's in our core as a provider of telecom services. The paradigm that is driving today is on-demand, is dynamic, and in order to get these things we need to move to software. Most of the network must become invisible, in the Nutanix way. So, we decided, instead of creating patchworks on top of our existing systems, infrastructure, OSS, BSS and network systems, to build a new data center from scratch. And the paradigm for this new data center, the mantra, was: everything is software-defined, everything must be easy to manage, performance and capacity planning, everything must be predictable, and everything is to be managed by few people. Nutanix is at the moment the baseline of this data center for, let's say, all the new networking tools, meaning the SDN controllers that are taking care of automation and programmability of the network.
Lifecycle service orchestrator, network orchestrator, cloud automation and brokerage platform; and everything at the moment runs on AHV, because we are forcing our vendors to certify their applications on AHV. The only stack that is not AHV-based at the moment is one specific cloud platform, because there we were really looking for the multi-[inaudible 01:37:05] things that you are announcing today. So, we hope to do the migration as soon as possible. Sunil: Gotcha, gotcha. And then, looking forward, you're going to build out some more data center space, expose these services Daniele: Yeah. Sunil: For the customers as well as your internal [crosstalk 01:37:21] Daniele: Yeah, basically, yes. For sure we are going to consolidate, to invest more in the data centers in the markets where we are the leader. In Italy, Turkey and Greece we have big data centers for [inaudible 01:37:33] and cloud, but we believe in the cloud, with all the issues discussed this morning by Dheeraj, such as locality, customer proximity ... We think, as a global player having more than 120 PoPs all over the world, which become more than 1,000 through partnerships, that a PoP can easily be transformed into a data center, so that we can push the customer experience of what we develop in our main data centers closer to them. So that we can combine traditional infrastructure-as-a-service with the new connectivity services, every single [inaudible 01:38:18], possibly everything running. Sunil: I mean, it makes sense. I think, essentially, in some ways, to summarize, it's the example of an edge cloud, where you're pushing a micro-cloud closer to the customer's edge. Daniele: Absolutely. Sunil: Great stuff, man. Thank you so much, thank you so much. Daniele: Pleasure, pleasure. Thank you.
Sunil: So, you know, a couple of other things before we get to the next demo. In addition to Calm for multi-cloud management, we have Xi, which we talked about for extended enterprise capabilities, and something for you guys to quickly understand is why we have done this. In a very simple way: if you think about your enterprise data center, clearly you have a bunch of apps there, and a bunch of public clouds. And when you look at the paradigm, you currently deploy traditional apps, we call them mode-one apps, SAP, Exchange and so forth, in your enterprise. Then you have next-generation apps, whether it be [inaudible 01:39:11]-based, whether it be Hadoop or whatever you want to call it; let's call them mode-two apps, right? And when you look at these two types of apps, most enterprises have a combination of mode-one and mode-two apps, while most public clouds, initially these days, are primarily focused on mode-two apps, right? And when people talk about app mobility, when people talk about cloud migration, they talk about lift and shift, forklift [inaudible 01:39:41]. And that's a hard problem. I mean, it's happening, but it's a hard problem, and it ends up that it's just not a one-time thing. Once you've forklifted, once you've moved, you have different tooling, a different operational support experience, different stacks. What if, for some of the applications that matter to you, that are your core enterprise apps, you could retain the same tooling, the same operational experience and so forth? That is what we aim to do with Xi. It is truly making hybrid invisible, which is the next act for this company. It'll take us a few years to really fulfill the vision here, but the idea is that you shouldn't think about public cloud as a different silo.
You should think of it as an extension of your enterprise data centers, for services such as DR, whether it be dev/test, whether it be backup, and so forth. You can use the same tooling, the same experience, and get a public cloud-like capability without lift and shift, right? So it's making this lift and shift invisible by, sort of, homogenizing the data plane, the network plane and the control plane; that's what we really want to do with Xi. Okay? And we'll show you some more details here. But the simplest way to understand this is: think of it as the iPhone, right? Dheeraj has mentioned this a little bit. This is how we built this experience. Apple used iOS as the core IP and wrapped it up in a great package called the iPhone. But then, a few years into the iPhone era, came iTunes and iCloud. There are no apps, per se; that's fused into iOS. And similarly, think about Xi that way. The more you move VMs into a Nutanix environment, the more stuff like DR comes burnt into the fabric. And to give us a sneak peek into a bunch of the Calm and Xi capabilities, let me bring back Binny, who's always a popular guy on stage. Come on up, Binny. I'd be surprised if Binny untucked his shirt; he's always tucking in his shirt. Binny Gill: Okay, yeah. Let's go. Sunil: So the first thing is Calm, and to show how we can actually deploy apps, not just across private and public clouds, but across multiple public clouds as well. Right? Binny Gill: Yeah. Basically, you know, Calm is about simplifying the disparity between the various public clouds out there. So it's very important for us to be able to take one application blueprint and then quickly deploy it in whatever cloud you choose, without understanding how one cloud is different. Sunil: Yeah, that's the goal. Binny Gill: So here, as you can see, I have the marketplace. And by the way, this marketplace draws a great deal of partner community interest, and every sort of app shows up here. Let me take a sample app here, Hadoop, and click launch.
And now, where do you want me to deploy? Sunil: Let's start with GCP. Binny Gill: GCP, okay. So I click on GCP, and let me give it a name: Hadoop-GCP, say 30, right. Create. So this is one-click deployment of anything from our marketplace onto a cloud of your choice. Right now, what the system is doing is taking the intent-filled description of what the application should look like, not just at the infrastructure level but also within the virtual machines, and creating the set of workflows that it needs to go deploy. So, as you can see, while we were talking, it's loading the application, making sure that the provisioning workflows are all set up. Sunil: And so this is actually, in real time, extracting out some of the GCP requirements. It's actually talking to GCP, setting up the constructs, so that we can actually push it up onto GCP properly. Binny Gill: Right. So it takes a couple of minutes; it'll provision. Let me go back and show you. Say you want to deploy on AWS: so, Hadoop-AWS. And that's it. So again, the same workflow. Sunil: Same process, I see. Binny Gill: It's now going to deploy on AWS. Sunil: See, one of the key things is that we actually abstracted out all the isms of each of these clouds into this logical substrate. Binny Gill: Yep. Sunil: That you can now piggyback off of. Binny Gill: Absolutely. And it makes it extremely simple for the average consumer. And, you know, we'll add more cloud support here over time. Sunil: Sounds good. Binny Gill: Now let me go back and show you an app that I had already deployed, some 13 days ago. It's on GCP. And essentially what I want to show you is the view of the application. Firstly, it shows you the cost summary: hourly, daily, and what the cost is going to look like. The other part is how you manage it: so, you know, one-click ways of upgrading, scaling out, starting, deleting, and so on.
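The "one blueprint, any cloud" idea Binny demos can be modeled as a single intent-filled description dispatched through small per-cloud drivers. The driver interfaces below are hypothetical illustrations of the pattern, not the real Calm, GCP or AWS APIs.

```python
from dataclasses import dataclass

@dataclass
class Blueprint:
    """Cloud-agnostic description of an app, e.g. the Hadoop marketplace item."""
    name: str
    services: tuple  # e.g. ("hadoop-master", "hadoop-slave")

class GCPDriver:
    cloud = "GCP"
    def provision(self, svc: str) -> str:
        # A real driver would call the GCE APIs here.
        return f"gce-instance:{svc}"

class AWSDriver:
    cloud = "AWS"
    def provision(self, svc: str) -> str:
        # A real driver would call the EC2 APIs here.
        return f"ec2-instance:{svc}"

def deploy(blueprint: Blueprint, driver) -> dict:
    # Same intent-filled description; only the provisioning workflow differs.
    return {svc: driver.provision(svc) for svc in blueprint.services}

bp = Blueprint("Hadoop", ("hadoop-master", "hadoop-slave"))
print(deploy(bp, GCPDriver()))
print(deploy(bp, AWSDriver()))
```

The design choice is the one Sunil names: the per-cloud "isms" live in the drivers, so the consumer only ever touches the blueprint.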
Sunil: So, common actions, but independent of the type of cloud. Binny Gill: Independent. And you can also add to these actions over time, right? Then, services: it's running two services, Hadoop slave and Hadoop master. And auditing: it shows you the important actions you've taken on this app. Not just, for example, on the IaaS front, which is, you know, how the VMs were created, but also, if you scroll down, how the application was deployed and brought up; you know, the slaves have to discover each other, and so on. Sunil: Yeah, got you. So, fine-grained visibility into whatever you were doing with clouds, because that's been one of the complaints in general: that the cloud abstractions have been pretty high-level. Binny Gill: Yeah. Sunil: Yeah. Binny Gill: Yeah. So that's how we make the differences between the public clouds all go away for the end users ... Sunil: Got you. So why don't we now give folks ... Now, a lot of this stuff is coming in 5.5, so you'll see it pretty soon. You'll get your hands around it, with AWS support and so forth. What we wanted to show you next is an emerging alpha version that is being baked; this is real production code for Xi. And why don't we just jump right into it, because we're running short of time. Binny Gill: Yep. Sunil: Give folks a flavor for the production-level code that is already being baked. Binny Gill: Right. So the idea of the design is to make sure that the public cloud is no longer any different from your private cloud; it's a true, seamless extension of your private cloud. Here I have my test environment. As you can see, I'm running the HR app; it has the DB tier and the web tier. Yeah. Alright? The DB tier is running Oracle DB; employee payroll is the web tier. And if you look at the availability zones that I have, this is my data center. Now I want to protect this application, right? From disaster. What do I do?
I need another data center. Sunil: Sure. Binny Gill: Right? With Xi, what we are doing is: you go here and click on Xi Cloud Services. Sunil: And essentially, as the slide says, you are adding AZs with one click. Binny Gill: Yep. So this is what I'm going to do. Essentially, you log in using your existing my.nutanix.com credentials. So here I'm going to use my guest credentials and log in. Now, while I'm logging in, what's happening is that we are creating a seamless network between the two sides, and then making the Xi cloud availability zone appear as if it were my own. Right? Sunil: Gotcha. Binny Gill: So in a couple of seconds, what you'll notice in this list is that I no longer have just one availability zone; another one appears. Sunil: So you have essentially, in real time now, paired your one data center with another availability zone. Binny Gill: Yep. Sunil: Cool. Okay. Let's see what else we can do. Binny Gill: So now, think about DR setup. Now that I'm armed with another data center, let's do DR. Now, DR setup is going to be extremely simple. Sunil: Okay, but that's also because it is the same stack on both sides, right? Binny Gill: It's the same stack on both sides. We have a secure network plane connecting the two sides, and on top of that secure network plane, data can flow back and forth. So now applications can go back and forth, securely. Sunil: Gotcha, okay. Let's look at one-click DR. Binny Gill: So for one-click DR setup, there are a couple of things we need to know. One is a protection rule: this is the RPO, where it applies to, and the direction of the replication. The other one is recovery plans: in case disaster happens, you know, how do I bring up my machines and applications, in what order, and so on. So let me first show you a protection rule, right? So here's the protection rule; I'll create one right now. Let me call it Platinum. Alright, and the source is my own data center.
The destination: you know, Xi appears now. Recovery point objective: so maybe every one hour these snapshots go to the public cloud. I want to retain three on the public side, three locally. And now I select the entities that I want to protect. Now, instead of naming VMs individually, what I can do is say app type employee payroll, app type Oracle database; that covers both the categories of the application tiers that I have. And save. Sunil: So one of the things here, by the way, I don't know if you guys have noticed this: more and more of Nutanix's constructs are being elevated to become app-centric, versus, of course, VM-centric. And essentially what that allows one to do is to create that as the new service-level API abstraction, so that, under the covers, over a period of time, it may be VMs today, maybe containers tomorrow, or functions the day after. Binny Gill: Yep. What I just did was all that needs to be done to set up replication from your own data center to Xi. So we went from having no second data center to replication actually happening. Sunil: Gotcha. Binny Gill: Okay? Sunil: No, no. You want to set up some recovery plans? Binny Gill: Yeah, so now let's set up a recovery plan. Recovery plans are going to be extremely simple. You select a bunch of VMs or apps, and then you can say what scripts you want to run, what order you want to boot things in, and, you know, you can test these things with one click, monthly or weekly and so on. Sunil: Gotcha. And that sets up the IPs as well as subnets and everything. Binny Gill: So you have the option: you can maintain the same IPs on-prem as they move to Xi, or you can make them- Sunil: Remember, you can maintain your own IPs when you actually use the Xi service. There were a lot of things that got done to accommodate that capability. Binny Gill: Yeah. Sunil: So let's take a look at some of- Binny Gill: You know, the same thing as VPCs, for example.
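The protection rule just configured — a one-hour RPO with three recovery points retained locally and three on the Xi side — amounts to a bounded snapshot schedule. Here is a toy retention model of that behavior; it is purely illustrative and not Nutanix's actual snapshot engine.

```python
from collections import deque

class ProtectionRule:
    """Toy model of an RPO-driven snapshot schedule with bounded retention."""
    def __init__(self, rpo_minutes: int = 60, retain_local: int = 3, retain_remote: int = 3):
        self.rpo_minutes = rpo_minutes
        # deque(maxlen=...) silently drops the oldest entry once full,
        # which is exactly the "retain three" behavior from the demo.
        self.local = deque(maxlen=retain_local)
        self.remote = deque(maxlen=retain_remote)

    def take_snapshot(self, t_minutes: int) -> None:
        snap = f"snap@{t_minutes}m"
        self.local.append(snap)    # keep the newest three locally
        self.remote.append(snap)   # replicate; keep the newest three on Xi

rule = ProtectionRule()
for t in range(0, 361, rule.rpo_minutes):  # six hours of hourly snapshots
    rule.take_snapshot(t)
print(list(rule.local))  # → ['snap@240m', 'snap@300m', 'snap@360m']
```

Selecting entities by category ("app type employee payroll") rather than by VM name would simply mean the rule matches a label, not a list, so newly created VMs with that label are picked up automatically.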
Binny Gill: You need those on Xi. So, let's create a recovery plan. For a recovery plan, you select the destination: where the recovery happens. Now, after that, you have to think of the runbook that you want to run when disaster happens, right? So you're preparing for that; let me call it "HR App Recovery." The next thing is the first stage. For the first stage, let me add some entities by category. I want to bring up my database first, right? Let's click on the database, and that's it. Sunil: So essentially, you're building the script now- Binny Gill: Building the script- Sunil: ... on the [inaudible 01:50:30] Binny Gill: ... but in a visual way that's simple for folks to understand. You can add a custom script, add a delay, and so on. Let me add another stage, and this stage is about bringing up the web tier after the database is up. Sunil: So basically, bring up the database first, then bring up the web tier, et cetera, et cetera, right? Binny Gill: That's it. I've created a recovery plan. I mean, usually this is complicated stuff, but we've made it extremely simple. Now, if you click on "Recovery Points," these are snapshots, snapshots of your applications. As you can see, the system has already taken three snapshots in response to the protection rule that we created just a couple of minutes ago, and these are now being seeded to Xi data centers. Of course, this seeding takes time, so what I have is a setup already prepared, and that's the production environment. I'll cut over to that. This is my production environment. Click "Explore"; now you see the same application running in production, and I have a few other VMs that are not protected. Let's go to "Recovery Points."
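The recovery plan built above — database tier first, then the web tier — is an ordered runbook of stages. A minimal sketch of that executor, with illustrative entity names (the real plan also supports custom scripts and delays between stages, which are omitted here):

```python
from typing import List

def run_recovery_plan(stages: List[List[str]]) -> List[str]:
    """Power on entities stage by stage, strictly in order."""
    booted: List[str] = []
    for stage in stages:
        # A real runbook would rehydrate each VM from its recovery point,
        # power it on, and optionally run scripts or wait before continuing.
        for vm in stage:
            booted.append(vm)
    return booted

plan = [
    ["oracle-db"],             # stage 1: bring up the database
    ["employee-payroll-web"],  # stage 2: then the web tier
]
print(run_recovery_plan(plan))  # → ['oracle-db', 'employee-payroll-web']
```

Because stages reference categories rather than VM names, the same runbook later handles four VMs instead of three on failback without being edited.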
It has been running for some time; those recovery points are there, and they have been replicated to Xi. Sunil: So let's do the failover then. Binny Gill: Yeah. To fail over, you'll have to go to Xi, so let me log in to Xi. This time I'll use my production account for logging into Xi. I'm logging in. The first thing that you'll see in Xi is a dashboard that gives you a quick summary of what your DR testing has been so far, whether there are any issues with your replication, and, most importantly, the monthly charges. So right now I've spent, with my own credit card, close to 1,000 bucks. You'll have to refund it quickly. Sunil: It depends. If the- Binny Gill: If this works- Sunil: If the demo works. Binny Gill: Yeah, if it works, okay. As you see, there are no VMs here right now. If I go to the recovery points, they are there. I can click on the recovery plan that I had created, and let's see how hard it's going to be. I click "Failover." It says three entities that, based on the snapshots, it knows it can recover from source to destination, which is Xi. And one click for the failover. Now we'll see what happens. Sunil: So this is essentially failing over my production now. Binny Gill: Failing over your production now. [crosstalk 01:52:53] If you click on the "HR App Recovery," you can see it has now started the recovery plan. The simple recovery plan that we created actually gets converted into a series of tasks that the system has to do: each VM has to be hydrated, powered on in the right order, and so on and so forth. You don't have to worry about any of that. You can keep an eye on it, but in the meantime, let's talk about something else. We are doing a failover, but after you fail over, you run in Xi as if it were your own setup and environment. Maybe I want to create a new VM. I create a VM, and I want to maybe extend my HR app's web tier. Let me name it "HR_Web_3." It's going to boot from that disk.
Production network: I want to run it on the production network. We have production and test categories; this one, I want to give the employee payroll category. Now it gets the same policies applied as its peers do. Here, I'm going to create the VM. As you can see, I can already see some VMs coming up. There you go. So three VMs from on-prem are now being failed over here, while the fourth VM that I created is already being powered on. Sunil: So this is basically real-time, one-click failover, while you're using Xi for your [inaudible 01:54:13] operations as well. Binny Gill: Exactly. Sunil: Wow. Okay. Good stuff. What about- Binny Gill: Let me add something here. The other cloud vendors will ask you to make your apps ready for their clouds. Well, what we tell our engineers is: make our cloud ready for your apps. So, as you can see, this failover is working. Sunil: So what about failback? Binny Gill: All of them are up, and you can see the protection rule "Platinum" has been applied to all four. Now let's look at the recovery points for this recovery plan: "HR_Web_3" is right here. Now assume the on-prem side has come back up. Let's go back to on-prem- Sunil: So now the scenario, while Binny's bringing it up, is that on-prem has come back up and we're going to do live migration back, as in a failback scenario, between the data centers. Binny Gill: And how hard is it going to be? "HR App Recovery," the same "HR App Recovery": I click failover, and the system is smart enough to understand that the direction is reversed. It's also smart enough to figure out, "Hey, there are now four VMs instead of three." Xi to on-prem, one-click failover again. Sunil: And it's rerunning obviously the same runbook but in- Binny Gill: The same runbook, but the details are different. That's hidden from the customer, though. Let me go to the VMs view and do something interesting here. I'll group them by availability zone. Here you go. As you can see, this is a hybrid cloud view.
Same management plane for both sides, public and private. There are two availability zones; the Xi availability zone is in the cloud-
Speaker 2: So essentially you're moving from the top-
Speaker 1: Yeah, top-
Speaker 2: ... to the bottom.
Speaker 1: ... to the bottom.
Speaker 2: That's happening in the background. While this is happening, let me take the time to go and look at billing in Xi.
Speaker 1: Sure, some of the common operations that you can now see in a hybrid view.
Speaker 2: So you go to "Billing" here, and first let me look at my account. The account is a simple page: I have set up Active Directory, and you can add your own XML file and upload it. You can also add multi-factor authentication; all those things are simple. On the billing side, you can see more details about how I racked up $966. Here's my credit card. Detailed description of where the cost is coming from. I can also download previous bills.
Speaker 1: It's actually Nutanix as a service essentially, right?
Speaker 2: Yep.
Speaker 1: As a subscription service.
Speaker 2: Not only that. Going back to on-prem, as you can see, while we were talking, two VMs have already come back on-prem. They are powered off right now. The other two are on the wire. Oh, there they are.
Speaker 1: Wow.
Speaker 2: So now four VMs are there.
Speaker 1: Okay. Perfect. Sometimes it works, sometimes it doesn't work, but it's good.
Speaker 2: It always works.
Speaker 1: Always works. All right.
Speaker 2: As you can see, the platinum protection rule is now already applied to them, and now it has reversed the direction of [inaudible 01:57:12]-
Speaker 1: Remember, we showed one-click DR, failover and failback, built into the product when Xi ships, on any Nutanix fabric. You can start with ESX on-premises; when you fail over to Xi, you land on AHV. Things like that are going to take the same paradigm of one-click operations into this hybrid view.
Speaker 2: Let's stop doing lift and shift.
The era has come for click and shift.
Speaker 1: Binny's now been promoted to the Chief Marketing Officer, too, by the way. Right? So, one more thing.
Speaker 2: Okay.
Speaker 1: You know we don't close any conferences without a couple of things that are new. The first one is something that we should have done, I guess, a couple of years ago.
Speaker 2: It depends how you look at it. Essentially, if you look at the cloud vendors, one of the key things they have done is they've built services as building blocks for the apps that run on top of them. What we have done at Nutanix, we've built core services like block services, file services, and now, with Calm, a marketplace. Now if you look at [inaudible 01:58:14] applications, one of the core building pieces is the object store. I'm happy to announce that we have the object store service coming up. Again, in true Nutanix fashion, it's going to be elastic.
Speaker 1: Let's-
Speaker 2: Let me show you.
Speaker 1: Yeah, let's show it. It's an object store service, by the way, that's not just for your primary, but for your secondary. And it's obviously not just for on-prem, it's hybrid. So this is being built as a next-gen object service, as an extension of the core fabric, but accommodating a bunch of these new paradigms.
Speaker 2: Here is the object browser. I've created a bunch of buckets here. Again, object stores can be used in various ways: as a primary object store, or for secondary use cases. I'll show you both: a Hadoop use case where Hadoop is using this as a primary store, and a backup use case. Let's just jump right in. This is a Hadoop bucket. As you can see, there's a temp directory; there's nothing interesting there. Let me go to my Hadoop VM. There it is. And let me run a Hadoop job. This Hadoop job is essentially going to create a bunch of files, write them out and after that run MapReduce on top. Let's wait for the job to start. It's running now.
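Hadoop talks to S3-compatible storage through its S3A connector, so pointing a job at an on-prem object store like the one in this demo is typically just client configuration. A hedged sketch of what that core-site.xml wiring might look like; the endpoint address and credentials below are placeholder assumptions, not values from the demo:

```xml
<!-- Hypothetical core-site.xml fragment: point Hadoop's S3A connector
     at an S3-compatible object store instead of AWS itself. -->
<configuration>
  <property>
    <!-- placeholder endpoint for the on-prem object store -->
    <name>fs.s3a.endpoint</name>
    <value>http://objects.example.internal:7200</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>PLACEHOLDER_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>PLACEHOLDER_SECRET_KEY</value>
  </property>
  <property>
    <!-- many S3-compatible stores require path-style addressing -->
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
</configuration>
```

With that in place, jobs address the store through ordinary `s3a://` URIs, for example `hadoop fs -ls s3a://hadoop-bucket/benchmarks`.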
If we go back to the object store and refresh the page, now you see it's writing to a benchmarks directory; there's a bunch of files that it will write here over time. This is going to take time, so let's not wait for it, but essentially it is showing that Hadoop, which uses the AWS S3-compatible API, can run with our object store, because our object store exposes AWS S3-compatible APIs. The other use case is the HYCU backup.
Vineet: As you can see, that's backup software that can back up to AWS S3. If you point it to Nutanix objects, it can back up there as well. There are a bunch of backup files in there. Now, with object stores, it's very important for us to be able to view what's going on there and make sure there's no object sprawl, because once it's easy to write objects, you just accumulate a lot of them. So what we wanted to do, in true Nutanix style, is give you a quick overview of what's happening with your object store. So here, as you can see, you can look at the buckets, where the load is, you can look at the bucket sizes, where the data is, and also what kind of data is there. Now this is a dashboard that you can optimize and customize for yourself as well, right? So that's the object store. Then we go back here, and I have one more thing for you as well.
Speaker 2: Okay. Sounds good. I already clicked through a slide, by the way, by mistake, but keep going.
Vineet: That's okay. That's okay. It is actually a quiz, so it's good for people-
Speaker 2: Okay. Sounds good.
Vineet: It's good for people to have some clues. So the quiz is, how big is my SAP HANA VM, right? I have to show it to you before you can answer, so I don't leak the question. Okay. So here it is. The SAP HANA VM here has 96 vCPUs. Pretty beefy. Memory is 1.5 terabytes.
The question to all of you is, what's different in this screen?
Speaker 2: Who's a real Prism user here, by the way? Come on, it's got to be at least a few. Those guys. Let's see if they'll notice something.
Vineet: What's different here?
Speaker 3: There's zero CVM.
Vineet: Zero CVM.
Speaker 2: That's right. Yeah. Yeah, go ahead.
Vineet: So, essentially, in the Nutanix fabric, every server has to run a [inaudible 02:01:48] machine, right? That's where the storage comes from. I am happy to announce the Acropolis Compute Cloud, where you will be able to run AHV on servers that are storage-less and add them to your existing cluster. So it's a compute cloud that can now be managed from Prism Central, and that way you can preserve your investments in your existing server farms and add them to the Nutanix fabric.
Speaker 2: Gotcha. So, essentially, imagine that you now have the equivalent of S3 and EC2 for the enterprise on premises, like you have the equivalent compute and storage services on GCP and AWS, and so forth, right? So the full flexibility for any kind of workload is now truly available on the same Nutanix fabric. Thanks a lot, Vineet. Before we wrap up, I'd like to bring this home. We've announced a pretty strategic partnership with someone that has always inspired us, for many years. In fact, one would argue that the genesis of Nutanix was actually inspired by Google. We've spent a lot of time in the last few months to really get into the product capabilities, and you're going to see some upcoming capabilities in the 5.5 release time frame. To talk more about that, as well as some of the long-term synergies, let me invite Bill onstage. C'mon up, Bill. Tell us a little bit about Google's view in the cloud.
Bill: First of all, I want to compliment the demo people on what you did.
Phenomenal work that you're doing to make very complex things look really simple. I actually started several years ago as a product manager in high availability and disaster recovery, and I remember, as a product manager, my engineers coming to me and saying, "We have a shortage of engineers and we want you to write the failover routines for the SAP instance that we're supporting." And so here's the Perl handbook, you know, I haven't written in Perl yet, go and do all that work, including all the network setup. So all that work that you're doing right there is amazing, and I think that's the spirit of the partnership that we have. From a Google perspective, obviously what we believe is that it's time now to harness the power of scale, security and these innovations that are coming out. At Google we've spent a lot of time trying to solve these really large problems at scale, and a lot of that technology has since been inserted into the industry. Things like MapReduce, things like TensorFlow algorithms for AI, and things like Kubernetes and Docker were first invented at Google to solve problems, because we had to do it to be able to support the business we have. You think about search, alright? When you type search terms into the search box, you see a white screen; what I see is all the data-center work that's happening behind that and the MapReduce work needed to give you a search result back in seconds. Think about that work, think about that process. Taking and parsing those search terms, dividing that over thousands of [inaudible 02:05:01], being able to then search segments of the index of the internet, and being able to intelligently reduce that to get you an answer within seconds that is prioritized and sorted. How many of you out there have to go to page two and page three to get the results you want today? You don't, because of the power of that technology.
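The search pipeline Bill outlines, parsing terms, fanning the work out across many machines, then combining the partial results into one ranked answer, is the map/shuffle/reduce pattern. A toy, in-process sketch of those three phases follows; real MapReduce distributes them across thousands of nodes, and word counting stands in here for the much richer index lookups search actually performs:

```python
# Toy illustration of map/shuffle/reduce: each phase below would run on
# many machines in a real deployment; here they run in a single process.
from collections import defaultdict

def map_phase(docs):
    # Each "mapper" emits (word, 1) pairs for its shard of the input.
    return [(word, 1) for doc in docs for word in doc.split()]

def shuffle_phase(pairs):
    # Group every value emitted under the same key together.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Each "reducer" combines the values for one key into a final result.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle_phase(map_phase(["to be", "or not to be"])))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

The point of the pattern is that the map and reduce steps are independent per shard and per key, which is what lets the work spread over thousands of machines and still come back in seconds.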
We think it's time to bring that to the consumer of the data center enterprise space, and that's what we're doing at Google.
Speaker 2: Gotcha, man. So I know we've done a lot of things over the last year of collaboration. Why don't we spend a few minutes talking through a couple of things that we've started on, starting with [inaudible 02:05:36] going into Calm, and then we'll talk a little bit about Xi.
Bill: I think one of the advantages here, as we start to move up the stack and virtualize things, to your point, is that virtual machines and the work required of them still take a fair amount of effort, which you're doing a lot to reduce; you're making that a lot simpler and seamless across both on-prem and the cloud. The next step in the journey is to really leverage the power of containers: lightweight objects that allow you to surface functionality without being dependent upon the operating system or the VM to do that work, and then having the orchestration layer to run that in the context of cloud and on-prem. We've been very successful in building out the Kubernetes and Docker infrastructure for everyone to use. The challenge that you're solving is how do we actually bridge the gap, how do we actually make that work seamlessly between the on-premises world and the cloud, and that's where our partnership, I think, is so valuable. It's because you're bringing the secret sauce to be able to make that happen.
Speaker 2: Gotcha, gotcha. One last thing. We talked about Xi, and the two companies are working really closely where, essentially, the Nutanix fabric can seamlessly seep into every Google platform as infrastructure worldwide. Xi, as a service, could be delivered natively with GCP, leading to some additional benefits, right?
Bill: Absolutely. I think, first and foremost, the infrastructure we're building at scale opens up all sorts of possibilities. I'll just use, maybe, two examples. The first one is network.
If you think about building out a global network, there's a lot of effort to do that. Google is doing that as a byproduct of serving our consumers. So, if you think about YouTube, there are approximately a billion hours of YouTube watched every single day. If you think about search, we have approximately two trillion searches done in a year, and if you think about the number of containers that we run in a given week, we run about two billion containers per week. So the first advantage of being able to move these workloads through Xi in a disaster recovery scenario is that you get to take advantage of that scale. Secondly, because of the network that we've built out, we had to push the network out to the edge. Every single one of our consumers is using YouTube and search and Google Play and all those services; by the way, we have over eight services today that have more than a billion simultaneous users. You get to take advantage of that network capacity and capability just by moving to the cloud. And then the last piece, which is a real advantage, we believe, is that it's not just about the workloads you're moving but about getting access to new services that cloud providers, like Google, provide. For example, are you taking advantage of the next-generation Hadoop, which is our BigQuery capability? Are you taking advantage of the artificial intelligence derivative APIs that we have around the video API, the image API, the speech-to-text API, mapping technology? All those additional capabilities are now exposed to you in Google Cloud, and you can leverage them directly from systems that are failing over and systems that are running in our combined environment.
Speaker 2: A true converged fabric across public and private.
Bill: Absolutely.
Speaker 2: Great stuff, Bill. Thank you, sir.
Bill: Thank you, appreciate it.
Speaker 2: Good to have you. So, the last few slides.
You know, we've talked about, obviously, One OS, One Click, Any Cloud. At the end of the day, it's pretty obvious that we're evolving from a form factor perspective, where it's not just an OS across multiple platforms but also how it's being distributed: from consuming it as an appliance, to a software form factor, to a subscription form factor. What you saw today, obviously, is the fact that the velocity has not slowed down. In fact, in some cases it's accelerated. If you ask my quality guys, if you ask some of our customers, we're coming out fast and furious with a lot of these capabilities. And some of this reflects directly, not just in features, but also in performance, just like a public cloud, where our performance curve is going up while our price-performance curve is becoming more attractive over time. And balancing this with quality is what differentiates great companies from good companies, right? So when you look at the number of nodes that have been shipping, it is around ten times more than where we were a few years ago. But if you look at the number of customer-found defects as a percentage of the number of nodes shipped, it has not only stabilized, it has actually been coming down. And that's directly reflected in the NPS score, which most of you guys love. How many of you guys love your customer support engineers? Give them a round of applause. Great support. So this balance of velocity plus quality is what differentiates a company. And, before we call it a wrap, I just want to leave you with one thing. You know, obviously, we've talked a lot about technology, innovation, inspiration, and so forth. But, as I mentioned from last night's discussion with Sir Ranulph, let's think about a few things tonight. Don't take technology too seriously. I'll give you a simple story that he shared with me that puts things into perspective. The year was 1971.
He had come back from Oman, from his service. He was figuring out what to do. This was before he became a world-class explorer. In 1971, he had a job interview: he came down from Scotland and applied for a role in a movie. And he failed that job interview. But he had been selected from thousands of applicants, came down to a shortlist, he was a ... that's a hint ... he was a good-looking guy, and he lost out on that role. And the reason why I say this is, if he had gotten that job, first of all I wouldn't have met him, but most importantly the world wouldn't have had an explorer like him. The guy he lost out to was Roger Moore, and the role was James Bond. And so, when you go out tonight, enjoy with your friends, [inaudible 02:12:06] or otherwise, and try not to take life too seriously, once in a while or more than once in a while. Have fun, guys. Thank you.
Speaker 5: Ladies and gentlemen, please make your way to the coffee break; your breakout sessions will begin shortly. Don't forget about the women's lunch today; everyone is welcome. Please join us. You can find the details in the mobile app. Please share your feedback on all sessions in the mobile app. There will be prizes. We will see you back here at 5:30; doors will open at 5, after your last breakout session. Breakout sessions will start sharply at 11:10. Thank you and have a great day.

Published Date : Nov 9 2017

