

Pete Lumbis, NVIDIA & Alessandro Barbieri, Pluribus Networks


 

(upbeat music)

>> Okay, we're back. I'm John Furrier with theCUBE and we're going to go deeper into a deep dive into the unified cloud networking solution from Pluribus and NVIDIA. And we'll examine some of the use cases with Alessandro Barbieri, VP of product management at Pluribus Networks, and Pete Lumbis, the director of technical marketing at NVIDIA, joining remotely. Guys, thanks for coming on, appreciate it.

>> Yeah, thanks a lot.

>> I'm happy to be here.

>> So a deep dive, let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working on together. What is it?

>> Yeah, first let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping in volume, in multiple mission critical networks, its Netvisor ONE network operating system. It runs today on merchant silicon switches and effectively it's a standards-based open network operating system for the data center. And the novelty about this operating system is that it integrates a distributed control plane to automate, effectively, an SDN overlay. This automation is completely open and interoperable and extensible to other types of clouds. It's not closed. And this is actually what we're now porting to the NVIDIA DPU.

>> Awesome, so how does it integrate into NVIDIA hardware, and specifically how is Pluribus integrating its software with the NVIDIA hardware?

>> Yeah, I think we leverage some of the interesting properties of the BlueField DPU hardware, which actually allows us to integrate our network operating system in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also independently manage this network node, this switch-on-a-NIC effectively, completely independently from the host. You don't have to go through the network operating system running on X86 to control this network node. So you truly have the experience, effectively, of a top of rack for virtual machines or a top of rack for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, now we are connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. And also as part of this integration, we put a lot of effort, a lot of emphasis, in accelerating the entire data plane for networking and security. So we are taking advantage of the NVIDIA DOCA API to program the accelerators. And you accomplish two things with that. Number one, you have much better performance than running the same network services on an X86 CPU. And second, this gives you the ability to free up, I would say, around 20, 25% of the server capacity to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So great efficiencies in the overall approach.

>> And this is completely independent of the server CPU, right?

>> Absolutely, there is zero code from Pluribus running on the X86. And this is why we think this enables a very clean demarcation between compute and network.

>> So Pete, I got to get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important, because everyone's talking DevSecOps, right? Now you've got NetSecOps. This separation, why is this clean separation important?

>> Yeah, I think it's a pragmatic solution, in my opinion. We wish the world was all kind of rainbows and unicorns, but it's a little messier than that. And I think with a lot of the DevOps stuff and that mentality and philosophy, there's a natural fit there. You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance. And I think that distance isn't going to be closed, and so, again, it comes down to pragmatism. And I think one of my favorite phrases is, look, good fences make good neighbors. And that's what this is.

>> Yeah, and it's a great point 'cause DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that, because this is really where the action is right now?

>> Yeah, exactly. And I think that's where, one, the policy, the security, the zero trust aspect of this comes in, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security's part of that. But the other part is thinking about this at scale, right? So we're taking one top of rack switch and adding up to 48 servers per rack. And so that ability to automate, orchestrate and manage at scale becomes absolutely critical.

>> Alessandro, this is really the why we're talking about here, and this is scale. And again, getting it right. If you don't get it right, you're going to be really kind of up you know what. So this is a huge deal. Networking matters, security matters, automation matters, DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now.

>> Yeah, absolutely. So I think here with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about what we are really unifying. If we're unifying something, something must be at least fragmented or disjointed. And what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf and spine topologies. This is actually a well understood problem, I would say. There are multiple vendors with, let's say, similar technologies, very well standardized, very well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services are actually now moved into the compute layer, where cloud builders have to instrument a separate network virtualization layer where they deploy segmentation and security closer to the workloads. And this is where the complications arise. This high value part of the cloud network is where you have a plethora of options that don't talk to each other and are very dependent on the kind of hypervisor or compute solution you choose. For example, the networking APIs between an ESXi environment, a Hyper-V or a Xen environment are completely disjointed. You have multiple orchestration layers. And then when you throw in also Kubernetes in this type of architecture, you are introducing yet another level of networking. And when Kubernetes runs on top of VMs, which is a prevalent approach, you actually stack multiple networks on the compute layer that eventually run on the physical fabric infrastructure. Those are all ships in the night effectively, right? They operate as completely disjointed. And we're trying to tackle this problem first with the notion of a unified fabric which is independent from any workload, whether this fabric spans on a switch, which can be connected to a bare metal workload, or spans all the way inside the DPU, where you have your multi-hypervisor compute environment. It's one API, one common network control plane and one common set of segmentation services for the network. That's problem number one.

>> It's interesting, I hear you talking and I hear one network among different operating models. Reminds me of the old serverless days. There's still servers but they call it serverless. Is there going to be a term network-less? Because at the end of the day it should be one network, not multiple operating models. This is a problem that you guys are working on, is that right? I'm just joking, serverless and network-less, but the idea is it should be one thing.

>> Yeah, effectively what we're trying to do is we're trying to recompose this fragmentation in terms of network operations across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that sort of operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services or even encryption. Those are all capabilities enabled by the BlueField DPU technology, and we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical. That is typically the way people today segment and secure the traffic in the cloud.

>> Awesome. Pete, all kidding aside about network-less and serverless, kind of a fun play on words there, the network is one thing, it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail the depth of why a DPU-based approach is better than alternatives?

>> Yeah, I think what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So it's the, yo dog, I heard you like a server so I put a server inside your server. And so we provide ARM CPUs, memory and network accelerators inside, and that is completely isolated from the host. The actual X86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top of rack switch and shoving it inside of your compute node. And so you have not only this separation within the data plane, but you have this complete control plane separation, so you have this element that the network team can now control and manage, but we're taking all of the functions we used to do at the top of rack switch and we're distributing them now. And as time has gone on we've struggled to put more and more and more into that network edge. And the reality is the network edge is the compute layer, not the top of rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances. And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that the VLAN's good enough, or we hope that the VXLAN tunnel's good enough, and we can't actually apply more advanced techniques there because we can't financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it.

>> So what's in it for the customer, real quick, and I think this is an interesting point, you mentioned policy. Everyone in networking knows policy is just a great thing. And you hear it being talked about up the stack as well, when you start getting to orchestrating microservices and whatnot, all that good stuff going on there, containers and whatnot and modern applications. What's the benefit to the customers with this approach, because what I heard was more scale, more edge deployment flexibility relative to security policies and application enablement? What's the customer get out of this architecture? What's the enablement?

>> It comes down to taking, again, the capabilities that were in that top of rack switch and distributing them down. So that makes for simplicity, smaller blast radius for failures, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, we always want to kind of separate each one of those layers, so just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together. I can now do this at a different layer, and so you can run a DPU with any networking in the core there. And so you get this extreme flexibility. You can start small, you can scale large. To me the possibilities are endless.

>> It's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandro, this is huge upside, right? You've already identified some successes with some customers on your early field trials. What are they doing and why are they attracted to the solution?

>> Yeah, I think the response from customers has been the most encouraging and exciting for us to sort of continue and work and develop this product. And we have actually learned a lot in the process. We talked to tier two, tier three cloud providers. We talked to SPs, sort of Telco type of networks, as well as to large enterprise customers. Let me call out a couple of examples here just to give you a flavor. There is a cloud provider in Asia who is actually managing a cloud where they're offering services based on multiple hypervisors. Their native services are based on Xen, but they also onboard into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. And they have the problem of now orchestrating, through their orchestrator, integrating with Xen Center, with vSphere, with OpenStack, to coordinate these multiple environments. And in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost and complication and eats into the server CPU. The promise that they saw in this technology, which they actually call game changing, is to remove all this complexity, having a single network, and distribute the micro-segmentation service directly into the fabric. And overall they're hoping to get out of it a tremendous OPEX benefit and overall operational simplification for the cloud infrastructure. That's one important use case. Another global enterprise customer is running both ESXi and Hyper-V environments, and they don't have a solution to do micro-segmentation consistently across hypervisors. So again, micro-segmentation is a huge security driver. Looks like it's a recurring theme talking to most of these customers. And in the Telco space, we're working with a few Telco customers on the early field trial (EFT) program, where the main goal is actually to harmonize network operation. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex. It is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the Telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability offered by the BlueField DPUs. Those are just some examples.

>> That was a great use case. A lot more potential, I see that, with the unified cloud networking, great stuff. Pete, shout out to you guys at NVIDIA, we've been following your success for a long time and continuing to innovate as cloud scales, and Pluribus with unified networking kind of bringing it to the next level. Great stuff, great to have you guys on, and again, software keeps driving the innovation, and again, networking is just a part of it and it's the key solution. So I got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more, because this is an architectural shift? People are working on this problem, they're trying to think about multiple clouds, they're trying to think about unification around the network and giving more security, more flexibility to their teams. How can people learn more?

>> Yeah, so Alessandro and I have a talk at the upcoming NVIDIA GTC conference. So it's the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc. You can also watch recorded sessions if you end up watching this on YouTube a little bit after the fact. And we're going to dive a little bit more into the specifics and the details and what we're providing in the solution.

>> Alessandro, how can people learn more?

>> Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and they can fill out the form and contact Pluribus to either learn more or actually sign up for the early field trial program, which starts at the end of April.

>> Okay, well, we'll leave it there. Thank you both for joining, appreciate it. Up next you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)

Published Date : Mar 16 2022



Tom Anderson, Joe Fitzgerald & Alessandro Perilli, Red Hat | AnsibleFest 2021


 

(cheerful music) >> Hello everyone, welcome to theCUBE's coverage of AnsibleFest 2021, with Red Hat. Topic of this power panel is the future of automation, we've got a great lineup of CUBE alumni, Joe Fitzgerald, vice president, general manager of the Red Hat business unit, thanks for coming on, Tom Anderson, vice president, product manager of Red Hat, and Alessandro Perilli, the senior director of product market at Red, all good CUBE alumni. Distinct power panel, Joe we'll start out with you, what have you seen in automation game right now, 'cause it continues to evolve. I mean you can't go to an event, a virtual event, or read anything online without hearing AI automation, automation hybrid, automation hybrid hybrid hybrid hybrid, I mean automation is the top conversation in almost all verticals. What do you see happening right now? >> Yeah, it's sort of amazing, you know? Automation is quite fashionable these days, as you pointed out. Automation's always been on the radar of a lot of enterprises, and I think it was always perceived as sort of like that, an efficiency, a task model thing, that people did. Now automation is, if you believe some of the analysts, it's up to a board room imperative in some cases. So we are seeing with our customers that the level of complexity they're dealing with, particularly exaggerated by what's gone past year and a half in the world, is putting a tremendous amount of pressure, attention and importance on automation. So automation's definitely one of the busiest places to be right now. >> What's the big change this year, though? I mean we love the automation conversation, we had it last year a lot too, as well. What's the change, what's the trend right now that's driving this next level automation conversation with customers? >> Well, I'll ask my colleagues to comment on that in a second, but, the challenges here with automation, is people are constrained now, they can't access facilities as easy as they used to be able to. They still need to go fast, some businesses have had to expand dramatically, and introduce new services to handle all sorts of new scenarios, they've had to deploy things faster. Security, not a week goes by you don't read about something going on regarding security and breaches and hacking and things like that, so they're trying to secure things as fast as possible, right, and deploy critical fixes and patches and things like that. So there's just tremendous amount of activity, that's really been exaggerated by what's gone on over the past year. >> And all of this is being compounded with a nature of increasing complexity, that we're seeing in the architecture, explosion microservices, the adoption en masse of containers, and the adoption of multiple clouds for most customers around the world. So really, the extension of the IT environment, especially for large enterprises, enormous for any team, no matter how big it is, so how scale it is, to really go after and look for all the systems, and then the complexity of the architectures, is enormous within that IT environment. It is impossible to scale the applications and to scale the infrastructure, and not scale the IT operations. And so automation becomes really a way to scale IT operations, rather than just keep repeating the same steps over and over, in an attempt to simplify, or to reduce costs. It's well beyond that at this point. >> That's a great point. Tom, what's your reaction to this, because Alessandro brings up a good point, developers are going faster than ever before. 
The changes of speed and complexity have gone up, so the demand for the IT and/or security groups, or anyone, to be faster, not weeks, minutes. We're talking about a complete time shift here. >> Yeah, so I talk to a lot of customers, and what I keep hearing again and again from them is kind of two things, which is, a need for skills, and reskilling existing staff. When Alessandro talks about the complexity and the scale, think about all the different new tools, new environments, new platforms that these employees and these associates are being exposed to and expected to be able to handle. So, a real, not a skill shortage, but a stress on the skills of the organization. And then secondly, really, our customers are talking to us about the culture in the environment itself, the culture of collaboration, the culture of automation, and the kind of impact that has in our organization, the way teams are now expected to work together, to share information, to share automation, to push, you know, we talk about shifting left in a lot of things now in IT, automation is now shifting left, pushing automation and access to subsystems, IT subsystems and resources, into the hands of people who traditionally haven't had direct access to those resources. So really kind of shift in skills, and a shift in culture I see. >> Ah, the culture. (indistinct), I want to come back to that culture thing, but I want to ask you specifically on that point, do you think automation users still view automation as just repeating and simplifying processes that they already are doing? You've heard the term, "Done it three times, automate it." Is that definition changing and evolving, what's your thoughts? >> Yeah, IT is really changing, going from the traditional, "I'm a network engineer and I use a command line to update my devices I'm responsible for, the config devices, and then I decide to write a playbook using a really cool product like Ansible to drive automation into my daily tasks." And then it comes up to exposing, again, exposing that subsystem I'm responsible for, whatever it is, storage, network, compute, whatever it is, exposing that op so other people can consume it without me being involved, right? So that's a real change in a mindset, and tooling, and approach, that I'm going to expose that op to a set of workflows, business workflows, that drive automation throughout an organization. So that's a real kind of evolution of automation, (indistinct) first, and that's usually focused mostly on day zero, provisioning of a new service. Now we see a lot more focus, or a lot of additional focus on day two operations. How do I automate my day two operations to make them a lot more efficient, as my scale and complexity grows? How do I take the human element out of operating this on a day to day basis? >> So you're saying basically, if I understand you correctly, the system's architecture view, or mindset, around automation, it moves from "Hey, I'm going to use," and Ansible by the way is great for "Hey, I want to automate something, I'm doing a lot," that's cool. But you're looking at it differently. If I understand you correctly, you're saying the automation has to be a system view, meaning you create the rules of the road so that automation can happen at the front lines of the CICD pipeline. 
You mentioned shift left, is that the difference, is that kind of what's happening here, that's beyond just doing automation, because you can automate it, so you've done that, this is like the next level, is that what you're getting at? >> It is, and we joke about it a little bit, crushing silos, right? Breaking down silos, and again, I keep talking about culture, it really is important, tools are important and technology's important, but the culture's super important, and trying to think of that thing from a systems mindset, of sort of workflows and orchestration of a business process that touches IT components, and how do I automate that and expose that to that workflow, without a human having to touch it, right? Yet still enforce my security protocols, my performance expectations, my compliance stuff, all of that stuff still needs to be enforced, and that's where repeatable automation comes in, of being able to expose this stuff up into these system-level workflows. >> And then there is another element to this (indistinct), I think it's really important to attach to this, the element of speed. We talk about complexity, we talk about scale, but then there is this emerging third dimension, as I call it, that is the speed. And the speed has a number of different articulation, it's the speed when you're thinking about how quickly you need to deliver the application. If you're in a very competitive environment, think about web scale startups for example, or companies in an emerging market, and then you have the speed in terms of reacting to a cybersecurity attack, which Tom just mentioned. And then you have the third kind of speed I'm thinking about right now, which is the increasing amount of artificial intelligence, so an algorithmic kind of operation that is taking place in the organization. For now it's still very limited, but it's not unthinkable that going forward, the operations will be driven, or at least assisted by artificial intelligence. This speed, just like the scale and the complexity we mentioned before, are impossible to be addressed by a single team, and so automation becomes indispensable. >> Yeah, that's a great point, I want to just double click on that, I mean both Tom and Joe were just talking about system, they used the word system. In a subsystem, if one is going faster than the other, to your point, there's a bottleneck there. So if the IT group or security groups are going to take time to approve things, they're not putting rules to the road together to automate and help developers be faster, because look, it's clear, we've been reporting on this in theCUBE, cloud developers are fast. They're moving really fast with code. And so what happens is, if they're going to shift left, that means they're going to be at the point of coding to set policies on security. So, that's going to put pressure on the other subsystems to go faster, so they have to then expose rules of the road, or I'm just making that up, but policy base, or have some systems thinking. They can't just be the old way of saying "No, slow it down." So this is a cultural thing, I think Joe, you brought up culture, Alessandro, you brought up culture. Is that still there? That speed, fast team here and a slow team here? Is that still around, or people getting faster on both sides? And I'm kind of talking about IT, generally speaking, they tend to be slower than the developers. >> Well, just a couple comments, first of all, you heard silos, you heard complexity, you heard speed, talked about shift left. 
Let me sort of maybe tie those together, right? What's happened to date is every silo has their own set of tooling, right? And so one silo might move very fast, with a very private set of tools, or network management, or security, or whatever, right? And if you think about it, one of the number one skills gaps right now is for automation people. But if an automation person has to learn 17 different tools, 'cause I'm running on three public clouds, I'm on-premise, edge, and I'm doing things to move network storage, compute, security, all sorts of different systems, the tooling is so complicated, right, that I end up with a bunch of specialists. Which can only do one or two things, because they don't know the other domains and they don't know the skills. One of the things we've seen from our customers, I think this is a fundamental shift in automation, is that what we've done with Ansible in particular is, we actually adopted Ansible because of its simplicity. It's actually human-readable, you don't have to be a hardcore programmer to write automation. So that allows the emergence of citizen creators of automation. There's not like a group in some ivory tower that now can make automation and they do it for the masses. Individuals can now use Ansible to create automation. Going cross-domain, Ansible automation touches networks, security, storage, compute, cloud, edge, Linux, Windows, containers, traditional, ITSM, it touches so many systems, that basically what you have is you have a set of power tooling, in Ansible, that allows you now to share automation across teams, 'cause they speak the same language, right? And that's how you go faster. If every silo is fast, but when you have to go inter-silo you slow down, or have to open a ticket, or have some (indistinct) mismatch, it causes delays, errors, and exposures. >> I think that is a very key point, I mean that delay of opening up tickets, not being responsive, Alessandro, you put up machine learning and AI, I mean if you think about what that could do from an automation standpoint, if you can publish the HIPAA rules for your healthcare, you can just traverse that with a bot, right? I mean this is the new... This just saves so much time, why even open up a ticket? So if you can shift left and do the security, and there's kind of rules there, this is a trend, how do you make that happen, how do you bust the silos, and I guess that's the question I'd love to get everyone to react to, because that implies some sort of horizontally scalable control plane. How does someone do that in an architectural way, that doesn't really kind of, maybe break everything, or make the (indistinct) go into a cultural sideways situation? >> Maybe I can jump in, and grab this one, and then maybe ask Alessandro to weigh in afterwards, but, what we've seen and what you'll see some of the speakers at AnsibleFest this year talk about, from a cultural perspective is bringing teams together across automation guilds, JPMC calls it a community of practice, where they're bringing hundreds and thousands of individuals in the organization together virtually, into these teams that share best practices, and processes and automation that they've created. Secondly, and this is a little bit of a shameless plug for Ansible, which is having a common language, a common automation language across these teams, so that sharing becomes obviously a lot easier when you're using the same language. And then thirdly, what we see a lot now is people treating automation as code. 
Storing that, and get version managing and version controlling and checking in, checking out, really thinking of automation differently from an individual writing a script, to this being infrastructure or whatever my subsystem is, managed it and automated it as code, and thinking of themselves as people responsible for code. >> These are all great points. I think that on top of all these things, there is an additional element which is change management. You cannot count on technology alone to change something that is purely cultural, as we kept saying during this video right now. So, I believe that a key element to win, to succeed in an automation project, is to couple the technology, great technology, easy to understand, able to become the common language as Tom just said, with an effort in change management that starts from the top. It's something you don't see very often because a technology vendor rarely works with a more consulting firm, but it's definitely an area that I think would be very interesting to explore for our customers. >> That's a great point on the change management, but let me ask you, what do you think it needs to make automation more frictionless for users, what do you see that needs to happen, Alessandro? >> I think there are at least a couple of elements that need to change. The first one is that, the effort that we're seeing right now in the industry, to further democratize the capability to automate has to go one notch further. And by that I mean, implementing cell service provisioning portals and ways for automatically execute an automation workflow that already exists, so that an end user, somebody that works in the line of business, and doesn't understand necessarily what the automation workflow, the script is doing, still able to use it, to consume it when it, she or he needs to use it. This is the first element, and then the second element that is definitely more ambitious, is about the language, about how do I actually write the automation workflow? This is a key problem. It's true that some automation engines and some workflows have done, historically speaking, a better job than others, in simplifying the way we write automation workflows, and definitely this is much simpler than writing code with a programming language, and it's simpler than writing automation compared to a tool that we use 10, 15 years ago. But still, there is a certain amount of complexity, because you need to understand how to write in a way that the automation framework understands, and you need even before that, you need to express what you want to achieve, and in a way that the automation engine understands. So, I'm thinking that going forward we'll start to see artificial intelligence being applied to this problem, in a way that's very similar to what OpenAI Microsoft are doing with Codex, the capability that is a model that allows a person to write in plain English through a comment in code, to translate that comment into actual code, taken from GitHub or through the machine learning process that's been done. I'm really thinking that going forward, we will start to see some effort in the same direction, but applied to automation. What if the AI could assist us, not replace us, in writing the automation workflow so that more people are capable to translating what they want to achieve, in a way that is automatable? >> So you're saying the language, making it easy to program, or write, or create. Being a creator of automation. 
And then having that be available as code, with other code, so there's kind of this new paradigm of automating the automation. >> In a sense, this is absolutely true, yes. >> In addition to that, John, I think there's another dimension here which is often overlooked, which we do spend a lot of time on. It's one thing to have things like Alessandro mentioned, that are front edge in terms of helping you write code, but you want to know something? In big organizations, a lot of times what we find is, someone's already written the code that you need. You know what the problem is? You don't know about it, you can't find it, you can't share it and you can't collaborate on it. So the best code is something that somebody's already invested the time to write, test, burn in, certify. What if they could share it, and what if people could find it, and then reuse it? Right, everybody's talking about low code, no code, well, reuse is the best, right? Because you've already invested expertise into doing it. So we've spent a lot of time working with our customers, based on their feedback, on building the tools necessary for them to share automation, to collaborate on it, certify it, and also to create that supply chain from partners who create integrations and interfaces to their systems, and to be able to share that content through the supply chain out to our customers and have them be able to share automation across very large, globally distributed organizations. Very powerful. >> That's a powerful point, I mean reuse, leverage there, is phenomenal. A discovery engine's got to be built. You got to know, I mean someone's got to build a search engine for the code. "Hey code, who's written some code?" But just a whole 'nother mindset, so this brings up my next question for you guys, 'cause this is really, we're teasing out the biggest things coming next in automation. These are all great points, they're all about the future, where will the puck be, let's skate to where the puck will be, but it's computer science and automation that's being democratized and opened up more, so it's, what do you guys think is the biggest thing coming next for automation? >> Joe, you want to go next? >> Sure. Yeah, I'll take it. So we're getting a glimpse of that with a number of customers right now that we're working with that are doing things around concepts like self-healing infrastructures. Well, what the heck is that? Basically, it's tying together event systems and AI, which is looking at what's going on in an environment and deciding that something is broken, sub-optimal, spending too much, there's some issue that needs to be dealt with. In the old days, that system would stop at opening a ticket and dispatching some people, who would either manually or in a semi-automated way go fix whatever it was. Now people are connecting these systems and saying "Wait a minute. I've got all this rich data coming through my eventing systems. I can make some sense out of it with AI or machine learning. Then I can drive automation, and I just eliminated a whole bunch of people, time, exposure, cost, everything else." So I think that sort of event-driven automation is going to be huge. I'm going to argue that for every single system in the world that uses AI, the result of that's going to be, I want to go do something, I want to change, optimize, move, secure, stop, start, relocate, and how's that going to get done? It's going to get done with automation. >> And what Joe just said is really highly successful in the consumer space.
If you think about solutions like If This Then That, or Zapier for example, those are examples of event-driven automation. They've been in the consumer space for a long time, and they are wildly popular, to the point that there are dozens of clones and competitors. The enterprise space didn't adopt the same approach so far, but we're starting to see event bridges and event hubs that can really help with this. And this really connects to the previous point, at this point I'm a broken record, which is about the speed and the complexity. If the environment is so spread out, so complex, and it goes all the way to the edge, and all these events take place at a breakneck pace, the only way for you is to tie the automation workflows that you have written to a trigger, an event that takes place at some point, according to your logic. >> Tom, what are your thoughts? >> Yeah, last but not least on that kind of thread, which is sort of the architectures as we get out to the edge, what does it take to automate things at the edge? We thought there was a big jump from data center to cloud, and now when you start extending that out to the edge, am I going to need a new automation platform to handle those edge devices? Will I need a new language, will I need a new team, or can I connect these things together using a common platform to develop the automation at the edge? And I think that's where we see some of our customers moving now, which is automating those edge environments which have become critical to their business. >> Awesome, I want to ask one final question while I've got you guys here in this power panel, great insights here. Operational complexity was mentioned, skills gap was mentioned earlier, I want to ask you guys about the organizational behavior and dynamic going on with this change. Automation, hybrid, multi-cloud, all happening. When you start getting into speed of application development for the modern app, open source where things are opening up and things are going to be democratized with automation and code and writing automation, and scaling that, you're going to have a cultural battle that's happening, and we're kind of seeing it play out in real time. DevOps has kind of gone and been successful, and we're seeing cloud-native bring new innovation, people are refactoring their business models with cloud technologies, now the edge is here, so this idea of speed, shifting left from a developer standpoint, is putting pressure on the old, incumbent systems, like the security group, or the IT group that's still holding onto their ticketing system, and they're slower, they're getting requests, and the developer's like "Okay, go faster, I want this done faster." So we're seeing departments reorganizing. What do you guys see, 'cause Red Hat, you guys have been in there, in all these big accounts, for the generation of this modern era. What's the cultural dynamics happening, and what can companies do to be successful, to get to the next level? >> So I think for us, John, we certainly see it and we experience it across thousands of customers, and what we've done as an organization is put together adoption journeys, a consulting engagement for our customers around an automation adoption journey, and that isn't just about the technology, it's all throughout that technology, it's about those cultural things, thinking differently about the way I automate and the way I share, and the way I do these tasks. So it's as much about cultural and process as it is about technology.
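As a concrete, hedged illustration of that trigger-to-workflow pattern, the Python sketch below listens for a monitoring webhook and, when a critical alert arrives, launches a pre-written remediation job through an AWX/Ansible Controller-style REST API. The controller URL, API token, and job template ID are placeholders, not details taken from the panel.

```python
# A minimal sketch of event-driven automation: an HTTP listener that turns an
# incoming monitoring alert into a run of an automation job that already exists.
# The controller URL, API token, and job template ID below are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests  # third-party: pip install requests

CONTROLLER_URL = "https://controller.example.com"  # hypothetical automation controller
JOB_TEMPLATE_ID = 42                               # hypothetical pre-written remediation job
API_TOKEN = "REPLACE_WITH_TOKEN"                   # hypothetical credential


def launch_remediation(alert: dict) -> None:
    """Launch the existing remediation job, passing alert details as extra variables."""
    response = requests.post(
        f"{CONTROLLER_URL}/api/v2/job_templates/{JOB_TEMPLATE_ID}/launch/",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"extra_vars": {"alert_host": alert.get("host"),
                             "alert_check": alert.get("check")}},
        timeout=10,
    )
    response.raise_for_status()


class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the alert payload posted by the eventing or monitoring system.
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length) or b"{}")
        # Only act on the events the logic cares about; everything else is ignored.
        if alert.get("severity") == "critical":
            launch_remediation(alert)
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

The design point, matching the comments above, is that the workflow itself is written, tested, and shared ahead of time; the event only decides when it runs and with what context.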
And our customers are asking us for that help. Red Hat, you have thousands of customers that are using this product, surely you can come and tell us how we can achieve more with automation, how we can break down these silos, how we can move faster, and so we've put together these offerings, both directly as well as with our partners, to try and help these customers kind of get over that cultural hump. >> Awesome. Anyone else want to react to the cultural shift and dynamics and how it can play out in a positive way? >> Yeah, I think that it's a huge issue. We always talk about people, processes, and technology. Well, the people issue's a really big deal here. We're seeing customers, huge organizations, with really capable teams building apps and services and infrastructures, saying "Help me think about automation in a new way." In the old days, it was "Hey, I'm thinking about it as a cost savings thing." Yeah, there's still cost savings in there, but to your point, John, now they're talking about speed, and security, and things like that. How fast? Zero day exploits, now it's like zero hour exploits. How fast can I think about securing something? You know, time to heal, time to secure, time to optimize, so people are asking us, "What are the best practices? What is the best way to look at what I've got, my automation deficits?" We used to have tech deficits, now you've got automation deficits, right? "What do I need to do culturally?" It's very similar to what happened with DevOps, right? Getting teams to get together and think about it differently and holistically, that same sort of transition is happening, and we're helping customers do that, 'cause we're talking to a lot of them, and we've got the scars, we've been through it. >> Awesome. Alessandro, your thoughts on this issue. >> I think that what Tom and Joe just said is only going to get further aggravated, it's going to happen more and more going forward, and there is a reason for that. And this connects back with the skill problem that we discussed before. In the last 10 years, I've seen growing demand for developers to become experts in a lot of areas that have nothing to do with development, code development. They had to become experts in cloud infrastructures, they had to become experts in security because, you've probably heard this many times, security is everybody's responsibility. Now they've been asked to become experts in artificial intelligence, transforming their title into something like ML engineer. The amount of skills and disciplines that they need to master, alone, by themselves, would require a lifetime of work. And we're asking human beings to get better and better at all of these things, and all of the best practices. It's absolutely impossible. And so, yeah, it's five jobs in one, six jobs in one, right? Probably for the same salary. And the only way that these people can execute the best practice, enforce the best practice, is if the best practices are encoded in automation workflows, not necessarily written by them, but by somebody else, and executed at the right time, in the right context, and for the right reason. >> It's like the five-tool player in baseball, you got to do five different things, I mean this is, you got to do AI, you got to do machine learning, you got to have access to all the data, you got to do all these different things. This is the future of automation, and automation's critical.
I've never heard that term, automation deficit or automation debt, we used to talk about tech debt, but I think automation is so important because the only way to go fast is to have automation, kind of at the center of it. This is a huge, huge topic. Thank you very much for coming on, power panel on the future of automation, Joe, Tom, Alessandro from Red Hat, thanks for coming on, everyone, really appreciate the insight, great conversation. >> Thanks, John. >> 'Kay, this is theCUBE's coverage of AnsibleFest 2021 virtual. This is theCUBE, I'm John Furrier, your host, thanks for watching. (calm music)

Published Date : Oct 1 2021


Alejandro Lopez Osornio, Argentine Ministry of Health | Red Hat Summit 2020


 

>>from around the globe. It's the Cube with digital coverage of Red Hat. Summit 2020 Brought to you by Red Hat. >>Hi. And welcome back to the Cube's coverage of Red Hat Summit 2020. I'm stew Minuteman. And while this year's event is being held virtually, which means we're talking to all of the guests where they're coming from, one of the things that we always love about the user conference is talking to the practitioners themselves And Red Hat Summit. Of course, we love talking to customers and really happy to welcome to the program. Uh, Alejandro Lopez Asano, who's the director of e health with the Argentine Ministry of Health, Coming to us from Buenos Iris, Argentina. Alessandro, thank you so much for joining us. Thank you for having me. All right, So Ah, you know, look, healthcare obviously is, You know, normally, you know, challenging in the midst of what is happening globally. There are strange and pressures on. What? What is happening? So really appreciate. You think with us? Um, tell us a little bit about you know, the organization, and you know your role in Nike's role in supporting the company's mission. >>I'm part of the minister of girls in Argentina, Argentina Federal country. That's a national military girls, according it's Felker Healthcare System. All around the country with different provinces work, we work with the with the Ministry of Culture, which problems with the governor of problems trying to maintain and coordination the healthcare system. And we create the national policies that tried everybody. Show them to apply on the assistance that we create national incentive. This is much more. It's similar to the US, with the national government. Create incentives the province since the states adopt new new new practices and the best quality >>Excellent. So, yeah, the anytime we talk about healthcare, you know, uh, you know, medical records, of course, critically important. It's usually a key piece of, I d you know, governance, compliance in general. So what are some of the challenges that the ministry basis when it comes to you know, this piece >>of overall health care? My role in the midst of cops is exactly that. Coordinate health information systems around the country and having and access to the single sorts of medical records around the country. It's a great thing that we're trying to achieve We don't want to have a central repository, but they're going to have some kind of have that allows you to access information for all around the country. So the fragmentation of the seat between different provinces and also having public providers and private providers. It's a challenge because the information for one patient is this. Turn a lot of different places. I need to have some kind off have or enterprise services. But you're allows you to gather this information at the point of care and to provide the best quality of care for the patient having the full road regardless of work. It was taking her before. >>Yeah, pretty Universal Challenger talking about their distributed architecture, obviously security of Paramount performance, but still has to have the scale and performance that customers need to bring us in a little bit. This this project, you know, how long has this national health information system? 
How long has it been to put that together, Bring us through a little bit as to you know, how you choose how to architect these pieces, >>except that we've been working on for the last three years and then be able to create an architecture that was not invasive, that anyone can collaborate and contribute to this information network, but still having the on the rights and other responsibility for Monday in their own data. And we didn't want to have a central that the rates that it's acceptable security issues or privacy issues. We wanted information to remain distributed. But to be able to collect that a 10 point so they're able to create a set off AP Eyes Bay seven Healthcare interoperability standards that allow developers off critical systems all around the country to adopt this new way of changing information to your and privately provided to the practitioners so they can access information. Another side, >>Excellent. And so three years. You know, that's a rather big project. You've got quite a lot of constituents, and obviously, you know, healthcare is, you know, completely essential and critical service. There, underneath the pieces obviously were part of Red Hat Summit covering this so help us understand a little bit, you know, Red Hat and any other partners. You know what technologies they're using to deliver this? >>That's the big challenge was to have this kind of distributed organization with a central how that needs to provide services around the country at any time today. And we really think people need to be confident that they can use this network, that we're treating patients. We don't want them to try to do it and fail from the lost confidence in that you're not going to have the greater adoption from system developers. We need to have a very strong and company in the world, and this can grow really exponentially cause data. I mean, any chess is constructing, like one billion right work on math or something like that. But we know we can grow exponentially, but we need to have some kind of infrastructure that was reliable, but it was easy to deploy the first time. But the house and growth road map that will allow us to incorporate all the extra capacity around Argentina, Mr Safeway Way, need to be confident that we can grow a dog's level. So basically we were working already. We're Kalina and all the basic things. We wanted to go to open shift. It was really important to be able to have the container station system that allows us to found according to the needs and the adoption, right? That was really unpredictable because we need to create incentives for election. But you never know how fast the adoption would be. We need to have some flexibility of attracted by open ship, but also, we need to use a P. I like the scale in order to provide this way to communicate ap eyes to give people secure form to access the FBI's to learn about them and to try. So we're using different parts off the off the stack we have in order to do that. >>Okay, great. Tell us the adoption of this solution. How was the how is the learning curve? But, you know, moving to containerized architectures. You talking about all the AP eyes in there? How much was there a retraining of your group? Were there any new people that came in? You know what was what was Red Hat's role in really the organizational pieces of getting everybody on this on this new skill set? 
>> Well, the role of Red Hat was central, because we didn't have the capability to go and research all these open source tools and find the proper combination between the container orchestrator, the administration, and the continuous integration part; it would have been really difficult for us to start from scratch. I mean, that would have required a huge team, a lot of time, and special skills. And our teams were used to working on monolithic applications with very long development cycles, where every time you need to change something it takes, like, three months until the change is live in the application for the end user, and we needed to make a radical change there. So we saw in Red Hat an opportunity. They have a roadmap, the container adoption program, that lays out the steps we needed to work through. It was really good to have our own team retrain and go through the container adoption program, to use the combination of tools that Red Hat already provides, like a stack where everything is really compatible with each other. Then you know that it is easy to update when there are changes, and on the security side you get the notifications you need. And you have the daily support also, because we had to create a brand-new DevOps team; you have operations people, very technical people who didn't know anything about the application, and you have developers, and we had to create these new roles that combine those activities in the day-to-day work. The Red Hat experts were really key to that, because they gave us the roadmap of what we needed to do, with a timeframe, the sort of steps we needed to take, and they gave us the daily support and the retraining, and they were really excited to work with us. That also was really good news for our team, because they were using old versions of software and old deployment systems, where they did everything by hand on the command line. And now, when they learned to do that with Ansible and with the continuous integration system, a lot of the menial tasks they were doing are automated. That's had a really great impact on their quality of life. >> Well, it's interesting that you talk about that. Automation, of course, has been something we've been talking about for decades, but it's critically important today. I'm curious, with the situation happening with the pandemic, people are having to work from home, there needs to be social distancing, and you have the automation and some of this new tooling. What impact has that had on being able to deal with today's work
That information is to be available all the time, and previously, when a new strain came like the officially system went down, what was old workings globally So but now, with open shift, we were able to dial up more resources. The system, I maintain the quality, the world, the perimeter Signet work until the decision making person that needs information just in there. >>All right, so So all 100. We've talked about kind of a transformation that you've had. There's the government impact. There's the practice, the other providers of services. If you talk about you know, the ultimate end patient, you know what is the impact on them or you know what? What you have implemented here, >>what they did, that the patients now would be able to move between different parts of this complex system we have before. It was very common that the patient arrived hospital with about full of studies in paper, like somebody from a previous hospital finishes reported lab reports. And they have to bring about Dr and don't have to go to all the way from the foundation or a basic both from a province to the capital to get terrible, especially when they go back. And the Dr in the province don't have any information about what happened on one side that said no. They will care if you but no information. I get it through the patient. But now I think the system will integrate the older caregiver around Argentina in a much more simpler where you will be able to collaborate with doctors, another throwing, sitting, other CPIs on the patient will be able to vote from private to public. We have different kind of procedures, and every information will follow him on. Everyone will be able to take care of him with the best information. >>I'll under that. That's really powerful pieces there. So I guess the last piece is a little bit about kind of where you are with the overall project. What future goals do you have for this initiative? >>You've been really happy with the way we're starting to have adoption. We have more than 37 knows not already working in this network. And so this is really good. We have a good adoption right on. The implementation of open shift is going really well. The developers are really happy. We see the impact. That there are no downtime is really good. We need to continue transforming old legacy applications, monolithic applications to transform that into micro services. This work to do in deconstructing these big applications into more scalable micro services, and we need to take more advantage off. Sorry. Scale, Because really excellent feature for Developer portal. So, like that, everything will be about the adoption of the FBI. That information much simpler when we give all those tools developed. >>That's that. Once again, Andre, thank you so much. This has been, ah, really important work that your team is doing. Congratulations on the progress that you've made and, you know, definitely hope in the future. We will get to see you at one of the Red hat summits in person. So thank you so much for joining us. Thank you very much. All right, Lots more coverage from the cube at Red Hat Summit 2020. I'm stew minimum. And thank you. As always for watching the Cube. >>Yeah, yeah, yeah, yeah.

Published Date : Apr 28 2020


Nutanix .NEXT Morning Keynote Day1


 

Section 1 of 13 [00:00:00 - 00:10:04] (NOTE: speaker names may be different in each section) Speaker 1: Ladies and gentlemen our program will begin momentarily. Thank you. (singing) This presentation and the accompanying oral commentary may include forward looking statements that are subject to risks uncertainties and other factors beyond our control. Our actual results, performance or achievements may differ materially and adversely from those anticipated or implied by such statements because of various risk factors. Including those detailed in our annual report on form 10-K for the fiscal year ended July 31, 2017 filed with the SEC. Any future product or roadmap information presented is intended to outline general product direction and is not a commitment to deliver any functionality and should not be used when making any purchasing decision. (singing) Ladies and gentlemen please welcome Vice President Corporate Marketing Nutanix, Julie O'Brien. Julie O'Brien: All right. How about those Nutanix .NEXT dancers, were they amazing or what? Did you see how I blended right in, you didn't even notice I was there. [French 00:07:23] to .NEXT 2017 Europe. We're so glad that you could make it today. We have such a great agenda for you. First off do not miss tomorrow morning. We're going to share the outtakes video of the handclap video you just saw. Where are the customers, the partners, the Nutanix employee who starred in our handclap video? Please stand up take a bow. You are not going to want to miss tomorrow morning, let me tell you. That is going to be truly entertaining just like the next two days we have in store for you. A content rich highly interactive, number of sessions throughout our agenda. Wow! Look around, it is amazing to see how many cloud builders we have with us today. Side by side you're either more than 2,200 people who have traveled from all corners of the globe to be here. That's double the attendance from last year at our first .NEXT Conference in Europe. Now perhaps some of you are here to learn the basics of hyperconverged infrastructure. Others of you might be here to build your enterprise cloud strategy. And maybe some of you are here to just network with the best and brightest in the industry, in this beautiful French Riviera setting. Well wherever you are in your journey, you'll find customers just like you throughout all our sessions here with the next two days. From Sligro to Schroders to Societe Generale. You'll hear from cloud builders sharing their best practices and their lessons learned and how they're going all in with Nutanix, for all of their workloads and applications. Whether it's SAP or Splunk, Microsoft Exchange, unified communications, Cloud Foundry or Oracle. You'll also hear how customers just like you are saving millions of Euros by moving from legacy hypervisors to Nutanix AHV. And you'll have a chance to post some of your most challenging technical questions to the Nutanix experts that we have on hand. Our Nutanix technology champions, our MPXs, our MPSs. Where are all the people out there with an N in front of their certification and an X an R an S an E or a C at the end. Can you wave hello? You might be surprised to know that in Europe and the Middle East alone, we have more than 2,600 >> Julie: In Europe and the Middle East alone, we have more than 2,600 certified Nutanix experts. Those are customers, partners, and also employees. 
I'd also like to say thank you to our growing ecosystem of partners and sponsors who are here with us over the next two days. The companies that you meet here are the ones who are committed to driving innovation in the enterprise cloud. Over the next few days you can look forward to hearing from them and seeing some fantastic technology integration that you can take home to your data center come Monday morning. Together, with our partners, and you our customers, Nutanix has had such an exciting year since we were gathered this time last year. We were named a leader in the Gartner Magic Quadrant for integrated systems two years in a row. Just recently Gartner named us the revenue market share leader in their recent market analysis report on hyper-converged systems. We know enjoy more than 35% revenue share. Thanks to you, our customers, we received a net promoter score of more than 90 points. Not one, not two, not three, but four years in a row. A feat, I'm sure you'll agree, is not so easy to accomplish, so thank you for your trust and your partnership in us. We went public on NASDAQ last September. We've grown to more than 2,800 employees, more than 7,000 customers and 125 countries and in Europe and the Middle East alone, in our Q4 results, we added more than 250 customers just in [Amea 00:11:38] alone. That's about a third of all of our new customer additions. Today, we're at a pivotal point in our journey. We're just barely scratching the surface of something big and Goldman Sachs thinks so too. What you'll hear from us over the next two days is this: Nutanix is on it's way to building and becoming an iconic enterprise software company. By helping you transform your data center and your business with Enterprise Cloud Software that gives you the power of freedom of choice and flexibility in the hardware, the hypervisor and the cloud. The power of one click, one OS, any cloud. And now, to tell you more about the digital transformation that's possible in your business and your industry and share a little bit around the disruption that Nutanix has undergone and how we've continued to reinvent ourselves and maybe, if we're lucky, share a few hand clap dance moves, please welcome to stage Nutanix Founder, CEO and Chairman, Dheeraj Pandey. Ready? Alright, take it away [inaudible 00:13:06]. >> Dheeraj P: Thank you. Thank you, Julie and thank you every one. It looks like people are still trickling. Welcome to Acropolis. I just hope that we can move your applications to Acropolis faster than we've been able to move people into this room, actually. (laughs) But thank you, ladies and gentlemen. Thank you to our customers, to our partners, to our employees, to our sponsors, to our board members, to our performers, to everybody for their precious time. 'Cause that's the most precious thing you actually have, is time. I want to spend a little bit of time today, not a whole lot of time, but a little bit of time talking about the why of Nutanix. Like why do we exist? Why have we survived? Why will we continue to survive and thrive? And it's simpler than an NQ or category name, the word hyper-convergence, I think we are all complicated. Just thinking about what is it that we need to talk about today that really makes it relevant, that makes you take back something from this conference. That Nutanix is an obvious innovation, it's very obvious what we do is not very complicated. 
Because the more things change, the more they remain the same, so can we draw some parallels from life, from what's going on around us in our own personal lives that makes this whole thing very natural as opposed to "Oh, it's hyper-converged, it's a category, it's analysts and pundits and media." I actually think it's something new. It's not that different, so I want to start with some of that today. And if you look at our personal lives, everything that we had, has been digitized. If anything, a lot of these gadgets became apps, they got digitized into a phone itself, you know. What's Nutanix? What have we done in the last seven, eight years, is we digitized a lot of hardware. We made everything that used to be single purpose hardware look like pure software. We digitized storage, we digitized the systems manager role, an operations manager role. We are digitizing scriptures, people don't need to write scripts anymore when they automate because we can visually design automation with [com 00:15:36]. And we're also trying to make a case that the cloud itself is not just a physical destination. That it can be digitized and must be digitized as well. So we learn that from our personal lives too, but it goes on. Look at music. Used to be tons of things, if you used to go to [inaudible 00:15:55] Records, I'm sure there were European versions of [inaudible 00:15:57] Records as well, the physical things around us that then got digitized as well. And it goes on and on. We look at entertainment, it's very similar. The idea that if you go to a movie hall, the idea that you buy these tickets, the idea that we'd have these DVD players and DVDs, they all got digitized. Or as [inaudible 00:16:20] want to call it, virtualized, actually. That is basically happening in pretty much new things that we never thought would look this different. One of the most exciting things happening around us is the car industry. It's getting digitized faster than we know. And in many ways that we'd not even imagined 10 years ago. The driver will get digitized. Autonomous cars. The engine is definitely gone, it's a different kind of an engine. In fact, we'll re-skill a lot of automotive engineers who actually used to work in mechanical things to look at real chemical things like battery technologies and so on. A lot of those things that used to be physical are now in software in the car itself. Media itself got digitized. Think about a physical newspaper, or physical ads in newspapers. Now we talk about virtual ads, the digital ads, they're all over on websites and so on is our digital experience now. Education is no different, you know, we look back at the kind of things we used to do physically with physical things. Their now all digital. The experience has become that digital. And I can go on and on. You look at retail, you look at healthcare, look at a lot of these industries, they all are at the cusp of a digital disruption. And in fact, if you look at the data, everybody wants it. We all want a digital transformation for industries, for companies around us. In fact, the whole idea of a cloud is a highly digitized data center, basically. It's not just about digitizing servers and storage and networks and security, it's about virtualizing, digitizing the entire data center itself. That's what cloud is all about. So we all know that it's a very natural phenomenon, because it's happening around us and that's the obviousness of Nutanix, actually. Why is it actually a good thing? 
Because obviously it makes anything that we digitize and we work in the digital world, bring 10X more productivity and decision making efficiencies as well. And there are challenges, obviously there are challenges, but before I talk about the challenges of digitization, think about why are things moving this fast? Why are things becoming digitally disrupted quicker than we ever imagined? There are some reasons for it. One of the big reasons is obviously we all know about Moore's Law. The fact that a lot of hardware's been commoditized, and we have really miniaturized hardware. Nutanix today runs on a palm-sized server. Obviously it runs on the other end of the spectrum with high-end IBM power systems, but it also runs on palm-sized servers. Moore's Law has made a tremendous difference in the way we actually think about consuming software itself. Of course, the internet is also a big part of this. The fact that there's a bandwidth glut, there's Trans-Pacific cables and Trans-Atlantic cables and so on, has really connected us a lot faster than we ever imagined, actually, and a lot of this was also the telecom revolution of the '90s where we really produced a ton of glut for the internet itself. There's obviously a more subtle reason as well, because software development is democratizing. There's consumer-grade programming languages that we never imagined 10, 15, 20 years ago, that's making it so much faster to write- >> Speaker 1: 15-20 years ago that's making it so much faster to write code, with this crowdsourcing that never existed before with Githubs and things like that, open source. There's a lot more stuff that's happening that's outside the boundary of a corporation itself, which is making things so much faster in terms of going getting disrupted and writing things at 10x the speed it used to be 20 years ago. There is obviously this technology at the tip of our fingers, and we all want it in our mobile experience while we're driving, while we're in a coffee shop, and so on; and there's a tremendous focus on design on consumer-grade simplicity, that's making digital disruption that much more compressed in some of sense of this whole cycle of creative disruption that we talk about, is compressed because of mobility, because of design, because of API, the fact that machines are talking to machines, developers are talking to developers. We are going and miniaturizing the experience of organizations because we talk about micro-services and small two-pizza teams, and they all want to talk about each other using APIs and so on. Massive influence on this digital disruption itself. Of course, one of the reasons why this is also happening is because we want it faster, we want to consume it faster than ever before. And our attention spans are reducing. I like the fact that not many people are watching their cell phones right now, but you can imagine the multi-tasking mode that we are all in today in our lives, makes us want to consume things at a faster pace, which is one of the big drivers of digital disruption. But most importantly, and this is a very dear slide to me, a lot of this is happening because of infrastructure. And I can't overemphasize the importance of infrastructure. If you look at why did Google succeed, it was the ninth search engine, after eight of them before, and if you take a step back at why Facebook succeeded over MySpace and so on, a big reason was infrastructure. 
They believed in scale, they believed in low latency, they believed in being able to crunch information, at 10x, 100x, bigger scale than anyone else before. Even in our geopolitical lives, look at why is China succeeding? Because they've made infrastructure seamless. They've basically said look, governance is about making infrastructure seamless and invisible, and then let the businesses flourish. So for all you CIOs out there who actually believe in governance, you have to think about what's my first role? What's my primary responsibility? It's to provide such a seamless infrastructure, that lines of business can flourish with their applications, with their developers that can write code 10x faster than ever before. And a lot of these tenets of infrastructure, the fact of the matter is you need to have this always-on philosophy. The fact that it's breach-safe culture. Or the fact that operating systems are hardware agnostic. A lot of these tenets basically embody what Nutanix really stands for. And that's the core of what we really have achieved in the last eight years and want to achieve in the coming five to ten years as well. There's a nuance, and obviously we talk about digital, we talk about cloud, we talk about everything actually going to the cloud and so on. What are the things that could slow us down? What are the things that challenge us today? Which is the reason for Nutanix? Again, I go back to this very important point that the reason why we think enterprise cloud is a nuanced term, because the word "cloud" itself doesn't solve for a lot of the problems. The public cloud itself doesn't solve for a lot of the problems. One of the big ones, and obviously we face it here in Europe as well, is laws of the land. We have bureaucracy, which we need to deal with and respect; we have data sovereignty and computing sovereignty needs that we need to actually fulfill as well, while we think about going at breakneck speed in terms of disrupting our competitors and so on. So there's laws of the land, there's laws of physics. This is probably one of the big ones for what the architecture of cloud will look like itself, over the coming five to ten years. Our take is that cloud will need to be more dispersed than they have ever imagined, because computing has to be local to business operations. Computing has to be in hospitals and factories and shop floors and power plants and on and on and on... That's where you really can have operations and computing really co-exist together, cause speed is important there as well. Data locality is one of our favorite things; the fact that computing and data have to be local, at least the most relevant data has to be local as well. And the fact that electrons travel way faster when it's actually local, versus when you have to have them go over a Wide Area Network itself; it's one of the big reasons why we think that the cloud will actually be more nuanced than just some large data centers. You need to disperse them, you need to actually think about software (cloud is about software). Whether data plane itself could be dispersed and even miniaturized in small factories and shop floors and hospitals. But the control plane of the cloud is centralized. And that's the way you can have the best of both worlds; the control plane is centralized. You think as if you're managing one massive data center, but it's not because you're really managing hundreds or thousands of these sites. 
Especially if you think about edge-based computing and IoT where you really have your tentacles in tens of thousands of smaller devices and so on. We've talked about laws of the land, which is going to really make this digital transformation nuanced; laws of physics; and the third one, which is really laws of entropy. These are hackers that do this for adrenaline. These are parochial rogue states. These are parochial geo-politicians, you know, good thing I actually left the torture sign there, because apparently for our creative designer, geo-politics is equal to torture as well. So imagine one bad tweet can actually result in big changes to the way we actually live in this world today. And it's important. Geo-politics itself is digitized to a point where you don't need a ton of media people to go and talk about your principles and what you stand for and what you strategy for, for running a country itself is, and so on. And these are all human reasons, political reasons, bureaucratic reasons, compliance and regulations reasons, that, and of course, laws of physics is yet another one. So laws of physics, laws of the land, and laws of entropy really make us take a step back and say, "What does cloud really mean, then?" Cause obviously we want to digitize everything, and it all should appear like it's invisible, but then you have to nuance it for the Global 5000, the Global 10000. There's lots of companies out there that need to really think about GDPR and Brexit and a lot of the things that you all deal with on an everyday basis, actually. And that's what Nutanix is all about. Balancing what we think is all about technology and balancing that with things that are more real and practical. To deal with, grapple with these laws of the land and laws of physics and laws of entropy. And that's where we believe we need to go and balance the private and the public. That's the architecture, that's the why of Nutanix. To be able to really think about frictionless control. You want things to be frictionless, but you also realize that you are a responsible citizen of this continent, of your countries, and you need to actually do governance of things around you, which is computing governance, and data governance, and so on. So this idea of melding the public and the private is really about melding control and frictionless together. I know these are paradoxical things to talk about like how do you really have frictionless control, but that's the life you all lead, and as leaders we have to think about this series of paradoxes itself. And that's what Nutanix strategy, the roadmap, the definition of enterprise cloud is really thinking about frictionless control. And in fact, if anything, it's one of the things is also very interesting; think about what's disrupting Nutanix as a company? We will be getting disrupted along the way as well. It's this idea of true invisibility, the public cloud itself. I'd like to actually bring on board somebody who I have a ton of respect for, this leader of a massive company; which itself is undergoing disruption. Which is helping a lot of its customers undergo disruption as well, and which is thinking about how the life of a business analyst is getting digitized. And what about the laws of the land, the laws of physics, and laws of entropy, and so on. And we're learning a lot from this partner, massively giant company, called IBM. So without further ado, Bob Picciano. >> Bob Picciano: Thanks, >> Speaker 1: Thank you so much, Bob, for being here. 
I really appreciate your presence here- >> Bob Picciano: My pleasure! >> Speaker 1: And for those of you who actually don't know Bob, Bob is a Senior VP and General Manager at IBM, and is all things cognitive and obviously- >> Speaker 1: IBM is all things cognitive. Obviously, I learn a lot from a lot of leaders that have spent decades really looking at digital disruption. >> Bob: Did you just call me old? >> Speaker 1: No. (laughing) I want to talk about experience and talking about the meaning of history, because I love history, actually, you know, and I don't want to make you look old actually, you're too young right now. When you talk about digital disruption, we look at ourselves and say, "Look we are not extremely invisible, we are invisible, but we have not made something as invisible as the public clouds itself." And hence as I. But what's digital disruption mean for IBM itself? Now, obviously a lot of hardware is being digitized into software and cloud services. >> Bob: Yep. >> Speaker 1: What does it mean for IBM itself? >> Bob: Yeah, if you allow me to take a step back for a moment, I think there is some good foundational understanding that'll come from a particular point of view. And, you talked about it with the number of these dimensions that are affecting the way businesses need to consider their competitiveness. How they offer their capabilities into the market place. And as you reflected upon IBM, you know, we've had decades of involvement in information technology. And there's a big disruption going on in the information technology space. But it's what I call an accretive disruption. It's a disruption that can add value. If you were to take a step back and look at that digital trajectory at IBM you'd see our involvement with information technology in a space where it was all oriented around adding value and capability to how organizations managed inscale processes. Thinking about the way they were going to represent their businesses in a digital form. We came to call them applications. But it was how do you open an account, how do you process a claim, how do you transfer money, how do you hire an employee? All the policies of a company, the way the people used to do it mechanically, became digital representations. And that foundation of the digital business process is something that IBM helped define. We invented the role of the CIO to help really sponsor and enter in this notion that businesses could re represent themselves in a digital way and that allowed them to scale predictably with the qualities of their brand, from local operations, to regional operations, to international operations, and show up the same way. And, that added a lot of value to business for many decades. And we thrived. Many companies, SAP all thrived during that span. But now we're in a new space where the value of information technology is hitting a new inflection point. Which is not about how you scale process, but how you scale insight, and how you scale wisdom, and how you scale knowledge and learning from those operational systems and the data that's in those operational systems. >> Speaker 1: How's it different from 1993? We're talking about disruption. There was a time when IBM reinvented itself, 20-25 years ago. >> Bob: Right. >> Speaker 1: And you said it's bigger than 25 years ago. Tell us more. >> Bob: You know, it gets down. 
Everything we know about that process space right down to the very foundation, the very architecture of the CPU itself and the computer architecture, the von Neumann architecture, was all optimized on those relatively static scaled business processes. When you move into the notion where you're going to scale insight, scale knowledge, you enter the era that we call the cognitive era, or the era of intelligence. The algorithms are very different. You know the data semantically doesn't integrate well across those traditional process based pools and reformation. So, new capabilities like deep learning, machine learning, the whole field of artificial intelligence, allows us to reach into that data. Much of it unstructured, much of it dark, because it hasn't been indexed and brought into the space where it is directly affecting decision making processes in a business. And you have to be able to apply that capability to those business processes. You have to rethink the computer, the circuitry itself. You have to think about how the infrastructure is designed and organized, the network that is required to do that, the experience of the applications as you talked about have to be very natural, very engaging. So IBM does all of those things. So as a function of our transformation that we're on now, is that we've had to reach back, all the way back from rethinking the CPU, and what we dedicate our time and attention to. To our services organization, which is over 130,000 people on the consulting side helping organizations add digital intelligence to this notion of a digital business. Because, the two things are really a confluence of what will make this vision successful. >> Speaker 1: It looks like massive amounts of change for half a million people who work with the company. >> Bob: That's right. >> Speaker 1: I'm sure there are a lot of large customers out here, who will also read into this and say, "If IBM feels disrupted ... >> Bob: Uh hm >> Speaker 1: How can we actually stay not vulnerable? Actually there is massive amounts of change around their own competitive landscape as well. >> Bob: Look, I think every company should feel vulnerable right. If you're at this age, this cognitive era, the age of digital intelligence, and you're not making a move into being able to exploit the capabilities of cognition into the business process. You are vulnerable. If you're at that intersection, and your competitor is passing through it, and you're not taking action to be able to deploy cognitive infrastructure in conjunction with the business processes. You're going to have a hard time keeping up, because it's about using the machines to do the training to augment the intelligence of our employees of our professionals. Whether that's a lawyer, or a doctor, an educator or whether that's somebody in a business function, who's trying to make a critical business decision about risk or about opportunity. >> Speaker 1: Interesting, very interesting. You used the word cognitive infrastructure. >> Bob: Uh hm >> Speaker 1: There's obviously computer infrastructure, data infrastructure, storage infrastructure, network infrastructure, security infrastructure, and the core of cognition has to be infrastructure as well. >> Bob: Right >> Speaker 1: Which is one of the two things that the two companies are working together on. Tell us more about the collaboration that we are actually doing. 
>> Bob: We are so excited about our opportunity to add value in this space, so we do think very differently about the cognitive infrastructure that's required for this next generation of computing. You know, I mentioned the original CPU was built for very deterministic, very finite operations; large precision floating point capabilities to be able to accurately calculate the exact balance, the exact amount of a transfer. When you're working in the field of AI and cognition, you actually want variable precision. Right. The data is very sparse, as opposed to the way that deterministic or stochastic operations work, which is very dense or very structured. So the algorithms are redefining the processes that the circuitry actually has to run. About five years ago, we dedicated a huge effort to rethink everything about the chip and what we made, to facilitate an orchestra of participation to solve that problem. We all know the GPU has a great benefit for deep learning. But the GPU in many cases, in many architectures, specifically Intel architectures, is dramatically confined by a very small amount of IO bandwidth that Intel allows to go on and off the chip. At IBM, we looked at all 686 or so square millimeters of our chip and said how do we reuse that square area to open up that IO bandwidth? So the innovation of a GPU or an FPGA could really be utilized to its maximum extent. And we could be an orchestrator of all of the diverse compute that's going to be necessary for AI to really compel these new capabilities. >> Speaker 1: It's interesting that you mentioned the fact that, you know, POWER chips have been redefined for the cognitive era. >> Bob: Right, for Linux for the cognitive era. >> Speaker 1: Exactly, and now the question is how do you make it simple to use as well? How do you bring simplicity, which is where ... >> Bob: That's why we're so thrilled with our partnership. Because you talked about the why of Nutanix. And it really is about that empowerment. Doing what's natural. You talked about the benefits of Calm and being able to really create that liberation of an information technology professional, whether it's in operations or in development. Having the freedom of action to make good decisions about defining the infrastructure and deploying that infrastructure and not having to second guess the physical limitations of what they're going to have to be dealing with. >> Speaker 1: That's why I feel really excited about the fact that you have the power of software, to really meld the two farms together. The Intel farm and the POWER farm come together. And we have some interesting use cases that our CIO Randy Phiffer is also really exploring, such as how can a POWER farm serve as a storage farm for our Intel farm. >> Bob: Sure. >> Speaker 1: It can serve files and blocks and things like that. >> Bob: Any data intensive application. We have seen massive growth in our Linux business; now, for our business, Linux is 20% of the revenue of our POWER systems. You know, we started enabling native Linux distributions, little-endian ones, on top of the POWER capabilities just a few years ago, and it's rocketed. And the reason for that is, for any data intensive application like a database, a NoSQL database or a structured database, Hadoop in the unstructured space, they typically run about three to four times better price performance on top of Linux on POWER than they will on top of an Intel alternative. >> Speaker 1: Fascinating.
>> Bob: So all of these applications that we're talking about either create or consume a lot of data, have to manage a lot of flexibility in that space, and power is a tremendous architecture for that. And you mentioned also the cohabitation, if you will, between intel and power. What we want is that optionality, for you to utilize those benefits of the 3X better price performance where they apply and utilize the commodity base where it applies. So you get the cost benefits in that space and the depth and capability in the space for power. >> Speaker 1: Your tongue in cheek remark about commodity intel is not lost on people actually. But tell us about... >> Speaker 1: Intel is not lost on people actually. Tell us about ... Obviously we digitized Linux 10, 15 years ago with [inaudible 00:40:07]. Have you tried to talk about digitizing AIX? That is the core of IBM's business for the last 20, 25, 30 years. >> Bob: Again, it's about this ability to compliment and extend the investments that businesses have made during their previous generations of decision making. This industry loves to talk about shifts. We talked about this earlier. That was old, this is new. That was hard, this is easy. It's not about shift, it's about using the inflection point, the new capability to extend what you already have to make it better. And that's one thing that I must compliment you, and the entire Nutanix organization. It's really empowering those applications as a catalog to be deployed, managed, and integrated in a new way, and to have seamless interoperability into the cloud. We see the AIX workload just having that same benefit for those businesses. And there are many, many 10's of thousands around the world that are critically dependent on every element of their daily operations and productivity of that operating platform. But to introduce that into that network effect as well. >> Speaker 1: Yeah. I think we're looking forward to how we bring the same cloud experience on AIX as well because as a company it keeps us honest when we don't scoff at legacy. We look at these applications the last 10, 15, 20 years and say, "Can we bring them into the new world as well?" >> Bob: Right. >> Speaker 1: That's what design is all about. >> Bob: Right. >> Speaker 1: That's what Apple did with musics. We'll take an old world thing and make it really new world. >> Bob: Right. >> Speaker 1: The way we consume things. >> Bob: That governance. The capability to help protect against the bad actors, the nefarious entropy players, as you will. That's what it's all about. That's really what it takes to do this for the enterprise. It's okay, and possibly easier to do it in smaller islands of containment, but when you think about bringing these class of capabilities into an enterprise, and really helping an organization drive both the flexibility and empowerment benefits of that, but really be able to depend upon it for international operations. You need that level of support. You need that level of capability. >> Speaker 1: Awesome. Thank you so much Bob. Really appreciate you coming. [crosstalk 00:42:14] Look forward to your [crosstalk 00:42:14]. >> Bob: Cheers. Thank you. >> Speaker 1: Thanks again for all of you. I know that people are sitting all the way up there as well, which is remarkable. I hope you can actually see some of the things that Sunil and the team will actually bring about, talk about live demos. We do real stuff here, which is truly live. 
I think one of the requests that I have is help us help you navigate the digital disruption that's upon you and your competitive landscape that's around you that's really creating that disruption. Thank you again for being here, and welcome again to Acropolis. >> Speaker 3: Ladies and gentlemen, please welcome Chief Product and Development Officer, Nutanix Sunil Potti. >> Sunil Potti: Okay, so I'm going to just jump right in because I know a bunch of you guys are here to see the product as well. We are a lot of demos lined up for you guys, and we'll try to mix in the slides, and the demos as well. Here's just an example of the things I always bring up in these conferences to look around, and say in the last few months, are we making progress in simplifying infrastructure? You guys have heard this again and again, this has been our mantra from the beginning, that the hotter things get, the more differentiated a company like Nutanix can be if we can make things simple, or keep things simple. Even though I like this a lot, we found something a little bit more interesting, I thought, by our European marketing team. If you guys need these tea bags, which you will need pretty soon. It's a new tagline for the company, not really. I thought it was apropos. But before I get into the product and the demos, to give you an idea. Every time I go to an event you find ways to memorialize the event. You meet people, you build relationships, you see something new. Last night, nothing to do with the product, I sat beside someone. It was a customer event. I had no idea who I was sitting beside. He was a speaker. How many of you guys know him, by the way? Sir Ranulph Fiennes. Few hands. Good for you. I had no idea who I was sitting beside. I said, "Oh, somebody called Sir. I should be respectful." It's kind of hard for me to be respectful, but I tried. He says, "No, I didn't do anything in the sense. My grandfather was knighted about 100 years ago because he was the governor of Antigua. And when he dies, his son becomes." And apparently Sir Ranulph's dad also died in the war, and so that's how he is a sir. But then I started looking it up because he's obviously getting ready to present. And the background for him is, in my opinion, even though the term goes he's the World's Greatest Living Explorer. I would have actually called it the World's Number One Stag, and I'll tell you why. Really, you should go look it up. So this guy, at the age of 21, gets admitted to Special Forces. If you're from the UK, this is as good as it gets, SAS. Six, seven years into it, he rebels, helps out his local partner because he doesn't like a movie who's building a dam inside this pretty village. And he goes and blows up a dam, and he's thrown out of that Special Forces. Obviously he's in demolitions. Goes all the way. This is the '60's, by the way. Remember he's 74 right now. The '60's he goes to Oman, all by himself, as the only guy, only white guy there. And then around the '70's, he starts truly exploring, truly exploring. And this is where he becomes really, really famous. You have to go see this in real life, when he sees these videos to really appreciate the impact of this guy. All by himself, he's gone across the world. He's actually gone across Antarctica. Now he tells me that Antarctica is the size of China and India put together, and he was prepared for -50 to 60 degrees, and obviously he got -130 degrees. Again, you have to see the videos, see his frostbite. Two of his fingers are cut off, by the way. 
He hacksawed them himself. True story. And then as he, obviously, aged, his body couldn't keep up with him, but his will kept up with him. So after a recent heart attack, he actually ran seven marathons. But most importantly, he was telling me this story, at 65 he wanted to do something different because his body was letting him down. He said, "Let me do something easy." So he climbed Mount Everest. My point being, what is this related to Nutanix? Is that if Nutanix is a company, without technology, allows to spend more time on life, then we've accomplished a piece of our vision. So keep that in mind. Keep that in mind. Now comes the boring part, which is the product. The why, what, how of Nutanix. Neeris talked about this. We have two acts in this company. Invisible Infrastructure was what we started off. You heard us talk about it. How did we do it? Using one-click technologies by converging infrastructure, computer storage, virtualization, et cetera, et cetera. What we are now about is about changing the game. Saying that just like we'd applicated what powers Google and Amazon inside the data center, could we now make them all invisible? Whether it be inside or outside, could we now make clouds invisible? Clouds could be made invisible by a new level of convergence, not about computer storage, but converging public and private, converging CAPEX and OPEX, converging consumption models. And there, beyond our core products, Acropolis and Prism, are these new products. As you know, we have this core thesis, right? The core thesis says what? Predictable workloads will stay inside the data center, elastic workloads will go outside, as long as the experience on both sides is the same. So if you can genuinely have a cloud-like experience delivered inside a data center, then that's the right a- >> Speaker 1: Genuinely have a cloud like experience developed inside the data center. And that's the right answer of predictable workloads. Absolutely the answer of elastic workloads, doesn't matter whether security or compliance. Eventually a public cloud will have a data center right beside your region, whether through local partner or a top three cloud partner. And you should use it as your public cloud of choice. And so, our goal is to ensure that those two worlds are converged. And that's what Calm does, and we'll talk about that. But at the same time, what we found in late 2015, we had a bunch of customers come to us and said "Look, I love this, I love the fact that you're going to converge public and private and all that good stuff. But I have these environments and these apps that I want to be delivered as a service but I want the same operational tooling. I don't want to have two different environments but I don't want to manage my data centers. Especially my secondary data centers, DR data centers." And that's why we created Xi, right? And you'll hear a lot more about this, obviously it's going to start off in the U.S but very rapidly launch in Europe, APJ globally in the next 9-12 months. And so we'll spend some quality time on those products as well today. So, from the journey that we're at, we're starting with the score cloud that essentially says "Look, your public and private needs to be the same" We call that the first instantiation of your cloud architectures and we're essentially as a company, want to build this enterprise cloud operating system as a fabric across public and private. But that's just the starting point. 
The starting point evolves to this core architecture, where we believe the cloud is being dispersed. Just like you have a public and a private cloud in the core data centers and so forth, you'll need a similar experience inside your remote office branch office, inside your DR data centers, inside your branches, and it won't stop there. It'll go all the way to the edge. And we're already seeing this, right? Not just in the army, where forward operating bases in Afghanistan have a three node cluster sitting inside a tent. We're seeing this in a variety of enterprise scenarios. And here's an example. So, here's a customer, a global oil and gas company, that has a couple of primary data centers running Nutanix, uses GCP as a core public cloud platform, has a whole bunch of remote offices, but also has these interesting new edge locations in the form of these small, medium, large size rigs. And today, they're in the process of building a next generation cloud architecture that's completely dispersed. They're using one node, coming out in version 5.5 with Nutanix. They're going to use two nodes, they're going to use three nodes, multi-cluster architectures. Day one, they're going to centrally manage it using Prism, with one click upgrades, right? And then on top of that, they're also now provisioning, using Calm, purpose built apps for the various locations. So, for example, there will be a rig control app at the edge, there's an exploration data lake in Google and so forth. My point being that increasingly this architecture that we're talking about is happening in real time. It's no longer just an existing virtualized data center that's being replatformed to look like a private cloud and so forth, or a hybrid cloud. The fact is that this move into the multi cloud era is getting accelerated: the more someone consumes AWS, GCP or any public cloud, the more they're accelerating their internal transformation to this multi cloud architecture. And so that's what we're going to talk about today, is this construct of ONE OS and ONE Click, and when you think about it, every company has a standard stack. So, this is the only slide you're going to see from me today that's a stack, okay? And if you look at the new release coming out, version 5.5, it's coming out imminently, the easiest way to say it is that it's got a ton of functionality. We've jammed as much as we can onto one slide and then built a product, basically, okay? But I would encourage you guys to check out the release, it's coming out shortly. And we could go into each and every feature here, we'd be spending a lot of time, but the way that we look at building Nutanix products, as many of you know, is not a feature at a time. It's an experience at a time. And so, when you really look at Nutanix using a lateral view, and that's how we approach problems with our customers and partners, we think about it as a life cycle, all the way from learning to using, operating, and then getting support and experiences. And today, we're going to go through each of these stages with you. And who better to talk about it than our local version of an architect, Steven Poitras, please come up on stage. I don't know where you are, Steven, come on up. You tucked your shirt in? >> Speaker 2: Just for you guys today. >> Speaker 1: Okay. Alright. He's sort of putting on his weight. I know you used a couple of tight buckles there. But, okay, so Steven, I know we're looking for the demo here.
So, what we're going to do is, the first step most of you guys know this, is we've been quite successful with CE, it's been a great product. How many of you guys like CE? Come on. Alright. I know you had a hard time downloading it yesterday apparently, there's a bunch of guys had a hard time downloading it. But it's been a great way for us not just to get you guys to experience it, there's more than 25,000 downloads and so forth. But it's also a great way for us to see new features like IEME and so forth. So, keep an eye on CE because we're going to if anything, explode the way that we actually use as a way to get new features out in the next 12 months. Now, one thing beyond CE that we did, and this was something that we did about ... It took us about 12 months to get it out. While people were using CE to learn a lot, a lot of customers were actually getting into full blown competitive evals, right? Especially with hit CI being so popular and so forth. So, we came up with our own version called X-Ray. >> Speaker 2: Yup. >> Speaker 1: What does X-Ray do before we show it? >> Speaker 2: Yeah. Absolutely. So, if we think about back in the day we were really the only ACI platform out there on the market. Now there are a few others. So, to basically enable the customer to objectively test these, we came out with X-Ray. And rather than talking about the slide let's go ahead and take a look. Okay, I think it's ready. Perfect. So, here's our X-Ray user interface. And essentially what you do is you specify your targets. So, in this case we have a Nutanix 80150 as well as some of our competitors products which we've actually tested. Now we can see on the left hand side here we see a series of tests. So, what we do is we go through and specify certain workloads like OLTP workloads, database colocation, and while we do that we actually inject certain test cases or scenarios. So, this can be snapshot or component failures. Now one of the key things is having the ability to test these against each other. So, what we see here is we're actually taking a OLTP workload where we're running two virtual machines, and then we can see the IOPS OLTP VM's are actually performing here on the left hand side. Now as we're actually go through this test we perform a series of snapshots, which are identified by these red lines here. Now as you can see, the Nutanix platform, which is shown by this blue line, is purely consistent as we go through this test. However, our competitor's product actually degrades performance overtime as these snapshots are taken. >> Speaker 1: Gotcha. And some of these tests by the way are just not about failure or benchmarking, right? It's a variety of tests that we have that makes real life production workloads. So, every couple of months we actually look at our production workloads out there, subset those two cases and put it into X-Ray. So, X-Ray's one of those that has been more recently announced into the public. But it's already gotten a lot of update. I would strongly encourage you, even if you an existing Nutanix customer. It's a great way to keep us honest, it's a great way for you to actually expand your usage of Nutanix by putting a lot of these real life tests into production, and as and when you look at new alternatives as well, there'll be certain situations that we don't do as well and that's a great way to give us feedback on it. 
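X-Ray itself is a packaged product, but the comparison it automates boils down to a simple question: how steady does a platform's OLTP throughput stay while snapshots and failures are injected? The short Python sketch below is purely illustrative, with made-up numbers and no X-Ray APIs; it just scores two IOPS traces by their coefficient of variation, the way you might eyeball the flat line versus the degrading one in the demo.

# Illustrative only: score how steady each platform's IOPS trace stays
# while snapshots are being taken. Data and figures are invented.
from statistics import mean, stdev

def variation(iops):
    """Coefficient of variation: lower means steadier performance."""
    avg = mean(iops)
    return stdev(iops) / avg if avg else float("inf")

platform_a = [9800, 9750, 9820, 9790, 9760, 9810, 9780, 9800]  # stays flat
platform_b = [9800, 9300, 8100, 7600, 6900, 6400, 6000, 5600]  # degrades

for name, trace in (("platform_a", platform_a), ("platform_b", platform_b)):
    print(f"{name}: mean={mean(trace):.0f} IOPS, variation={variation(trace):.3f}")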
And so, X-Ray is there. The other one, which is more recent by the way, is the fact that most of you have spent many days if not weeks, after you've chosen Nutanix, moving non-Nutanix workloads, i.e. VMware on three tier architectures, to AHV on Nutanix. And to do that, we took a hard look and came out with a new product called Xtract. >> Steven: Yeah. So essentially, if we think about what Nutanix has done for the data center, it really enables that iPhone like experience, really bringing simplicity and intuitiveness to the data center. Now what we wanted to do is to provide that same experience for migrating existing workloads to us. So, with Xtract, essentially what we do is we scan your existing environment, we create a design spec, and we handle the migration process as well as the cut over. Now, let's go ahead and take a look at our Xtract user interface here. What we can see is we have a source environment. In this case, this is a vCenter environment. This can be any vCenter, whether it's traditional three tier or hyperconverged. We also see our Nutanix target environments. Essentially, these are our AHV target clusters where we're going to be migrating the data and performing the cut over to. >> Speaker 2: Gotcha. Steven: The first thing that we do here is we go ahead and create a new migration plan. Here, I'm just going to specify this as DB Wave 2. I'll click okay. What I'm doing here is I'm selecting my target Nutanix cluster, as well as my target Nutanix container. Once I do that, I'll click next. Now in this case, we actually like to do it big. We're actually going to migrate some production virtual machines over to this target environment. Here, I'm going to select a few Windows instances, which are in our database cluster. I'll click next. At this point, essentially what's occurring is it's going through and taking a look at these virtual machines as well as the target environment. It looks at the resources to ensure that we actually have ample capacity to facilitate the workload. The next thing we'll do is go ahead and type in our credentials here. This is actually going to be used for logging into the virtual machines. We can do the device driver installation, as well as get any static IP configuration. We'll specify our network mapping. Then from there, we'll click next. What we'll do is we'll actually save and start. This will go through and create the migration plan. It'll do some analysis on these virtual machines to ensure that we can actually log in before we actually start migrating data. Here we have a migration which is in progress. We can see we have a few virtual machines, obviously some Linux, some Windows here. We've cut over a few. What we do to actually cut over these VMs is go ahead and select the VMs- Speaker 2: This is the actual task of doing the final stage of cut over. Steven: Yeah, exactly. That's one of the nice things. Essentially, we can migrate the data whenever we want. We actually hook into the VADP APIs to do this. Then every 10 minutes, we send over a delta to sync the data. Speaker 2: Gotcha, gotcha. That's how one click migration can now be possible. This is something that, if you guys haven't used it, has been out in the wild just for a month or so. It's been probably one of our bestselling, because it's free, bestselling features of the recent product release.
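The mechanics Steven describes, seed the data once, then ship a delta every 10 minutes until the operator triggers the cut over, can be pictured with a few lines of Python. Everything below is a hypothetical sketch: take_full_copy, sync_delta and cut_over are stand-in functions, not Xtract or VADP calls.

import time

SYNC_INTERVAL_SECONDS = 600  # "every 10 minutes, we send over a delta"

# Stand-in stubs so the control flow is concrete; not real Xtract/VADP calls.
def take_full_copy(vm): print(f"seeding {vm}")
def sync_delta(vm):     print(f"delta sync for {vm}")
def cut_over(vm):       print(f"cutting {vm} over to the AHV target")

def migrate_vm(vm, ready_to_cut_over):
    """Seed once, keep syncing deltas, cut over when the operator says so."""
    take_full_copy(vm)
    while not ready_to_cut_over():
        time.sleep(SYNC_INTERVAL_SECONDS)
        sync_delta(vm)
    sync_delta(vm)   # one final delta right before the cut over
    cut_over(vm)

# Example: cut over as soon as the initial seed completes.
migrate_vm("db-wave-2-vm01", ready_to_cut_over=lambda: True)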
I've had customers come to me and say, "Look, there are situations where its taken us weeks to move data." That is now minutes from the operator perspective. Forget where the director, or the VP, it's the line architecture and operator that really loves these tools, which is essentially the core of Nutanix. That's one of our core things, is to make sure that if we can keep the engineer and the architect truly happy, then everything else will be fine for us, right? That's extract. Then we have a lot of things, right? We've done the usual things, there's a tunnel functionality on day zero, day one, day two, kind of capabilities. Why don't we start with something around Prism Central, now that we can do one click PC installs? We can do PC scale outs, we can go from managing thousands of VMS, tens of thousands of VMS, while doing all the one click operations, right? Steven: Yep. Speaker 2: Why don't we take a quick look at what's new in Prism Central? Steven: Yep. Absolutely. Here, we can see our Prism element interface. As you mentioned, one of the key things we added here was the ability to deploy Prism Central very simply just with a few clicks. We'll actually go through a distributed PC scale of deployment here. Here, we're actually going to deploy, as this is a new instance. We're going to select our 5.5 version. In this case, we're going to deploy a scale out Prism Central cluster. Obviously, availability and up-time's very critical for us, as we're mainly distributed systems. In this case we're going to deploy a scale-out PC cluster. Here we'll select our number of PC virtual machines. Based upon the number of VMS, we can actually select our size of VM that we'd deploy. If we want to deploy 25K's report, we can do that as well. Speaker 2: Basically a thousand to tens of thousands of VM's are possible now. Steven: Yep. That's a nice thing is you can start small, and then scale out as necessary. We'll select our PC network. Go ahead and input our IP address. Now, we'll go to deploy. Now, here we can see it's actually kicked off the deployment, so it'll go provision these virtual machines to apply the configuration. In a few minutes, we'll be up and running. Speaker 2: Right. While Steven's doing that, one of the things that we've obviously invested in is a ton of making VM operations invisible. Now with Calm's, what we've done is to up level that abstraction. Two applications. At the end of the day, more and more ... when you go to AWS, when you go to GCP, you go to [inaudible 01:04:56], right? The level of abstractions now at an app level, it's cloud formations, and so forth. Essentially, what Calm's able to do is to give you this marketplace that you can go in and self-service [inaudible 01:05:05], create this internal cloud like environment for your end users, whether it be business owners, technology users to self-serve themselves. The process is pretty straightforward. You, as an operator, or an architect, or [inaudible 01:05:16] create these blueprints. Consumers within the enterprise, whether they be self-service users, whether they'll be end business users, are able to consume them for a simple marketplace, and deploy them on whether it be a private cloud using Nutanix, or public clouds using anything with public choices. Then, as a single frame of glass, as operators you're doing conversed operations, at an application centric level between [inaudible 01:05:41] across any of these clouds. It's this combination of producer, consumer, operator in a curated sense. 
Much like an iPhone with an app store. It's the core construct that we're trying to get with Calm to up level the abstraction interface across multiple clouds. Maybe we'll do a quick demo of this, and then get into the rest of the stuff, right? Steven: Sure. Let's check it out. Here we have our Prism Central user interface. We can see we have two Nutanix clusters, our cloudy04 as well as our Power8 cluster. One of the key things here that we've added is this apps tab. I'm clicking on this apps tab, we can see that we have a few [inaudible 01:06:19] solutions, we have a TensorFlow solution, a [inaudible 01:06:22] et cetera. The nice thing about this is, this is essentially a marketplace where vendors as well as developers could produce these blueprints for consumption by the public. Now, let's actually go ahead and deploy one of these blueprints. Here we have a HR employment engagement app. We can see we have three different tiers of services part of this. Speaker 2: You need a lot of engagement at HR, you know that. Okay, keep going. Steven: Then the next thing we'll do here is we'll go and click on. Based upon this, we'll specify our blueprint name, HR app. The nice thing when I'm deploying is I can actually put in back doors. We'll click clone. Now what we can see here is our blueprint editor. As a developer, I could actually go make modifications, or even as an in-user given the simple intuitive user interface. Speaker 2: This is the consumers side right here, but it's also the [inaudible 01:07:11]. Steven: Yep, absolutely. Yeah, if I wanted to make any modifications, I could select the tier, I could scale out the number of instances, I could modify the packages. Then to actually deploy, all I do is click launch, specify HR app, and click create. Speaker 2: Awesome. Again, this is coming in 5.5. There's one other feature, by the way, that is coming in 5.5 that's surrounding Calm, and Prism Pro, and everything else. That seems to be a much awaited feature for us. What was that? Steven: Yeah. Obviously when we think about multi-tenant, multi-cloud role based access control is a very critical piece of that. Obviously within the organization, we're going to have multiple business groups, multiple units. Our back's a very critical piece. Now, if we go over here to our projects, we can see in this scenario we just have a single project. What we've added is if you want to specify certain roles, in this case we're going to add our good friend John Doe. We can add them, it could be a user or group, but then we specify their role. We can give a developer the ability to edit and create these blueprints, or consumer the ability to actually provision based upon. Speaker 2: Gotcha. Basically in 5.5, you'll have role based access control now in Prism and Calm burned into that, that I believe it'll support custom role shortly after. Steven: Yep, okay. Speaker 2: Good stuff, good stuff. I think this is where the Nutanix guys are supposed to clap, by the way, so that the rest of the guys can clap. Steven: Thank you, thank you. Okay. What do we have? Speaker 2: We have day one stuff, obviously there's a ton of stuff that's coming in core data path capabilities that most of you guys use. One of the most popular things is synchronous replication, especially in Europe. Everybody wants to do [Metro 01:08:49] for whatever reason. But we've got something new, something even more enhanced than Metro, right? Steven: Yep. Speaker 2: Do you want to talk a little bit about it? Steven: Yeah, let's talk about it. 
If we think about what we had previously, we started out with asynchronous replication. That's essentially going to be your higher RPO. Then we moved into Metro cluster, which was RPO zero. Those are the two ends of the gamut. What we did is we introduced near-synchronous replication, which really gives you the best of both worlds, where you have very, very low RPOs but zero impact on mainstream performance. Sunil: That's it. Let's show something. Steven: Yeah, yeah. Let's do it. Here, we're back at our Prism Element interface. We'll go over here. At this point we've provisioned our HR app; the next thing we need to do is to protect that data. Let's go here to protection domain. We'll create a new PD for our HR app. Sunil: You clearly love HR. Steven: Spent a lot of time there. Sunil: Yeah, yeah, yeah. Steven: Here, you can see we have our production LAMP DB VM. We'll go ahead and protect that entity. We can see that's protected. The next thing we'll do is create a schedule. Now, what would you say would be a good schedule we should actually shoot for? Sunil: I don't know, 15 minutes? Steven: 15 minutes is not bad, but I think the people here deserve much better than that, so I say let's shoot for ... what about 15 seconds? Sunil: Yeah. They definitely need a bathroom break, so let's do 15 seconds. Steven: Alright, let's do 15 seconds. Sunil: Okay, sounds good. Steven: K. Then we'll select our retention policy and the remote cluster to replicate to, which in this case is wedge. And we'll go ahead and create the schedule here. Now at this point we can see our protection domain. Let's go ahead and look at our entities. We can see our database virtual machine. We can see our 15 second schedule, our local snapshots, as well as, we'll start seeing our remote snapshots. Now essentially what occurs is we take two very quick snapshots to essentially seed the initial data, and then based upon that we'll start taking our continuous 15 second snaps. Sunil: 15 second snaps, and obviously near sync has less of an impact than synchronous, right? From an architectural perspective. Steven: Yeah, and that's the nice thing: essentially within the cluster it's truly pure synchronous, but externally it's just a lagged async. Sunil: Gotcha. So there you see some 15 second snapshots. So near sync is also built into five-five, it's a long-awaited feature. So then, we expand into the rest of the capabilities, I would say, operations. A lot of you guys obviously have started using Prism Pro. Okay, okay, you can clap. You can clap. It's okay. It was a lot of work, by the way, by the core data path team, it was a lot of time. So Prism Pro ... I don't know if you guys know this, Prism Central has now gone from zero percent to more than 50 percent attach on the install base, within 18 months. And normally that's a sign of true usage, and true value being delivered. And so, many things are new in five-five on Prism Pro, starting with the fact that you can do data [inaudible 01:11:49] baselining and alerting, so that you're not capturing a ton of false positives and tons of alerts. We go beyond that, because we have this core machine-learning technology that powers it, which we call X-Fit.
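For readers who script against Prism rather than click through it, the same protection-domain-plus-schedule flow can be driven over the cluster's REST API. The sketch below is hedged: the v2 base path and port are the usual ones for Prism Element, but the exact payload fields and the "SECONDLY" schedule type are assumptions from memory, so treat it as pseudocode and check your cluster's API explorer before relying on it.

# Hedged sketch: protection domain + 15-second NearSync schedule via REST.
# Endpoint payloads are assumptions; verify against the Prism API explorer.
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://prism-element:9440/PrismGateway/services/rest/v2.0"
session = requests.Session()
session.auth = HTTPBasicAuth("admin", "password")   # demo credentials only
session.verify = False                              # lab setting only

def protect_with_nearsync(pd_name, vm_names, remote_site, rpo_seconds=15):
    # 1. Create the protection domain (assumed payload shape).
    session.post(f"{BASE}/protection_domains", json={"value": pd_name})
    # 2. Protect the VMs inside it.
    session.post(f"{BASE}/protection_domains/{pd_name}/protect_vms",
                 json={"names": vm_names})
    # 3. Attach a 15-second schedule replicating to the remote cluster.
    session.post(f"{BASE}/protection_domains/{pd_name}/schedules",
                 json={"pd_name": pd_name,
                       "type": "SECONDLY",           # assumed NearSync enum
                       "every_nth": rpo_seconds,
                       "remote_site_names": [remote_site]})

protect_with_nearsync("HR-app-PD", ["prod-lamp-db"], remote_site="wedge")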
And, what we've done is we've used that as a foundation now for pretty much all kinds of operations benefits such as auto RCA, where you're able to actually map to particular [inaudible 01:12:12] crosses back to who's actually causing it whether it's the network, a computer, and so forth. But then the last thing that we've also done in five-five now that's quite different shading, is the fact that you can now have a lot of these one-click recommendations and remediations, such as right-sizing, the fact that you can actually move around [inaudible 01:12:28] VMs, constrained VMs, and so forth. So, I now we've packed a lot of functionality in Prism Pro, so why don't we spend a couple of minutes quickly giving a sneak peak into a few of those things. Speaker 2: Yep, definitely. So here we're back at our Prism Central interface and one of the things we've added here, if we take a look at one of our clusters, we can see we have this new anomalies portion here. So, let's go ahead and select that and hop into this. Now let's click on one of these anomaly events. Now, essentially what the system does is we monitor all the entities and everything running within the system, and then based upon that, we can actually determine what we expect the band of values for these metrics to be. So in this scenario, we can see we have a CPU usage anomaly event. So, normal time, we expect this to be right around 86 to 100 percent utilization, but at this point we can see this is drastically dropped from 99 percent to near zero. So, this might be a point as an administrator that I want to go check out this virtual machine, ensure that certain services and applications are still up and running. Speaker 1: Gotcha, and then also it changes the baseline based on- Speaker 2: Yep. Yeah, so essentially we apply machine-learning techniques to this, so the system will dynamically adjust based upon the value adjustment. Speaker 1: Gotcha. What else? Speaker 2: Yep. So the other thing here that we mentioned was capacity planning. So if we go over here, we can take a look at our runway. So in this scenario we have about 30 days worth of runway, which is most constrained by memory. Now, obviously, more nodes is all good for everyone, but we also want to ensure that you get the maximum value on your investment. So here we can actually see a few recommendations. We have 11 overprovision virtual machines. These are essentially VMs which have more resources than are necessary. As well as 19 inactives, so these are dead VMs essentially that haven't been powered on and not utilized. We can also see we have six constrained, as well as one bully. So, constrained VMs are essentially VMs which are requesting more resources than they actually have access to. This could be running at 100 percent CPU utilization, or 100 percent memory, or storage utilization. So we could actually go in and modify these. Speaker 1: Gotcha. So these are all part of the auto remediation capabilities that are now possible? Speaker 2: Yeah. Speaker 1: What else, do you want to take reporting? Speaker 2: Yeah. Yeah, so I know reporting is a very big thing, so if we think about it, we can't rely on an administrator to constantly go into Prism. We need to provide some mechanism to allow them to get emailed reports. So what we've done is we actually autogenerate reports which can be sent via email. So we'll go ahead and add one of these sample reports which was created today. 
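The "anomaly band" idea behind that CPU example (an expected range of roughly 86 to 100 percent, with a sudden drop to near zero getting flagged) is easy to illustrate. The Python below is a toy version of the general technique, not Nutanix's X-Fit models: learn a band from recent samples and flag anything that falls outside it.

# Toy anomaly band: learn an expected range from history, flag outliers.
# Not the X-Fit algorithm, just the general baselining idea.
from statistics import mean, stdev

def band(history, k=3.0):
    mu, sigma = mean(history), stdev(history)
    return mu - k * sigma, mu + k * sigma

def is_anomalous(value, history):
    low, high = band(history)
    return value < low or value > high

cpu_history = [88, 95, 99, 97, 90, 96, 98, 93, 100, 97]  # steady, roughly 86-100%
print(band(cpu_history))               # learned expected range
print(is_anomalous(2, cpu_history))    # drop to near zero -> True
print(is_anomalous(96, cpu_history))   # normal reading    -> False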
And here we can actually get specific detailed information about our cluster without actually having to go into Prism to get this. Speaker 1: And you can customize these reports and all? Speaker 2: Yep. Yeah, if we hop over here and click on our new report, we can actually see a list of views we could add to these reports, and we can mix and match and customize as needed. Speaker 1: Yeah, so that's the operational side. Now we also have new services like AFS which has been quite popular with many of you folks. We've had hundreds of customers already on it live with SMB functionality. You want to show a couple of things that is new in five-five? Speaker 2: Yeah. Yep, definitely. So ... let's wait for my screen here. So one of the key things is if we looked at that runway tab, what we saw is we had over a year's worth of storage capacity. So, what we saw is customers had the requirement for filers, they had some excess storage, so why not actually build a software featured natively into the cluster. And that's essentially what we've done with AFS. So here we can see we have our AFS cluster, and one of the key things is the ability to scale. So, this particular cluster has around 3.1 or 3.16 billion files, which are running on this AFS cluster, as well as around 3,000 active concurrent sessions. Speaker 1: So basically thousands of concurrent sessions with billions of files? Speaker 2: Yeah, and the nice thing with this is this is actually only a four node Nutanix cluster, so as the cluster actually scales, these numbers will actually scale linearly as a function of those nodes. Speaker 1: Gotcha, gotcha. There's got to be one more bullet here on this slide so what's it about? Speaker 2: Yeah so, obviously the initial use case was realistically for home folders as well as user profiles. That was a good start, but it wasn't the only thing. So what we've done is we've actually also introduced important and upcoming release of NFS. So now you can now use NFS to also interface with our [crosstalk 01:16:44]. Speaker 1: NFS coming soon with AFS by the way, it's a big deal. Big deal. So one last thing obviously, as you go operationalize it, we've talked a lot of things on features and functions but one of the cool things that's always been seminal to this company is the fact that we all for really good customer service and support experience. Right now a lot of it is around the product, the people, the support guys, and so forth. So fundamentally to the product we have found ways using Pulse to instrument everything. With Pulse HD that has been allowed for a little bit longer now. We have fine grain [inaudible 01:17:20] around everything that's being done, so if you turn on this functionality you get a lot of information now that we built, we've used when you make a phone call, or an email, and so forth. There's a ton of context now available to support you guys. What we've now done is taken that and are now externalizing it for your own consumption, so that you don't have to necessarily call support. You can log in, look at your entire profile across your own alerts, your own advisories, your own recommendations. You can look at collective intelligence now that's coming soon which is the fact that look, here are 50 other customers just like you. These are the kinds of customers that are using workloads like you, what are their configuration profiles? 
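The "runway" number in that capacity view is, at its simplest, a projection of when a resource runs out given its growth trend. The sketch below uses a straight-line projection with invented figures (the product's own forecasting is more sophisticated than this), just to show where an answer like "30 days of memory, over a year of storage" comes from.

# Back-of-the-envelope runway: days until a resource is exhausted,
# assuming linear growth. Figures are invented for illustration.
def runway_days(capacity, used, daily_growth):
    if daily_growth <= 0:
        return float("inf")
    return max((capacity - used) / daily_growth, 0.0)

resources = {
    "memory_gib":  (4096, 3800, 10.0),   # capacity, used, growth per day
    "storage_tib": (200,  80,   0.3),
}
for name, (cap, used, growth) in resources.items():
    print(f"{name}: ~{runway_days(cap, used, growth):.0f} days of runway")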
Through this centralized customer insights portal you're going to get a lot more insight, not just about your own operations, but also about how everybody else is using it. So let's take a quick look at that upcoming functionality. Speaker 2: Yep. Absolutely. So this is our customer 360 portal. So as [inaudible 01:18:18] mentioned, as a customer I can actually log in here, I can get a high-level overview of my existing environment, my cases, the status of those cases, as well as any relevant announcements. So, here, based upon my cluster version, if there are any updates which are available, I can see that here immediately. And then one of the other things that we've added here is this insights page. So essentially this is information that previously support would leverage to proactively look out at the cluster, but now we've exposed this to you as the customer. So, clicking on this insights tab we can see an overview of our environment, in this case we have three Nutanix clusters, right around 550 virtual machines, and over here what's critical is we can actually see our cases. And one of the nice things about this is these are all autogenerated by the cluster itself, so no human interaction, no manual intervention was required to actually create these alerts. The cluster itself will actually facilitate that, send it over to support, and then support can get back out to you automatically. Speaker 1: K, so look for customer insights coming soon. And obviously that's the full life cycle. One cool thing though that's always been unique to Nutanix was the fact that we had [inaudible 01:19:28] security from day one built-in. And [inaudible 01:19:31] chunk of functionality coming in five-five just around this, because every release we try to insert more and more security capabilities, and the first one is around data. What are we doing? Speaker 2: Yeah, absolutely. So previously we had support for data at rest encryption, but this did have the requirement to leverage self-encrypting drives. These can be very expensive, so what we've done, typical to our fashion, is we've actually built this in natively via software. So, here within Prism Element, I can go to data at rest encryption, and then I can go and edit this configuration here. From here I can add my CSRs, I can specify the KMS server, and leverage native software-based encryption without the requirement of SEDs. Sunil: Awesome. So data at rest encryption [inaudible 01:20:15] coming soon in five five. Now data security is only one element; the other element is around network security, obviously. We've always had this request about what are we doing about networking, and our philosophy has always been simple and clear, right. It is that the problem in networking is not the data plane. The problem in networking is the control plane. As in, if packet loss happens at the top of rack switch, what do we do? If there's a misconfigured port, what do we do? So we've invested a lot in a full blown new network visualization that we'll show you a preview of, that's all new in five five. But then once you can visualize, you can take action, so you can actually, using our network APIs now in five five, auto-provision VLANs on the switch, you can update VIPs on your load balancing pools.
You can update obviously rules on your firewall. And then we've taken that to the next level, which is beyond all that, just let you go to AWS right now, what do you do? You take 100 VM's, you put it in an AWS security group, boom. That's how you get micro segmentation. You don't need to buy expensive products, you don't need to virtualize your network to get micro segmentation. That's what we're doing with five five, is built in one click micro segmentation. That's part of the core product, so why don't we just quickly show that. Okay? Steve: Yeah, let's take a look. So if we think about where we've been so far, we've done the comparison test, we've done a migration over to a Nutanix. We've deployed our new HR app. We've protected it's data, now we need to protect the network's. So one of the things you'll see that's new here is this security policies. What we'll do is we'll actually go ahead and create a new security policy and we'll just say this is HR security policy. We'll specify the application type, which in this case is HR. Sunil: HR of course. Steve: Yep and we can see our app instance is automatically populated, so based upon the number of running instances of that blueprint, that would populate that drop-down. Now we'll go ahead and click next here and what we can see in the middle is essentially those three tiers that composed that app blueprint. Now one of the important things is actually figuring out what's trying to communicate with this within my existing environment. So if I take a look over here on my left hand side, I can essentially see a few things. I can see a Ha Proxy load balancer is trying to communicate with my app here, that's all good. I want to allow that. I can see some sort of monitoring service is trying to communicate with all three of the tiers. That's good as well. Now the last thing I can see here is this IP address which is trying to access my database. Now, that's not designed and that's not supposed to happen, so what we'll do is we'll actually take a look and see what it's doing. Now hopping over to this database virtual machine or the hack VM, what we can see is it's trying to perform a brute force log in attempt to my MySQL database. This is not good. We can see obviously it can connect on the socket, however, it hasn't guessed the right password. In order to lock that down, we'll go back to our policies here and we're going to click deny. Once we've done that, we'll click next and now we'll go to Apply Now. Now we can see our newly created security policy and if we hop back over to this VM, we can now see it's actually timing out and what this means is that it's not able to communicate with that database virtual machine due to micro segmentation actively blocking that request. Sunil: Gotcha and when you go back to the Prism site, essentially what we're saying now is, it's as simple as that, to set up micro segmentation now inside your existing clusters. So that's one click micro segmentation, right. Good stuff. One other thing before we let Steve walk off the stage and then go to the bathroom, but is you guys know Steve, you know he spends a lot time in the gym, you do. Right. He and I share cubes right beside each other by the way just if you ever come to San Jose Nutanix corporate headquarters, you're always welcome. Come to the fourth floor and you'll see Steve and Sunil beside each other, most of the time I'm not in the cube, most of the time he's in the gym. If you go to his cube, you'll see all kinds of stuff. Okay. 
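At the API level, the one-click policy Steve just built amounts to an allow-list for the app's tiers, with everything else, including the brute-forcing host, denied. The Python below is a hedged sketch in the style of Prism Central's v3 API: the network_security_rules endpoint exists, but the exact field names and category values here are assumptions, so treat the payload as illustrative rather than copy-paste ready.

# Hedged sketch of a micro segmentation policy in Prism Central v3 style.
# Field names and category values are assumptions, not a verified payload.
import requests

PC = "https://prism-central:9440/api/nutanix/v3"

policy = {
    "metadata": {"kind": "network_security_rule"},
    "spec": {
        "name": "HR security policy",
        "resources": {
            "app_rule": {
                "action": "APPLY",
                # Protect the VMs tagged with the HR application category.
                "target_group": {"filter": {
                    "type": "CATEGORIES_MATCH_ALL",
                    "params": {"AppType": ["HR"]}}},
                # Only the load balancer and monitoring get in; anything
                # unlisted, such as the brute-forcing host, is denied.
                "inbound_allow_list": [
                    {"filter": {"type": "CATEGORIES_MATCH_ALL",
                                "params": {"AppTier": ["HAProxy"]}}},
                    {"filter": {"type": "CATEGORIES_MATCH_ALL",
                                "params": {"AppType": ["Monitoring"]}}},
                ],
            }
        },
    },
}

resp = requests.post(f"{PC}/network_security_rules", json=policy,
                     auth=("admin", "password"), verify=False)
print(resp.status_code)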
It's true, it's true, but the reason why I brought this up, was Steve recently became a father, his first kid. Oh by the way this is, clicker, this is how his cube looks like by the way but he left his wife and his new born kid to come over here to show us a demo, so give him a round of applause. Thank you, sir. Steve: Cool, thanks, Sunil. That was fun. Sunil: Thank you. Okay, so lots of good stuff. Please try out five five, give us feedback as you always do. A lot of sessions, a lot of details, have fun hopefully for the rest of the day. To talk about how their using Nutanix, you know here's one of our favorite customers and partners. He normally comes with sunglasses, I've asked him that I have to be the best looking guy on stage in my keynotes, so he's going to try to reduce his charm a little bit. Please come on up, Alessandro. Thank you. Alessandro R.: I'm delighted to be here, thank you so much. Sunil: Maybe we can stand here, tell us a little bit about Leonardo. Alessandro R.: About Leonardo, Leonardo is a key actor of the aerospace defense and security systems. Helicopters, aircraft, the fancy systems, the fancy electronics, weapons unfortunately, but it's also a global actor in high technology field. The security information systems division that is the division I belong to, 3,000 people located in Italy and in UK and there's several other countries in Europe and the U.S. $1 billion dollar of revenue. It has a long a deep experience in information technology, communications, automation, logical and physical security, so we have quite a long experience to expand. I'm in charge of the security infrastructure business side. That is devoted to designing, delivering, managing, secure infrastructures services and secure by design solutions and platforms. Sunil: Gotcha. Alessandro R.: That is. Sunil: Gotcha. Some of your focus obviously in recent times has been delivering secure cloud services obviously. Alessandro R.: Yeah, obviously. Sunil: Versus traditional infrastructure, right. How did Nutanix help you in some of that? Alessandro R.: I can tell something about our recent experience about that. At the end of two thousand ... well, not so recent. Sunil: Yeah, yeah. Alessandro R.: At the end of 2014, we realized and understood that we had to move a step forward, a big step and a fast step, otherwise we would drown. At that time, our newly appointed CEO confirmed that the IT would be a core business to Leonardo and had to be developed and grow. So we decided to start our digital transformation journey and decided to do it in a structured and organized way. Having clear in mind our targets. We launched two programs. One analysis program and one deployments programs that were essentially transformation programs. We had to renew ourselves in terms of service models, in terms of organization, in terms of skills to invest upon and in terms of technologies to adopt. We were stacking a certification of technologies that adopted, companies merged in the years before and we have to move forward and to rationalize all these things. So we spent a lot of time analyzing, comparing technologies, and evaluating what would fit to us. We had two main targets. The first one to consolidate and centralize the huge amount of services and infrastructure that were spread over 52 data centers in Italy, for Leonardo itself. The second one, to update our service catalog with a bunch of cloud services, so we decided to update our data centers. 
One of the building blocks of our new data center architecture was Nutanix. We evaluated a lot, we had spent a lot of time in analysis, so that wasn't a bet, but you were quite pioneers at that time. Sunil: Yeah, you took a lot of risk, right, as an Italian company- Alessandro R.: At this time, my colleagues used to say, "Hey, Alessandro, think it over, remember that no CEO has ever been fired for having chosen IBM." I apologize, Bob, but at that time, Nutanix didn't run on [inaudible 01:29:27]. We still have a good bunch of [inaudible 01:29:31] in our data center, so that will be the chance to ... Audience Member: [inaudible 01:29:37] Alessandro R.: So much you must [inaudible 01:29:37] what you announced it. Sunil: So you took a risk and you got into it. Alessandro R.: Yes, we got into it, and we are very satisfied with the results we have reached. Sunil: Gotcha. Alessandro R.: Most of the targets we expected to fulfill have come, and so we are satisfied, but that doesn't mean that we won't go on asking you a big discount ... Sunil: Sure, sure, sure, sure. Alessandro R.: On the price list. Sunil: Sure, sure. So what's next, in terms of, I know you have some interesting stuff that you're thinking of. Alessandro R.: The next, we have to move forward obviously. The name Leonardo is inspired by Leonardo da Vinci; he was a guy that, in terms of innovation and technology innovation, had some good ideas. And so I think that Leonardo, with Nutanix, could go on following an innovation target and following a really mutual ... Sunil: Partnership. Alessandro R.: Useful partnership, yes. We surely want to investigate the micro segmentation technologies you showed a minute ago, because we are looking at that, particularly from the economic point of view ... Sunil: Yeah, the costs and expenses. Alessandro R.: And we have to give an alternative to the technology we are using. We want to use AHV more intensively, again as an alternative to the solution we are using. We are selecting a couple of services, a couple of quite big projects, to build using AHV. Talking of Calm, we are very eager to understand the announcements that you are going to show to all of us, because the solution we are currently using is quite [crosstalk 01:31:30] Sunil: Complicated. Alessandro R.: Complicated, yeah. To move a step of automation, to elaborate and implement [inaudible 01:31:36], you spend 500 hours of manual activities; that's nonsense, so ... Sunil: Manual automation. Alessandro R.: (laughs) Yes. And in the end, we are very interested also in the Prism features, mostly the new features that you ... Sunil: Talked about. Alessandro R.: You showed yesterday in the preview, because every bit of benefit that we receive from the solution in the operations field means a plus for us and a distinctive plus for our customers, so we are very interested in that ... Sunil: Gotcha, gotcha. Thanks for taking the risk, thanks for being a customer and partner. Alessandro R.: It has been a pleasure. Sunil: Appreciate it. Alessandro R.: Bless you, bless you. Sunil: Thank you. So, you know, obviously one OS, one click was one of our core things, but as you can see the tagline doesn't stop there, it also says "any cloud".
So, that's the rest of the presentation right now: what are we doing to fulfill that mission of one OS, one cloud, one click, with one support experience, across any cloud, right? And there, you know, we talked about Calm. Calm is not just an operational experience for your private cloud; as you can see, it's a one-click experience where you can actually up level your apps, set up blueprints, put in SLAs and policies, push them down to either your AWS, GCP or all your [inaudible 01:33:00] environments, and then while on day one you can do one click provisioning, on day two and so forth you will see new capabilities such as one-click migration and mobility seeping into the product. Because that's the end game for Calm, to actually be your cloud autonomy platform, right? So you can choose the right cloud for the right workload. And to talk about how they're building a multi cloud architecture using Nutanix in partnership, it's a great pleasure to introduce my other good Italian friend Daniele, come up on stage please. From Telecom Italia Sparkle. How are you sir? Daniele: Not too bad, thank you. Speaker 1: You want an espresso, cappuccino? Daniele: No, no, later. Speaker 1: You all good? Okay, tell us a little about Sparkle. Daniele: Yeah, Sparkle is a fully owned subsidiary of the Telecom Italia group. Speaker 1: Mm-hmm (affirmative) Daniele: Spun off in 2003 with the mission to develop the wholesale and multinational corporate and enterprise business abroad. Huge network, as you can see, hundreds of thousands of kilometers of fiber optics spread from southeast Asia to Europe to the U.S. Most of it proprietary, part of it realized on submarine cables. Part of them proprietary, part of them bilateral, part of them [inaudible 01:34:21] with other operators. 37 countries in which we have offices around the world, 700 employees, a lean and clean company ... Speaker 1: Wow, just 700 employees for all of this. Daniele: Yep, 1.4 billion in revenues per year, more or less. Speaker 1: Wow, are you a public company? Daniele: No, fully owned by TIM so far. Speaker 1: So, what is your experience with Nutanix so far? Daniele: Well, in a way similar to what Alessandro was describing. To operate such a huge network, as you saw before, and to keep on bringing in revenues from the wholesale market, while trying to turn the business toward the enterprise in a serious way. A couple of years ago the management team realized that we had to go through a serious transformation, not just technological but in terms of the way we build the services for our customers, in terms of how we let our customers feel the Sparkle experience. So, we are moving towards cloud, but we are moving towards cloud with connectivity attached to it, because it's in our core as a provider of telecom services. The paradigm that is driving today is on-demand, is dynamic, and in order to get these things we need to move to software. Most of the network must become invisible, the Nutanix way. So we decided, instead of creating patchworks onto our existing systems, infrastructure, OSS, BSS and network systems, to build a new data center from scratch. And the paradigm behind this new data center, the mantra, was: everything is software defined, everything must be easy to manage, performance and capacity planning, everything must be predictable and everything has to be managed by few people.
Nutanix is at the moment the baseline of this data center for what concern, let's say all the new networking tools, meaning as the end controllers that are taking care of automation and programmability of the network. Lifecycle service orchestrator, network orchestrator, cloud automation and brokerage platform and everything at the moment runs on AHV because we are forcing our vendors to certify their application on AHV. The only stack that is not at the moment AHV based is on a specific cloud platform because there we were really looking for the multi[inaudible 01:37:05]things that you are announcing today. So, we hope to do the migration as soon as possible. Speaker 1: Gotcha, gotcha. And then looking forward you're going to build out some more data center space, expose these services Daniele: Yeah. Speaker 1: For the customers as well as your internal[crosstalk 01:37:21] Daniele: Yeah, basically yes for sure we are going to consolidate, to invest more in the data centers in the markets on where we are leader. Italy, Turkey and Greece we are big data centers for [inaudible 01:37:33] and cloud, but we believe that the cloud with all the issues discussed this morning by Diraj, that our locality, customer proximity ... we think as a global player having more than 120 pops all over the world, which becomes more than 1000 in partnerships, that the pop can easily be transformed in a data center, so that we want to push the customer experience of what we develop in our main data centers closer to them. So, that we can combine traditional infrastructure as a service with the new connectivity services every single[inaudible 01:38:18] possibly everything running. Speaker 1: I mean, it makes sense, I mean I think essentially in some ways to summarize it's the example of an edge cloud where you're pushing a micro-cloud closer to the customers edge. Daniele: Absolutely. Speaker 1: Great stuff man, thank you so much, thank you so much. Daniele: Pleasure, pleasure. Thank you. Speaker 1: So, you know a couple of other things before we get in the next demo is the fact that in addition to Calm from multi-cloud management we have Zai, we talked about for extended enterprise capabilities and something for you guys to quickly understand why we have done this. In a very simple way is if you think about your enterprise data center, clearly you have a bunch of apps there, a bunch of public clouds and when you look at the paradigm you currently deploy traditional apps, we call them mode one apps, SAP, Exchange and so forth on your enterprise. Then you have next generation apps whether it be [inaudible 01:39:11] space, whether it be Doob or whatever you want to call it, lets call them mode two apps right? And when you look at these two types of apps, which are the predominant set, most enterprises have a combination of mode one and mode two apps, most public clouds primarily are focused, initially these days on mode two apps right? And when people talk about app mobility, when people talk about cloud migration, they talk about lift and shift, forklift [inaudible 01:39:41]. And that's a hard problem I mean, it's happening but it's a hard problem and ends up that its just not a one time thing. Once you've forklift, once you move you have different tooling, different operation support experience, different stacks. What if for some of your applications that mattered ... 
Speaker 1: What if, for some of your applications that matter to you, your core enterprise apps, you could retain the same tooling, the same operational experience and so forth? That is what we aim to do with Xi. It is truly making hybrid invisible, which is the next act for this company. It'll take us a few years to really fulfill the vision here, but the idea is that you shouldn't think about the public cloud as a different silo. You should think of it as an extension of your enterprise data centers. And for services such as DR, whether it be dev/test, whether it be backup, and so forth, you can use the same tooling, the same experience, and get a public cloud-like capability without lift and shift, right? So making this lift and shift invisible by, sort of, homogenizing the data plane, the network plane, and the control plane is what we really want to do with Xi. Okay? And we'll show you some more details here. But the simplest way to understand this is to think of it as the iPhone, right? Dheeraj mentioned this a little bit. This is how we built this experience. It uses iOS as the core IP, wrapped up in a great package called the iPhone. But then, a few years into the iPhone era, came iTunes and iCloud. There are no apps, per se; that's fused into iOS. And similarly, think about Xi that way. The more you move VMs into a Nutanix environment, stuff like DR comes built into the fabric. And to give us a sneak peek into a bunch of the Calm and Xi capabilities, let me bring back Binny, who's always a popular guy on stage. Come on up, Binny. I'd be surprised if Binny untucked his shirt. He's always tucking in his shirt. Binny Gill: Okay, yeah. Let's go. Speaker 1: So the first thing is Calm. We want to show how we can actually deploy apps not just across private and public clouds, but across multiple public clouds as well. Right? Binny Gill: Yeah, basically, Calm is about simplifying the disparity between the various public clouds out there. So it's very important for us to be able to take one application blueprint and quickly deploy it in whatever cloud you choose, without understanding how one cloud is different from another. Speaker 1: Yeah, that's the goal. Binny Gill: So here, as you can see, I have the marketplace. And by the way, this marketplace has great partner community interest, and all sorts of apps come up here. Let me take a sample app here, Hadoop, and click launch. And now, where do you want me to deploy? Speaker 1: Let's start with GCP. Binny Gill: GCP, okay. So I click on GCP, and let me give it a name: Hadoop-GCP, say, 30. Right, create. So this is one-click deployment of anything from our marketplace onto a cloud of your choice. Right now, what the system is doing is taking the intent-filled description of what the application should look like, not just at the infrastructure level but also within the virtual machines, and it's creating the set of workflows that it needs to go deploy. So as you can see, while we were talking, it's loading the application and making sure that the provisioning workflows are all set up. Speaker 1: And so, in real time, it's actually extracting out some of the GCP requirements. It's actually talking to GCP, setting up the constructs so that we can push it up onto GCP directly. Binny Gill: Right. So it takes a couple of minutes. It'll provision. Let me go back and show you.
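To make the "intent-filled description" Binny mentions a little more concrete, here is a rough sketch of how a single marketplace blueprint could expand into provider-specific provisioning steps. The Blueprint structure, the launch() helper, and the substrate fields are illustrative assumptions for this transcript, not the actual Calm blueprint schema or API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative only: an "intent-filled" blueprint (what the app should look
# like) expanded into provider-specific provisioning steps. Not the real
# Calm schema or API.

@dataclass
class Service:
    name: str
    image: str
    count: int = 1

@dataclass
class Blueprint:
    name: str
    services: List[Service]
    # Per-cloud substrate hints (instance type, region), keyed by provider.
    substrates: Dict[str, Dict[str, str]] = field(default_factory=dict)

def launch(blueprint: Blueprint, provider: str, app_name: str) -> List[str]:
    """Expand the blueprint's intent into an ordered list of provisioning
    steps for the chosen cloud: same blueprint, different target."""
    substrate = blueprint.substrates.get(provider, {})
    steps = [f"authenticate to {provider}",
             f"create network and security constructs on {provider}"]
    for svc in blueprint.services:
        steps.append(
            f"provision {svc.count} x {svc.name} ({svc.image}) as "
            f"{substrate.get('instance_type', 'default')} in "
            f"{substrate.get('region', 'default-region')}"
        )
    steps.append(f"register '{app_name}' for day-two actions (scale, upgrade, delete)")
    return steps

if __name__ == "__main__":
    hadoop = Blueprint(
        name="Hadoop",
        services=[Service("hadoop-master", "hadoop:2.8"),
                  Service("hadoop-slave", "hadoop:2.8", count=3)],
        substrates={"GCP": {"instance_type": "n1-standard-4", "region": "us-central1"},
                    "AWS": {"instance_type": "m4.xlarge", "region": "us-east-1"}},
    )
    for step in launch(hadoop, "GCP", "Hadoop-GCP-30"):
        print(step)
```

The point of the sketch is simply that the blueprint carries intent once, and the target cloud only changes which substrate hints get filled in.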
Say you worked with deploying AWS. So you Hadoop. Hit address. And that's it. So again, the same work flow. Speaker 1: Same process, I see. Binny Gill: It's going to now deploy in AWS. Speaker 1: See one of the keys things is that we actually extracted out all the isms of each of these clouds into this logical substrate. Binny Gill: Yep. Speaker 1: That you can now piggy-back off of. Binny Gill: Absolutely. And it makes it extremely simple for the average consumer. And you know we like more cloud support here over time. Speaker 1: Sounds good. Binny Gill: Now let me go back and show you an app that I had already deployed. Now 13 days ago. It's on GCP. And essentially what I want to show you is what is the view of the application. Firstly, it shows you the cost summary. Hourly, daily, and how the cost is going to look like. The other is how you manage it. So you know one click ways of upgrading, scaling out, starting, deleting, and so on. Speaker 1: So common actions, but independent of the type of clouds. Binny Gill: Independent. And also you can act with these actions over time. Right? Then services. It's learning two services, Hadoop slave and Hadoop master. Hadoop slave runs fast right now. And auditing. It shows you what are the important actions you've taken on this app. Not just, for example, on the IS front. This is, you know how the VMs were created. But also if you scroll down, you know how the application was deployed and brought up. You know the slaves have to discover each other, and so on. Speaker 1: Yeah got you. So find game invisibility into whatever you were doing with clouds because that's been one of the complaints in general. Is that the cloud abstractions have been pretty high level. Binny Gill: Yeah. Speaker 1: Yeah. Binny Gill: Yeah. So that's how we make the differences between the public clouds. All go away for the Indias of ... Speaker 1: Got you. So why don't we now give folks ... Now a lot of this stuff is coming in five, five so you'll see that pretty soon. You'll get your hands around it with AWS and tree support and so forth. What we wanted to show you was emerging alpha version that is being baked. So is a real production code for Xi. And why don't we just jump right in to it. Because we're running short of time. Binny Gill: Yep. Speaker 1: Give folks a flavor for what the production level code is already being baked around. Binny Gill: Right. So the idea of the design is make sure it's not ... the public cloud is no longer any different from your private cloud. It's a true seamless extension of your private cloud. Here I have my test environment. As you can see I'm running the HR app. It has the DB tier and the Web tier. Yeah. Alright? And the DB tier is running Oracle DB. Employee payroll is the Web tier. And if you look at the availability zones that I have, this is my data center. Now I want to protect this application, right? From disaster. What do I do? I need another data center. Speaker 1: Sure. Binny Gill: Right? With Xi, what we are doing is ... You go here and click on Xi Cloud Services. Speaker 1: And essentially as the slide says, you are adding AZs with one click. Binny Gill: Yeps so this is what I'm going to do. Essentially, you log in using your existing my.nutanix.com credentials. So here I'm going to use my guest credentials and log in. Now while I'm logging in what's happening is we are creating a seamless network between the two sides. And then making the Xi cloud availability zone appear. As if it was my own. Right? Speaker 1: Gotcha. 
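Sunil's remark a few exchanges back, that Calm "extracted out all the isms of each of these clouds into this logical substrate," can be pictured as a thin interface that each cloud implements, so the same day-two actions (scale out, upgrade, delete) apply regardless of where the app landed. The class and method names below are assumptions for illustration only, not Nutanix or cloud-provider APIs.

```python
from abc import ABC, abstractmethod

# Sketch of the "logical substrate" idea: day-two actions are expressed once,
# and each cloud supplies its own implementation. Names are illustrative.

class CloudSubstrate(ABC):
    @abstractmethod
    def scale_out(self, app: str, replicas: int) -> str: ...
    @abstractmethod
    def upgrade(self, app: str, version: str) -> str: ...
    @abstractmethod
    def delete(self, app: str) -> str: ...

class GCPSubstrate(CloudSubstrate):
    def scale_out(self, app, replicas):
        return f"[GCP] resize instance group for {app} to {replicas}"
    def upgrade(self, app, version):
        return f"[GCP] rolling replace of {app} images to {version}"
    def delete(self, app):
        return f"[GCP] tear down deployment {app}"

class AWSSubstrate(CloudSubstrate):
    def scale_out(self, app, replicas):
        return f"[AWS] set auto scaling group for {app} to {replicas}"
    def upgrade(self, app, version):
        return f"[AWS] roll new AMIs for {app} at {version}"
    def delete(self, app):
        return f"[AWS] delete stack {app}"

def one_click(action: str, app: str, substrate: CloudSubstrate, **kwargs) -> str:
    """Run the same named action against whichever cloud backs the app."""
    return getattr(substrate, action)(app, **kwargs)

if __name__ == "__main__":
    print(one_click("scale_out", "Hadoop-GCP-30", GCPSubstrate(), replicas=5))
    print(one_click("scale_out", "Hadoop-AWS", AWSSubstrate(), replicas=5))
```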
Binny Gill: So in a couple of seconds what you'll notice this list is here now I don't have just one availability zone, but another one appears. Speaker 1: So you have essentially, real time now, paid a one data center doing an availability zone. Binny Gill: Yep. Speaker 1: Cool. Okay. Let's see what else we can do. Binny Gill: So now you think about VR setup. Now I'm armed with another data center, let's do DR Center. Now DR set-up is going to be extremely simple. Speaker 1: Okay but it's also based because on the fact that it is the same stack on both sides. Right? Binny Gill: It's the same stack on both sides. We have a secure network lane connecting the two sides, on top of the secure network plane. Now data can flow back and forth. So now applications can go back and forth, securely. Speaker 1: Gotcha, okay. Let's look at one-click DR. Binny Gill: So for one-click DR set-up. A couple of things we need to know. One is a protection rule. This is the RPO, where does it apply to? Right? And the connection of the replication. The other one is recovery plans, in case disaster happens. You know, how do I bring up my machines and application work-order and so on. So let me first show you, Protection Rule. Right? So here's the protection rule. I'll create one right now. Let me call it Platinum. Alright, and source is my own data center. Destination, you know Xi appears now. Recovery point objective, so maybe in a one hour these snapshots going to the public cloud. I want to retain three in the public side, three locally. And now I select what are the entities that I want to protect. Now instead of giving VMs my name, what I can do is app type employee payroll, app type article database. It covers both the categories of the application tiers that I have. And save. Speaker 1: So one of the things here, by the way I don't know if you guys have noticed this, more and more of Nutanix's constructs are being eliminated to become app-centric. Of course is VM centric. And essentially what that allows one to do is to create that as the new service-level API/abstraction. So that under the cover over a period of time, you may be VMs today, maybe containers tomorrow. Or functions, the day after. Binny Gill: Yep. What I just did was all that needs to be done to set up replication from your own data center to Xi. So we started off with no data center to actually replication happening. Speaker 1: Gotcha. Binny Gill: Okay? Speaker 1: No, no. You want to set up some recovery plans? Binny Gill: Yeah so now set up recovery plan. Recovery plans are going to be extremely simple. You select a bunch of VMs or apps, and then there you can say what are the scripts you want to run. What order in which you want to boot things. And you know, you can set up access these things with one click monthly or weekly and so on. Speaker 1: Gotcha. And that sets up the IPs as well as subnets and everything. Binny Gill: So you have the option. You can maintain the same IPs on frame as the move to Xi. Or you can make them- Speaker 1: Remember, you can maintain your own IPs when you actually use the Xi service. There was a lot of things getting done to actually accommodate that capability. Binny Gill: Yeah. Speaker 1: So let's take a look at some of- Binny Gill: You know, the same thing as VPC, for example. Speaker 1: Yeah. Binny Gill: You need to possess on Xi. So, let's create a recovery plan. A recovery plan you select the destination. Where does the recovery happen. 
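As an aside on the Platinum rule just configured, a minimal, hypothetical data model captures what the screen shows: an RPO, local and remote retention counts, and membership decided by app-type categories rather than individual VM names. The field and category names below are assumptions, not the real Prism or Xi schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of a category-based protection rule, mirroring the demo:
# one-hour RPO, three recovery points kept locally and three in Xi, and
# membership decided by app-type categories rather than VM names.

@dataclass
class ProtectionRule:
    name: str
    source_az: str
    destination_az: str
    rpo_minutes: int
    local_retention: int
    remote_retention: int
    categories: List[str]          # e.g. ["AppType:EmployeePayroll"]

@dataclass
class VM:
    name: str
    categories: List[str] = field(default_factory=list)

def protected_vms(rule: ProtectionRule, inventory: List[VM]) -> List[str]:
    """A VM is protected if it carries any category the rule targets, so VMs
    tagged later inherit the policy automatically."""
    wanted = set(rule.categories)
    return [vm.name for vm in inventory if wanted & set(vm.categories)]

if __name__ == "__main__":
    platinum = ProtectionRule(
        name="Platinum", source_az="on-prem-dc1", destination_az="xi-us-west",
        rpo_minutes=60, local_retention=3, remote_retention=3,
        categories=["AppType:EmployeePayroll", "AppType:OracleDB"],
    )
    vms = [VM("db-1", ["AppType:OracleDB"]),
           VM("web-1", ["AppType:EmployeePayroll"]),
           VM("jumpbox", [])]
    print(protected_vms(platinum, vms))   # ['db-1', 'web-1']
```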
Now, after that Section 11 of 13 [01:40:00 - 01:50:04] Section 12 of 13 [01:50:00 - 02:00:04] (NOTE: speaker names may be different in each section) Speaker 1: ... does the recovery happen. Now, after that you have to think of what is the runbook that you want to run when disaster happens, right? So you're preparing for that, so let me call "HR App Recovery." The next thing is the first stage. We're doing the first stage, let me add some entities by categories. I want to bring up my database first, right? Let's click on the database and that's it. Speaker 2: So essentially, you're building the script now. Speaker 1: Building the script- Speaker 2: ... on the [inaudible 01:50:30] Speaker 1: ... but in a visual way. It's simple for folks to understand. You can add custom script, add delay and so on. Let me add another stage and this stage is about bringing up the web tier after the database is up. Speaker 2: So basically, bring up the database first, then bring up the web tier, et cetera, et cetera, right? Speaker 1: That's it. I've created a recovery plan. I mean usually it's complicated stuff, but we made it extremely simple. Now if you click on "Recovery Points," these are snapshots. Snapshots of your applications. As you can see, already the system has taken three snapshots in response to the protection rule that we had created just a couple minutes ago. And these are now being seeded to Xi data centers. Of course this takes time for seeding, so what I have is a setup already and that's the production environment. I'll cut over to that. This is my production environment. Click "Explore," now you see the same application running in production and I have a few other VMs that are not protected. Let's go to "Recovery Points." It has been running for sometime, these recover points are there and they have been replicated to Xi. Speaker 2: So let's do the failover then. Speaker 1: Yeah, so to failover, you'll have to go to Xi so let me login to Xi. This time I'll use my production account for logging into Xi. I'm logging in. The first thing that you'll see in Xi is a dashboard that gives you a quick summary of what your DR testing has been so far, if there are any issues with the replication that you have and most importantly the monthly charges. So right now I've spent with my own credit card about close to 1,000 bucks. You'll have to refund it quickly. Speaker 2: It depends. If the- Speaker 1: If this works- Speaker 2: IF the demo works. Speaker 1: Yeah, if it works, okay. As you see, there are no VMs right now here. If I go to the recovery points, they are there. I can click on the recovery plan that I had created and let's see how hard it's going to be. I click "Failover." It says three entities that, based on the snapshots, it knows that it can recovery from source to destination, which is Xi. And one click for the failover. Now we'll see what happens. Speaker 2: So this is essentially failing over my production now. Speaker 1: Failing over your production now. [crosstalk 01:52:53] If you click on the "HR App Recovery," here you see now it started the recovery plan. The simple recovery plan that we had created, it actually gets converted to a series of tasks that the system has to do. Each VM has to be hydrated, powered on in the right order and so on and so forth. You don't have to worry about any of that. You can keep an eye on it. But in the meantime, let's talk about something else. 
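The staged recovery plan built a moment ago, database tier first, then the web tier, with optional scripts and delays between stages, amounts to a simple ordered runbook. The sketch below is a toy illustration of that structure and not the product's internal format; the stage fields and category labels are assumptions.

```python
import time
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative runbook structure for an "HR App Recovery" style plan: stages
# run in order, each stage powers on the VMs matching its categories, with
# optional delays or custom scripts in between.

@dataclass
class Stage:
    categories: List[str]
    delay_seconds: int = 0
    script: Optional[str] = None

@dataclass
class RecoveryPlan:
    name: str
    destination_az: str
    stages: List[Stage] = field(default_factory=list)

    def execute(self, dry_run: bool = True) -> None:
        print(f"Running '{self.name}' towards {self.destination_az}")
        for i, stage in enumerate(self.stages, start=1):
            print(f"  stage {i}: power on VMs in categories {stage.categories}")
            if stage.script:
                print(f"  stage {i}: run script {stage.script}")
            if stage.delay_seconds and not dry_run:
                time.sleep(stage.delay_seconds)

if __name__ == "__main__":
    plan = RecoveryPlan(
        name="HR App Recovery",
        destination_az="xi-us-west",
        stages=[Stage(categories=["AppType:OracleDB"]),
                Stage(categories=["AppType:EmployeePayroll"], delay_seconds=30)],
    )
    plan.execute()
```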
We are doing failover, but after you failover, you run in Xi as if it was your own setup and environment. Maybe I want to create a new VM. I create a VM and I want to maybe extend my HR app's web tier. Let me name it as "HR_Web_3." It's going to boot from that disk. Production network, I want to run it on production network. We have production and test categories. This one, I want to give it employee payroll category. Now it applies the same policies as it's peers will. Here, I'm going to create the VM. As you can see, I can already see some VMs coming up. There you go. So three VMs from on-prem are now being filled over here while the fourth VM that I created is already being powered. Speaker 2: So this is basically realtime, one-click failover, while you're using Xi for your [inaudible 01:54:13] operations as well. Speaker 1: Exactly. Speaker 2: Wow. Okay. Good stuff. What about- Speaker 1: Let me add here. As the other cloud vendors, they'll ask you to make your apps ready for their clouds. Well we tell our engineers is make our cloud ready for your apps. So as you can see, this failover is working. Speaker 2: So what about failback? Speaker 1: All of them are up and you can see the protection rule "platinum" has been applied to all four. Now let's look at this recovery plan points "HR_Web_3" right here, it's already there. Now assume the on-prem was already up. Let's go back to on-prem- Speaker 2: So now the scenario is, while Binny's coming up, is that the on-prem has come back up and we're going to do live migration back as in a failback scenario between the data centers. Speaker 1: And how hard is it going to be. "HR App Recovery" the same "HR App Recovery", I click failover and the system is smart enough to understand the direction is reversed. It's also smart enough to figure out "Hey, there are now the four VMs are there instead of three." Xi to on-prem, one-click failover again. Speaker 2: And it's rerunning obviously the same runbook but in- Speaker 1: Same runbook but the details are different. But it's hidden from the customer. Let me go to the VMs view and do something interesting here. I'll group them by availability zone. Here you go. As you can see, this is a hybrid cloud view. Same management plane for both sides public and private. There are two availability zones, the Xi availability zone is in the cloud- Speaker 2: So essentially you're moving from the top- Speaker 1: Yeah, top- Speaker 2: ... to the bottom. Speaker 1: ... to the bottom. Speaker 2: That's happening in the background. While this is happening, let me take the time to go and look at billing in Xi. Speaker 1: Sure, some of the common operations that you can now see in a hybrid view. Speaker 2: So you go to "Billing" here and first let me look at my account. And account is a simple page, I have set up active directory and you can add your own XML file, upload it. You can also add multi-factor authentication, all those things are simple. On the billing side, you can see more details about how did I rack up $966. Here's my credit card. Detailed description of where the cost is coming from. I can also download previous versions, builds. Speaker 1: It's actually Nutanix as a service essentially, right? Speaker 2: Yep. Speaker 1: As a subscription service. Speaker 2: Not only do we go to on-prem as you can see, while we were talking, two VMs have already come back on-prem. They are powered off right now. The other two are on the wire. Oh, there they are. Speaker 1: Wow. Speaker 2: So now four VMs are there. 
Speaker 1: Okay. Perfect. Sometimes it works, sometimes it doesn't work, but it's good. Speaker 2: It always works. Speaker 1: Always works. All right. Speaker 2: As you can see, the Platinum protection rule is now already applied to them, and now it has reversed the direction of [inaudible 01:57:12]- Speaker 1: Remember, we showed one-click DR, failover and failback, built into the product when Xi ships, to any Nutanix fabric. You can start with ESX on-premises, obviously, when you fail over to Xi. You can start with AHV. Things like that are going to take the same paradigm of one-click operations into this hybrid view. Speaker 2: Let's stop doing lift and shift. The era has come for click and shift. Speaker 1: Binny's now been promoted to Chief Marketing Officer too, by the way. Right? So, one more thing. Speaker 2: Okay. Speaker 1: You know we don't close out any conference without a couple of things that are new. The first one is something that we should have done, I guess, a couple of years ago. Speaker 2: It depends how you look at it. Essentially, if you look at the cloud vendors, one of the key things they have done is they've built services as building blocks for the apps that run on top of them. What we have done at Nutanix is build core services like block services and file services, and now, with Calm, a marketplace. Now, if you look at [inaudible 01:58:14] applications, one of the core building pieces is the object store. I'm happy to announce that we have the object store service coming up. Again, in true Nutanix fashion, it's going to be elastic. Speaker 1: Let's- Speaker 2: Let me show you. Speaker 1: Yeah, let's show it. It's an object store service, by the way, that's not just for your primary but for your secondary as well. It's obviously not just for on-prem; it's hybrid. So this is being built as a next-gen object service, as an extension of the core fabric, but accommodating a bunch of these new paradigms. Speaker 2: Here is the object browser. I've created a bunch of buckets here. Again, object stores can be used in various ways: as a primary object store, or for secondary use cases. I'll show you both: a Hadoop use case where Hadoop is using this as a primary store, and a backup use case. Let's just jump right in. This is a Hadoop bucket. As you can see, there's a temp directory; there's nothing interesting there. Let me go to my Hadoop VM. There it is. And let me run a Hadoop job. This Hadoop job essentially is going to create a bunch of files, write them out, and after that run MapReduce on top. Let's wait for the job to start. It's running now. If we go back to the object store and refresh the page, now you see it's writing to a benchmarks directory; there are a bunch of files that it will write here over time. This is going to take time, so let's not wait for it, but essentially it is showing that Hadoop, which uses the AWS S3-compatible API, can run with our object store, because our object store exposes S3-compatible APIs. The other use case is the HYCU backup. As you can see, that's a backup software that can back up to AWS S3, and if you point it to Nutanix objects it can back up there as well. There are a bunch of backup files in there.
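Because the object store exposes an S3-compatible API, anything that already speaks S3, whether the Hadoop job in the demo or HYCU, can point at it simply by overriding the endpoint. The same holds for a few lines of boto3; the endpoint URL, bucket name, and credentials below are placeholders for illustration, not a real deployment.

```python
import boto3

# Any S3-compatible endpoint can be targeted by overriding endpoint_url; the
# URL, bucket, and credentials here are placeholders only.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9440",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

BUCKET = "hadoop-bench"  # e.g. the bucket the demo's Hadoop job writes into

# Write one object, then list what the job has produced so far.
s3.put_object(Bucket=BUCKET, Key="benchmarks/part-00000", Body=b"sample data")
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])
```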
Now, object stores, it's very important for us to be able to view what's going on there and make sure there's no objects sprawled because once it's easy to write objects, you just accumulate a lot of them. So what we wanted to do, in true Nutanix style, is give you a quick overview of what's happening with your object store. So here, as you can see, you can look at the buckets, where the load is, you can look at the bucket sizes, where the data is, and also what kind of data is there. Now this is a dashboard that you can optimize, and customize, for yourself as well, right? So that's the object store. Then we go back here, and I have one more thing for you as well. Speaker 2: Okay. Sounds good. I already clicked through a slide, by the way, by mistake, but keep going. Vineet: That's okay. That's okay. It is actually a quiz, so it's good for people- Speaker 2: Okay. Sounds good. Vineet: It's good for people to have some clues. So the quiz is, how big is my SAP HANA VM, right? I have to show it to you before you can answer so you don't leak the question. Okay. So here it is. So the SAP HANA VM here vCPU is 96. Pretty beefy. Memory is 1.5 terabytes. The question to all of you is, what's different in this screen? Speaker 2: Who's a real Prism user here, by the way? Come on, it's got to be at least a few. Those guys. Let's see if they'll notice something. Vineet: What's different here? Speaker 3: There's zero CVM. Vineet: Zero CVM. Speaker 2: That's right. Yeah. Yeah, go ahead. Vineet: So, essentially, in the Nutanix fabric, every server has to run a [inaudible 02:01:48] machine, right? That's where the storage comes from. I am happy to announce the Acropolis Compute Cloud, where you will be able to run the HV on servers that are storage-less, and add it to your existing cluster. So it's a compute cloud that now can be managed from Prism Central, and that way you can preserve your investments on your existing server farms, and add them to the Nutanix fabric. Speaker 2: Gotcha. So, essentially ... I mean, essentially, imagine, now that you have the equivalent of S3 and EC2 for the enterprise now on Premisis, like you have the equivalent compute and storage services on JCP and AWS, and so forth, right? So the full flexibility for any kind of workload is now surely being available on the same Nutanix fabric. Thanks a lot, Vineet. Before we wrap up, I'd sort of like to bring this home. We've announced a pretty strategic partnership with someone that has always inspired us for many years. In fact, one would argue that the genesis of Nutanix actually was inspired by Google and to talk more about what we're actually doing here because we've spent a lot of time now in the last few months to really get into the product capabilities. You're going to see some upcoming capabilities and 55X release time frame. To talk more about that stuff as well as some of the long-term synergies, let me invite Bill onstage. C'mon up Bill. Tell us a little bit about Google's view in the cloud. Bill: First of all, I want to compliment the demo people and what you did. Phenomenal work that you're doing to make very complex things look really simple. I actually started several years ago as a product manager in high availability and disaster recovery and I remember, as a product manager, my engineers coming to me and saying "we have a shortage of our engineers and we want you to write the fail-over routines for the SAP instance that we're supporting." 
And so here's the PERL handbook, you know, I haven't written in PERL yet, go and do all that work to include all the network setup and all that work, that's amazing, what you are doing right there and I think that's the spirit of the partnership that we have. From a Google perspective, obviously what we believe is that it's time now to harness the power of scale security and these innovations that are coming out. At Google we've spent a lot of time in trying to solve these really large problems at scale and a lot of the technology that's been inserted into the industry right now. Things like MapReduce, things like TenserFlow algorithms for AI and things like Kubernetes and Docker were first invented at Google to solve problems because we had to do it to be able to support the business we have. You think about search, alright? When you type in search terms within the search box, you see a white screen, what I see is all the data-center work that's happening behind that and the MapReduction to be able to give you a search result back in seconds. Think about that work, think about that process. Taking and pursing those search terms, dividing that over thousands of [inaudible 02:05:01], being able to then search segments of the index of the internet and to be able to intelligent reduce that to be able to get you an answer within seconds that is prioritized, that is sorted. How many of you, out there, have to go to page two and page three to get the results you want, today? You don't because of the power of that technology. We think it's time to bring that to the consumer of the data center enterprise space and that's what we're doing at Google. Speaker 2: Gotcha, man. So I know we've done a lot of things now over the last year worth of collaboration. Why don't we spend a few minutes talking through a couple things that we're started on, starting with [inaudible 02:05:36] going into com and then we'll talk a little bit about XI. Bill: I think one of the advantages here, as we start to move up the stack and virtualize things to your point, right, is virtual machines and the work required of that still takes a fair amount of effort of which you're doing a lot to reduce, right, you're making that a lot simpler and seamless across both On-Prem and the cloud. The next step in the journey is to really leverage the power of containers. Lightweight objects that allow you to be able to head and surface functionality without being dependent upon the operating system or the VM to be able to do that work. And then having the orchestration layer to be able to run that in the context of cloud and On-Prem We've been very successful in building out the Kubernetes and Docker infrastructure for everyone to use. The challenge that you're solving is how to we actually bridge the gap. How do we actually make that work seamlessly between the On-Premise world and the cloud and that's where our partnership, I think, is so valuable. It's cuz you're bringing the secret sauce to be able to make that happen. Speaker 2: Gotcha, gotcha. One last thing. We talked about Xi and the two companies are working really closely where, essentially the Nutanix fabric can seamlessly seep into every Google platform as infrastructure worldwide. Xi, as a service, could be delivered natively with GCP, leading to some additional benefits, right? Bill: Absolutely. I think, first and foremost, the infrastructure we're building at scale opens up all sorts of possibilities. I'll just use, maybe, two examples. The first one is network. 
If you think about building out a global network, there's a lot of effort to do that. Google is doing that as a byproduct of serving our consumers. So, if you think about YouTube, if you think about there's approximately a billion hours of YouTube that's watched every single day. If you think about search, we have approximately two trillion searches done in a year and if you think about the number of containers that we run in a given week, we run about two billion containers per week. So the advantage of being able to move these workloads through Xi in a disaster recovery scenario first is that you get to take advantage of the scale. Secondly, it's because of the network that we've built out, we had to push the network out to the edge. So every single one of our consumers are using YouTube and search and Google Play and all those services, by the way we have over eight services today that have more than a billion simultaneous users, you get to take advantage of that network capacity and capability just by moving to the cloud. And then the last piece, which is a real advantage, we believe, is that it's not just about the workloads you're moving but it's about getting access to new services that cloud preventers, like Google, provide. For example, are you taking advantage like the next generation Hadoop, which is our big query capability? Are you taking advantage of the artificial intelligence derivative APIs that we have around, the video API, the image API, the speech-to-text API, mapping technology, all those additional capabilities are now exposed to you in the availability of Google cloud that you can now leverage directly from systems that are failing over and systems that running in our combined environment. Speaker 2: A true converged fabric across public and private. Bill: Absolutely. Speaker 2: Great stuff Bill. Thank you, sir. Bill: Thank you, appreciate it. Speaker 2: Good to have you. So, the last few slides. You know we've talked about, obviously One OS, One Click and eCloud. At the end of the day, it's pretty obvious that we're evaluating the move from a form factor perspective, where it's not just an OS across multiple platforms but it's also being distributed genuinely from consuming itself as an appliance to a software form factor, to subscription form factor. What you saw today, obviously, is the fact that, look you know we're still continuing, the velocity has not slowed down. In fact, in some cases it's accelerated. If you ask my quality guys, if you ask some of our customers, we're coming out fast and furious with a lot of these capabilities. And some of this directly reflects, not just in features, but also in performance, just like a public cloud, where our performance curve is going up while our price-performance curve is being more attractive over a period of time. And this is balancing it with quality, it is what differentiates great companies from good companies, right? So when you look at the number of nodes that have been shipping, it was around ten more nodes than where we were a few years ago. But, if you look at the number of customer-found defects, as a percentage of number of nodes shipped it is not only stabilized, it has actually been coming down. And that's directly reflected in the NPS part. That most of you guys love. How many of you guys love your Customer Support engineers? Give them a round of applause. Great support. So this balance of velocity, plus quality, is what differentiates a company. 
And, before we call it a wrap, I just want to leave you with one thing. You know, obviously, we've talked a lot about technology, innovation, inspiration, and so forth. But, as I mentioned, from last night's discussion with Sir Ranulph, let's think about a few things tonight. Don't take technology too seriously. I'll give you a simple story that he shared with me, that puts things into perspective. The year was 1971. He had come back from Aman, from his service. He was figuring out what to do. This was before he became a world-class explorer. 1971, he had a job interview, came down from Scotland and applied for a role in a movie. And he failed that job interview. But he was selected from thousands of applicants, came down to a short list, he was a ... that's a hint ... he was a good looking guy and he lost out that role. And the reason why I say this is, if he had gotten that job, first of all I wouldn't have met him, but most importantly the world wouldn't have had an explorer like him. The guy that he lost out to was Roger Moore and the role was for James Bond. And so, when you go out tonight, enjoy with your friends [inaudible 02:12:06] or otherwise, try to take life a little bit once upon a time or more than once upon a time. Have fun guys, thank you. Speaker 5: Ladies and gentlemen please make your way to the coffee break, your breakout sessions will begin shortly. Don't forget about the women's lunch today, everyone is welcome. Please join us. You can find the details in the mobile app. Please share your feedback on all sessions in the mobile app. There will be prizes. We will see you back here and 5:30, doors will open at 5, after your last breakout session. Breakout sessions will start sharply at 11:10. Thank you and have a great day. Section 13 of 13 [02:00:00 - 02:13:42]
