
Search Results for brownfield:

Chris Falloon, Dell Technologies | MWC Barcelona 2023


 

(bright gentle music) >> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (bright gentle music) >> Hey, everyone. Good to see you. Lisa Martin here with Dave Vellante. This is theCUBE's coverage, day one of MWC 23 from Barcelona, and we're having a great day so far. The theme of this conference, Dave, is velocity. I feel like we've been shot out of a cannon of CUBE content already on day one. We've been talking with... Today's ecosystem day. We've been talking about the ecosystem, the importance of open ecosystem, and why. And we're going to be unpacking that a little bit more next. >> You know, Lisa, what used to be Mobile World Congress and is now MWC, it was never really intended to be sort of a consumer show, but with the ascendancy of smartphones. It kind of... They sucked all the air out of the room. >> Lisa: Yeah. >> But really, we're seeing the enterprise come really into focus now as the telco stack disaggregates, and enterprise is complicated. >> Enterprise is complicated, telecom is complicated. We have a guest here to unpack that with us. Chris Falloon joins us the Senior Managing Director of telecom practice at Dell. Chris, welcome to theCUBE. >> Thanks very much for having me. >> So you've been in the telecom industry for a long time. Talk about some of the things that you've witnessed over the last couple of decades and really help us understand the complexity that is telecom. >> Yeah. Well, nothing, nothing more complex. Look, I got... I was privileged to start my career in telco 20 years ago in Canada working with other telecoms globally. And so I got a good picture of how they operate, what's important to them. But I was... It's come full circle for me. I got into IT and come all the way back now to helping telcos figure out how to operate. And so it's been a great journey. >> What are some of the- >> Dave: You kno- >> Oh sorry, Dave. >> Dave: Please, go ahead. >> I was just going to say unpack some of the complexity that we see now. Obviously, we think telecom, we... And you talked about the consumerization... We have this expectation that we can get anything on our mobile devices 24/7 from any part of the world, but there's a lot of complexity in the industry as it's evolving. What are some of the complexities and how is Dell helping address that? >> Look, I think the transformation from traditional monolithic architectures to cloud-based architectures is maybe the most... The single largest complex transformation any industry's done in the last 20 years. And it's not just a technology transformation, it's critically an operational transformation. And so I think that's really at the heart of it is we've seen a real shift this year. From conversations last year were around how this stuff gets turned on, "Can it work?", "Does it work?", to a conversation around "How does it work?", "How do I operationalize it?", "What are the implications to my teams?". And so we've got teams struggling with knowledge and competency gaps. We've got people figuring out how to get this stuff working at scale. >> Yeah, so I mean, you think about Telcos, you know, a lot of engineers, but a lot of the stuff is done kind of, I call it, in the basement. >> Yeah. >> Kind of hidden, right? And they make it work, right? And that transformation that you're talking about toward this more agile, open ecosystem, moving fast, cloud-native, new services coming in, new monetization models. 
That does require a different operating model. How similar, given your background in both, you know, IT and Telco, how similar is it to the transformation that occurred in IT in terms of the operation- Operating model, which some companies are still going through? >> Look, I think we're privileged actually to be able to do this 10 years after IT went through it. And there's a lot of patterns that are definitely the same. There's no question there's differences. The applications are far different, the timing and and issues in the RAN are far different, and the distributed size of these deployments is different. But the learnings around how to deploy cloud-native technology, how to organize around these platforms, and back to the operationalization, how to deploy them and operate them at scale, it took IT a decade to figure that out. And hopefully, with the learnings that we've got from that we can rush through it here in a few years or less. >> One of the other big differences, of course, is public policy and regulation, right? You don't really have that so much in the IT world. >> Chris: Right. >> Sometimes you have no regulation. >> Lisa: Yeah. >> You know, Google, Facebook, do whatever you want and we'll figure it out 20 years later. How much of a factor is that in terms of the complexity and are the new Greenfield players... Are they bound by similar sort of restrictions or can they move faster? What's the dynamic there? >> Look, there's no question that Greenfield is faster than Brownfield. Doesn't matter whether that's telco or IT. >> Dave: Yeah, yeah, sure. >> I think the... I think we're at a place in history where we're watching some of the early movers testing some of these theories. But I would tell you just... Again, just in the last few days leading up to this event talking with our customers and our partners, it's clear that even the first movers are struggling with the operational complexity of these platforms. And as a... You know, I think Dell's position in IT for the last decade as a platform systems integrator is very much going to continue to play out in the... In... We're being asked to play that role here as we try to bring some of the cloud-native operating competencies to the to the table. >> Hmm. >> And where are you having customer conversations these days? Is it at... Is it at the IT level? Is it higher sense tel... Networking is essential for any business in any organization to be able to deliver what the end user is demanding. >> Of course. Look, I... We've seen a real shift as I mentioned from the technology proof points to the operational proof points. How do we... How do we make sure that not only the business case is valid, but that we can maintain these new changes in these new operating models at scale at the right operating cost? And those are very healthy conversations because the success of this transformation to cloud architecture and edge computing and everything else is predicated on the idea that we can get cloud running at scale in the network. But I think the... It's very much use case driven and we're going to see... We're finally seeing some edge use cases that are driving consumption of those edge use cases, for sure. >> You know, I said earlier, I was in the keynotes and it took 45 minutes to get to the topic of security. >> Hmm. >> It was I think the third or fourth, or even fifth speaker. Finally, 45 minutes in, mention security. And I think that's because security's kind of a given in this world. It's a hardened environment. >> Chris: Yep. 
>> But that security model changes as well. The cloud brings a shared responsibility model. If it's multicloud, which it is, then it's shared responsibility across multiple clouds. >> Chris: Yeah. >> You know, you've got now developers who are being asked to be responsible for security. So that's another part of the complexity. We're kind of unpacking complexity here, aren't we? >> Chris: That's right. >> Just throwing more things in the cake. >> Look, I... Security is... It's an indication of this shift from what to how, very much includes security. And I think we're seeing security come to the forefront. Dell has a... We, you know, our philosophy is intrinsic security at all levels of the deployment. Everything from the infrastructure all the way through to the delivery and the management. >> Chris: And through the supply chain. >> And through supply chain. All the way through to the delivery of our technology integrated with other people's technology to ensure that the security's intrinsic in those deployments. And those integrations, as we're getting more and more involved in zero-touch deployments and helping carriers stand up these cloud platforms at scale, one of the ways to make sure that it's done repeatably and securely is to integrate those things at the factory or have your, you know, have your infrastructure partner take accountability for doing some of that pre-Day Zero. >> Well, the lab announcement that you guys have is... I've wrote about this. That pretty key, I think, because if you can certify in the lab... That's only other big differences. We talk a lot about the similarities between, you know, enterprise tech of the nineties and the disaggregation of the enterprise stack. But you didn't have so-called converged infrastructure back then. And even when you had converged infrastructure, it was like a skew that was bolted on. Now, you've got engineered systems. You're starting with engineered systems, but you've got to have a lab, so that the ecosystem and you've got self-certification. Those, I think, are key investments that... If you're thinking why Dell... A comp... You need a company like Dell who's got the resources to make those investments and actually kind of force that through. >> Chris: Yeah. >> Dave: Yeah. >> That's right. I think we're... You know, the value of the la... Again, the learnings from these last 10 years of integration is just... That understanding what the major blockers are should provide us with an accelerated roadmap for solving some of these problems as we encounter them over the next year or two in telecoms, no question. >> There's always regional differences in telecom, right? In the United States, you know, years ago, decades ago, sort of, you know, blew apart the telco industry. I would argue, many would I think as well, that that actually made the US less competitive. You got... Certainly have, you know, national interests around the world, across the European continent, certainly in APAC as well. How do you see that of, of... What are you hearing from those different regions? How do you see that affecting the adoption of some of the new technologies that you guys are promoting? >> Yeah, look, there's leaders... There's leaders and laggards in every market, I would say. I think we've been at this now, trying to stand up some of these cloud infrastructures and cloud RAN projects and virtual RAN projects. 
We've been at that now long enough to know that there's not so much regional patterns as there are patterns of companies that believe deeply that these architectures are going to lead to the right type of innovation and allow them to, you know, to build new markets and new sources of revenue. And those that are deeply committed to that structure are the ones willing to lean in and sort of blaze a path, right? So I would say that pattern is definitely emerged. I don't... We don't see... The larger the organization, certainly the larger the carrier, the deeper their resources on engineering and their ability to pivot and train those resources to become cloud-capable. That's a factor. We see a lot of conversations. Dell's got a very large Day 2 managed services business on the IT side. And, and as we pivot those Day 2 managed services, practices into managing cloud platforms and edge cloud platforms, I think it's the companies that don't have the depth or the skill or the experience are the ones that are that are asking us for the help there, for sure. >> How much has Dell been able to leverage? I mean, in the telecom systems business, I see, you know, a lot of new faces at Dell, a lot of folks like yourself that have telco experience. How about the services business? Were you able to sort of realign your existing folks or is it similar, you had to bring in people from the industry? >> It's both actually. So the... In services, it's critical because they... The org... The industry desperately needs systems integration across the board. And I think if we can convince the industry to treat telco clouds as a horizontal platform, then the idea of a platform integrator is a, you know, is definitely... It's valued. And in fact, it's required, I think, for the success of these projects. The services team at Dell is comprised of the folks who obviously run the pieces of the services business that are really no different in their construct. Building telco clouds is not that different from building IT clouds, so the elements are the same. Those teams are... Those teams persist. But definitely, the apps are different, and the support is different, and the requirements for uptime and availability are different. And so we've brought in services specialists to sort of... Just to create the glue between the customers and our existing sales depth. >> Do you have a favorite customer story that really articulates the value of what Dell is able to deliver in telecom with the inherent complexities that we talked about? >> Yeah. Look, it's not that well-known, but you know, the Day Zero Zero-Touch deployment factory integration capabilities that Dell has, we've been deploying that in IT for years. And, you know, we're... We've got a couple of projects globally now where we're not only designing and testing the stack in our labs and with our partners, but we're loading that stack in a known good architecture into third party and Dell hardware in a factory integration setting and shipping it to site with really nothing left to do but connect power and connectivity. And so from an engineering standpoint, the complexity of deploying cloud into thousands of data centers, we have examples of that that are being shipped continent by continent and and being deployed in a... In days and weeks as opposed to months. And so I think the... Taking some of the pain out of deployment and taking some of the... Building some repeatability into those deployments is a very big deal. Those are... Those are great, great projects. 
The next stage of that, of course, is helping them get to a place where the operations of those platforms is just as easy as the deployment. >> What's going to be different? Go to head... Look ahead to 2030. Let's go backwards from there. What's the world going to be like? What do people need to know in terms of what's coming? >> That's a great question. If... I think if I... If I could see that far ahead, I wouldn't probably be sitting here. (Chris and Lisa laughs) >> Dave: Yeah, but you have wisdom. >> Yeah. >> You know, the experience. >> If we play back... If we play back what's happened in the data centers, you know, in the IT data centers and you mentioned the, you know, the disaggregated systems shift that happened a decade ago. You know, those... Once the applications rearchitected to cloud-native architectures and could take advantage of the platform changes... Once the resiliency is built into the application instead of into the platforms, these things become more and more touchless. And I think the real double digit payback on this shift to cloud-native, we haven't begun to talk about it yet because we haven't... We're not anywhere close to the level of automation that can be achieved once we get to true cloud-native and microservices-based application architecture. That's a big shift and it's going to take a while. It took companies like SAP and others almost a decade to get that done. I think it'll happen faster here, but it's going to take us some time. >> Some of the things that you've heard... This is only day one of the conference, but anything that you've heard today or that you're looking forward to hearing in terms of how telecom is evolving and kind of playing catch-up? >> Yeah, look, I... We really believe this is the year that the edge use cases come alive. I think we're... We're... We've been... Almost every conversation I've been in, we've been asked, you know, sort of where's the... "Where are these use cases that are driving actual deployments and revenue?" and that sort of... And I think carriers are very much interested in trying to figure out customer edge, very much trying to figure out their own edge. Dell, of course, has both of those edges in mind. We've got a very large enterprise edge business unit, as well as our telco BU. And so, that's... I think this is the year we really start to figure out where those... We're seeing good deployments now in production at scale, and I think this is the year that starts to really take shape. >> Well, and it seems like... Just in hearing some of the carriers talk, they want to avoid what happened with the over-the-top vendors, okay. And they want to monetize the data that they have about the network. Looks like they want to charge for API access. >> Chris: Yep. >> 'Kay, developers are going to love that, right? Especially at the volumes that we're seeing here. But I feel like there's a, you know, potential blind spot of disruption coming, you know, like the over-the-top vendors, you know, that created all this innovation. I could see developers... Whether it's at the edge or new services, that customers really want to buy, they really value. Different than, "Hey, I own this data and you need it. I'm going to charge ya for it." versus "Hey, I'm going to create something that's really compelling." You know, an analog would be Netflix or other services that you get with maybe it's private wireless that can do some things. 
And, you know, that to me is the interesting opportunity here that I feel like is a blind spot for traditional telcos. 'Cause they've kind of got that mindset of, "Okay, you know, we're going to monetize. Let's do it." But they don't have that creativity mindset yet, you know? >> This industry has been given an opportunity to monetize almost every major transformation in technology, and many of them have slipped through our fingers, right? And this one is different because it's inextricably tied to the network. And I think the, you know... If... You mentioned mobile phones earlier I mean, I think what we saw in innovation in mobile was that we had no idea what was going to happen at the edge of that edge until someone created it. And so you have to have those in operating environments have to show up before the developers will spend the time to test them out and figure out what works. And so I... We haven't begun to believe, even understand I don't think, what's coming once we figure out a way to get ultra low latency, reliable connectivity at the edge. >> And I think developers have that open canvas and they're going to paint- >> That's right. >> What that edge looks like. And that's what... I mean, I kind of get concerned about... You know, to me the way to deal with developers, you give 'em a platform. Say, "Go create." >> Chris: That's right. >> As opposed to "Okay, pay to get access.", which you're going to have to, but I mean, there's other third parties that are going to fund that. I get it. >> Chris: Yeah. >> But there's a big open field that is going to get plowed here. >> Yes. >> And it's going to throw off some, you know, serious benefits to consumers. >> Yeah, and that's what we all want. We have that expectation that- >> Chris: Absolutely. >> It's going to... There's going to be a... With them... It's going to be, "What's in it for me?", right? >> "What's in it for me?" Yeah, that's right. >> Absolutely. >> Chris: That's right. >> Chris, I was going to say thank you so much. You want to add one more thing? >> Chris: No, I'm good. Thank you. >> I was just going to thank you so much for stopping by and talking to us about Dell's presence in telecom, how you're helping customers manage the complexity and the opportunities that really are there. We appreciate your insights and your time. >> Thanks so much, I really appreciate it. >> Dave: Thank you. >> Lisa: All right, our pleasure. >> Thanks, guys. >> For our guest and Dave Vellante, I'm Lisa Martin. You're watching "theCUBE" live in Barcelona at MWC 23. Dave and I will be right back with our next guest. (bright gentle music)
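A rough sketch of the "known good architecture" idea Chris describes above: before a factory-integrated rack ships, its build is checked against a validated stack definition, so the site itself needs nothing but power and connectivity. The component names and versions here are hypothetical, not Dell's actual tooling or part numbers.

```python
# Hypothetical check of a factory-built site against a "known good" stack
# definition before it ships. All names and versions are invented.

KNOWN_GOOD_STACK = {
    "bios": "2.7.1",
    "nic_firmware": "22.31.6",
    "cloud_platform": "telco-cloud-3.2.0",   # placeholder platform version
    "workload_bundle": "vran-du-1.4.0",      # placeholder CNF bundle
}

def validate_site_build(site_id, installed):
    """Return deviations from the known-good stack; an empty list means ready to ship."""
    issues = []
    for component, expected in KNOWN_GOOD_STACK.items():
        actual = installed.get(component)
        if actual is None:
            issues.append(f"{site_id}: {component} missing")
        elif actual != expected:
            issues.append(f"{site_id}: {component} is {actual}, expected {expected}")
    return issues

if __name__ == "__main__":
    factory_build = {
        "bios": "2.7.1",
        "nic_firmware": "22.31.6",
        "cloud_platform": "telco-cloud-3.2.0",
        "workload_bundle": "vran-du-1.3.9",   # drift caught before the rack leaves the factory
    }
    problems = validate_site_build("site-0421", factory_build)
    print("\n".join(problems) or "site-0421: matches known-good stack, ready to ship")
```

The point is the one Chris makes: the repeatability comes from doing the integration and validation once, upstream, rather than at each of thousands of sites.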

Published Date: Feb 27, 2023



Manish Singh, Dell Technologies & Doug Wolff, Dell Technologies | MWC Barcelona 2023


 

>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Welcome to the Fira in Barcelona, everybody. This is theCUBE's coverage of MWC 23, day one of that coverage. We have four days of wall-to-wall action going on, the place is going crazy. I'm here with Dave Nicholson, Lisa Martin is also in the house. Today's ecosystem day, and we're really excited to have Manish Singh who's the CTO of the Telecom Systems Business unit at Dell Technologies. He's joined by Doug Wolf who's the head of strategy for the Telecom Systems Business unit at Dell. Gents, welcome. What a show. I mean really the first major MWC or used to be Mobile World Congress since you guys have launched your telecom business, you kind of did that sort of in the Covid transition, but really exciting, obviously a huge, huge venue to match the huge market. So Manish, how did you guys get into this? What did you see? What was the overall thinking to get Dell into this business? >> Manish: Yeah, well, I mean just to start with you know, if you look at the telecom ecosystem today, the service providers in particular, they are looking for network transformation, driving more disaggregation into their network so that they can get better utilization of the infrastructure, but then also get more agility, more cloud native characteristics onto their, for their networks in particular. And then further on, it's important for them to really start to accelerate the pace of innovation on the networks itself, to start more supply chain diversity, that's one of the challenges that they've been having. And so there've been all these market forces that have been really getting these service providers to really start to transform the way they have built the infrastructure in the past, which was legacy monolithic architectures to more cloud native disaggregated. And from a Dell perspective, you know, that really gives us the permission to play, to really, given all the expertise on the work we have done in the IT with all the IT transformations to leverage all that expertise and bring that to the service providers and really help them in accelerating their network transformation. So that's where the journey started. We've been obviously ever since then working on expanding the product portfolio on our compute platforms to bring Teleco great compute platforms with more capabilities than we can talk about that. But then working with partners and building the ecosystem to again create this disaggregated and open ecosystem that will be more cloud native and really meet the objective that the service providers are after. >> Dave Vellante: Great, thank you. So, Doug the strategy obviously is to attack this market, as Manish said, from an open standpoint, that's sort of new territory. It's like a little bit like the wild, wild west. So maybe you could double click on what Manish was saying from a, from a strategy standpoint, yes, the Telecos need to be more flexible, they need to be more open, but they also need this reliability piece. So talk about that from a strategy standpoint of what you guys saw. >> Doug: Yeah, absolutely. As Manish mentioned, you know, Dell getting into open systems isn't something new. You know, Dell has been kind of playing in that world for years and years, but the opportunity in Telecom that came was opening of the RAN, the core network, the edge, all of these with 5G really created a wide opening for us. 
So we started developing products and solutions, you know, built our first Telecom grade servers for open RAN over the last year, we'll talk about those at the show. But you know, as, as Manish mentioned, an open ecosystem is new to Telecom. I've been in the Telecom business along with Manish for, you know, 25 plus years and this is a new thing that they're embarking on. So started with virtualization about five, six years ago, and now moving to cloud native architectures on the core, suddenly there's this need to have multiple parties partner really well, share specifications, and put that together for an operator to consume. And I think that's just the start of really where all the challenges are and the opportunities that we see. >> Where are we in this transition cycle? When the average consumer hears 5G, it feels like it's been around for a long time because it was hyped beforehand. >> Doug: Yeah. >> If you're talking about moving to an open infrastructure model from a proprietary closed model, when is the opportunity for Dell to become part of that? Is it, are there specific sites that have already transitioned to 5G, therefore they've either made the decision to be open or not? Or are there places where the 5G transition has taken place, and they might then make a transition to open RAN with 5G? Where, where are we in that cycle? What does the opportunity look like? >> I'll kind of take it from the topology of the operator, and I'm sure Manish will build on this, but if I look back on the core, it started to get virtualized, you know, back around 2015-16 with some of the lead operators like AT&T et cetera. So Dell has been partnering with those operators for some years. So it really, it's happening on the core, but it's moving with 5G to more of a cloud-like architecture, number one. And number two, they're going beyond just virtualizing the network. You know, they previously had used OpenStack and most of them are migrating to more of a cloud native architecture that Manish mentioned. And that is a bit different in terms of there's more software vendors in that ecosystem because the software is disaggregated also. So Dell's been playing in the core for a number of years, but we brought out new solutions we've announced at the show for the core. And the parts that are really starting that transition of maybe where the core was back in 2015 is on the RAN and on the edge in particular. >> Because NFV kind of predated the ascendancy of cloud. >> Exactly, yeah. >> Right, so it really didn't have the impact that people had hoped. And there's some, when you look back, 'cause it's not the same wine, new bottle as the open systems movement, there are a lot of similarities but you know, you mentioned cloud, and cloud native, you really didn't have, back in the nineties, true engineered systems. You didn't really have AI that, you know, to speak of at the sort of volume of the data that we have. So Manish, from a CTO's perspective, how are you attacking some of those differences in bringing that to market? >> Manish: Yeah, I mean, I think you touched on some very important points there. So first of all, to Doug's point, a lot of this transformation started in the core, right? And as the technology evolution progressed, the opportunities opened up. It has now come into the edge and the radio access network as well, in particular with open RAN.
And so when we talk about the disaggregation of the infrastructure from the software itself and an open ecosystem, this now starts to create the opportunity to accelerate innovation. And I really want to pick up on the point that you'd said on AI, for example. AI and machine learning bring a whole new set of capabilities and opportunities for these service providers to drive better optimization, better performance, better sustainability and energy efficiency on their infrastructure, on and on and on. But to really tap into these technologies, they really need to open that up to third parties implementation solutions that are coming up. And again, the end objective remains to accelerate that innovation. Now that said, all these things need to be brought together, right? And delivered and deployed in the network without any degradation in the KPIs and actually improving the performance on different vectors, right? So this is what the current state of play is. And with this aggregation I'm definitely a believer that all these new technologies, including AI, machine learning, and there's a whole area, host area of problems that can be solved and attacked and are actually getting attacked by applying AI and machine learning onto these networks. >> Open obviously is good. Nobody's ever going to, you know, argue that open is a bad thing. It's like democracy is a good thing, right? At least amongst us. And so, but, the RAN, the open RAN, has to be as reliable and performant, right, as these, closed networks. Or maybe not, maybe it doesn't have to be identical. Just has to be close enough in order for that tipping point to occur. Is that a fair summarization? What are you guys hearing from carriers in terms of their willingness to sort of put their toe in the water and, and what could we expect in terms of the maturity model of, of open RAN and adoption? >> Right, so I mean I think on, on performance that, that's a tough one. I think the operators will demand performance and you've seen experiments, you've really seen more of the Greenfield operators kind of launch. >> Okay. >> Doug: Open RAN or vRAN type solutions. >> So they're going to disrupt. >> Doug: Yeah, they're going to disrupt. >> Yeah. >> Doug: And there's flexibility in an open RAN architecture also for 5G that they, that they're interested in and I think the Brownfield operators are too, but let's say maybe the Greenfield jump first in terms of doing that from a mass deployment perspective. But I still think that it's going to be critical to meet very similar SLAs and end user performance. And, you know, I think that's where, you know, maturity of that model is what's required. I think Brownfield operators are conservative in terms of, you know, going with something they know, but the opportunities and the benefits of that architecture and building new flexible, potentially cost advantaged over time solutions, that's what the, where the real interest is going forward. >> And new services that you can introduce much more quickly. You know, the interesting thing about Dell to me, you don't compete with the carriers, the public cloud vendors though, the carriers are concerned about them sort of doing an end run on them. So you provide a potential partnership for the carriers that's non-threatening, right? 'Cause you're, you're an arms dealer, you're selling hardware and software, right? But, but how do you see that? 
Because we heard in the keynote today, one of the Telecos, I think it was the chairman of Telefonica, said, you know, cloud guys can't do this alone. You know, they need, you know, this massive, you know, build out. And so, what do you think about that in terms of your relationship with the carriers not being threatening? I mean versus, say, potentially the cloud guys, who are also your partners, I understand, it's a really interesting dynamic, isn't it? >> Manish: Yeah, I mean I think, you know, I mean, the way I look at it, the carriers actually need someone like Dell who can really come in, who can bring in the right capabilities, the right infrastructure, but also bring the ecosystem together and deliver a performant solution that they can deploy and that they can trust, number one. Number two, to your point on cloud, I mean, from a Dell perspective, you know, we announced our Dell Telecom Multicloud Foundation and as part of that last year in September, we announced what we call the Dell Telecom Infrastructure Blocks. The first one we announced with Wind River, and this is, think of it as the, you know, hardware and the CaaS layer all pre-integrated with a lot of automation around it, factory integrated, you know, delivered to customers in an integrated model with all the licenses, everything. And so it starts to solve the day zero, day one, day two integration, deployment, and then lifecycle management for them. So to broaden the discussion, our view is it's a multicloud world, the future is multicloud where you can have different clouds which can be optimized for different workloads. So for example, while our work with Wind River initially was very focused on virtualization of the radio access network, we just announced our infrastructure block with Red Hat, which is very much targeted and optimized for core network and edge, right? So, you know, there are different workloads which will require different capabilities also. And so, you know, again, we are bringing those things to these service providers to again, bring those cloud characteristics and cloud native architecture for their network. >> And it's going to be hybrid, to your point. >> David N.: And you just hit on something, you said cloud characteristics. >> Yeah. >> If you look at this through the lens of kind of the general world of IT, sometimes when people hear the word cloud, they immediately leap to the idea that it's a hyperscale cloud provider. In this scenario we're talking about radio towers that have intelligence living on them and physically at the base. And so the cloud characteristics that you're delivering might be living physically in these remote locations all over the place, is that correct? >> Yeah, I mean that, that's true. That will definitely happen over time. But I think, I think we've seen the hyperscalers enter, you know, public cloud providers, enter at the edge and they're dabbling maybe with private, but I think the public RAN is another further challenge. I think that's maybe a little bit down the road for them. So I think that is a different characteristic that you're talking about, managing the macro RAN environment. >> Manish: If I may just add one more perspective on this cloud, and I mean, again, the hyperscale cloud, right? I mean that world's been great when you can centralize a lot of compute capability and you can then start to, you know, do workload aggregation and use the infrastructure more efficiently.
When it comes to Telecom, it is inherently a distributed architecture where you have access, you talked about radio access, transport, and it is inherently distributed because it has to provide the coverage and capacity. And so, you know, it does require different kinds of capabilities when you're going out and about, and this is where I was talking about things like, you know, we just talked, we just have been working on our bare metal orchestration, right? What we are bringing is a capability where you can actually have distributed infrastructure, you can deploy, you can actually manage, do lifecycle management, in a distributed multicloud form. So it does require, you know, a different set of capabilities that need to be enabled. >> Some, when talking about cloud, would argue that it's always been information technology, it always will be information technology, and especially as what we might refer to as public cloud or hyperscale cloud providers are delivering things essentially on premises. It's like, well, is that cloud? Because it feels like some of those players are going to be delivering physical infrastructure outside of their own data centers in order to address this. It seems the nature, the nature of the beast is that some of these things need to be distributed. So it seems perfectly situated for Dell. That's why you guys are both at Dell now and not working for other Telecom places, right? >> Exactly. Exactly, yes. >> It's definitely an exciting space. It's transforming, the networks are under transformation and I do think that Dell's very well positioned to, to really help the customers, the service providers in accelerating their transformation journey with an open ecosystem. >> Dave V.: You've got the brand, and the breadth, and the resources to actually attract an ecosystem. But I wonder if you could sort of take us through your strategy of ecosystem, the challenges that you've seen in developing that ecosystem and what the vision is, ultimately, what's the outcome going to be of that open ecosystem? >> Yeah, I can start. So maybe just to give you the big picture, right? I mean the big picture is disaggregation with performance, right, TCO models to the service providers, right? And it starts at the infrastructure layer, builds on bringing these cloud capabilities, the CaaS layer, right? Bringing the right accelerators. All of this requires us to pull the ecosystem together. So to give you an example on the infrastructure: Telecom grade servers like the XR8000 with Sapphire Rapids, the new Intel processors that we've just announced, and an extended array of servers. These are Telecom grade, short depth, et cetera. You know, the Telecom grade characteristics. Working with partners like Marvell for bringing in the accelerators in there, that's important to, again, drive the performance and optimize for the TCO. Working then with partners like Wind River, Red Hat, et cetera, to bring in the CaaS capabilities so you can start to see how this ecosystem starts to build up. And then very recently we announced our private 5G solution with AirSpan and Expeto on the core side. So bringing those workloads together. Similarly, we have an open RAN solution we announced with Fujitsu. So it's, it's open, it's disaggregated, but bringing all these together.
And one of the last things I would say is, you know, to make all this happen and make all of these, we've also been putting together our OTEL, our open Telecom ecosystem lab, which is very much geared, really gives this open ecosystem a playground where they can come in and do all that heavy lifting, which is anyways required, to do the integration, optimization, and board. So put all these capabilities in place, but the end goal, the end vision again, is that cloud native disaggregated infrastructure that starts to innovate at the speed of software and scales at the speed of cloud. >> And this is different than the nineties. You didn't have something like OTEL back then, you know, you didn't have the developer ecosystem that you have today because on top of everything that you just said, Manish, are new workloads and new applications that are going to be developed. Doug, anything you'd add to what Manish said? >> Doug: Yeah, I mean, as Manish said, I think adding to the infrastructure layers, which are, you know, critical for us to, to help integrate, right? Because we kind of took a vertical Teleco stack and we've disaggregated it, and it's gotten a little bit more complex. So our Solutions Dell Technology infrastructure block, and our lab infrastructure with OTEL, helps put those pieces together. But without the software players in this, you know, that's what we really do, I think in OTEL. And that's just starting to grow. So integrating with those software providers with that integration is something that the operators need. So we fill a gap there in terms of either providing engineered solutions so they can readily build on or actually bringing in that software provider. And I think that's what you're going to see more from us going forward is just extending that ecosystem even further. More software players effectively. >> In thinking about O-RAN, are they, is it possible to have the low latency, the high performance, the reliability capabilities that carriers are used to and the flexibility? Or can you sort of prioritize one over the other from a go to market and rollout standpoint and optimize one, maybe get a foothold in the market? How do you see that balance? >> Manish: Oh the answer is absolutely yes you can have both We are on that journey, we are on that journey. This is where all these things I was talking about in terms of the right kind of accelerators, right kind of capabilities on the infrastructure, obviously retargeting the software, there are certain changes, et cetera that need to be done on the software itself to make it more cloud native. And then building all the surrounding capabilities around the CICD pipeline and all where it's not just day zero or day one that you're doing the cloud-like lifecycle management of this infrastructure. But the answer to your point, yes, absolutely. It's possible, the technology is there, and the ecosystem is coming together, and that's the direction. Now, are there challenges? Absolutely there are challenges, but directionally that's the direction the industry is moving to. >> Dave V.: I guess my question, Manish, is do they have to go in lockstep? Because I would argue that the public cloud when it first came out wasn't nearly as functional as what I could get from my own data center in terms of recovery, you know, backup and recovery is a perfect example and it took, you know, a decade plus to get there. But it was the flexibility, and the openness, and the developer affinity, the programmability, that attracted people. 
Do you see O-RAN following a similar path? Or does it, my question is does it have to have that carrier class reliability today? >> David N.: Everything on day one, does it have to have everything on day one? >> Yeah, I mean, I would say, you know, like again, the Greenfield operators I think we're, we're willing do a little bit more experimentation. I think the operators, Brownfield operators that have existing, you know, deployments, they're going to want to be closer. But I think there's room for innovation here. And clearly, you know, Manish came from, from Meta and we're, we've been very involved with TIP, we're very involved with the O-RAN alliance, and as Manish mentioned, with all those accelerators that we're working with on our infrastructure, that is a space that we're trying to help move the ball forward. So I think you're seeing deployments from mainstream operators, but it's maybe not in, you know, downtown New York deployment, they're more rural deployments. I think that's getting at, you know, kind of your question is there's maybe a little bit more flexibility there, they get to experiment with the technology and the flexibility and then I think it will start to evolve >> Dave V.: And that's where the disruption's going to come from, I think. >> David N.: Well, where was the first place you could get reliable 4K streaming of video content? It wasn't ABC, CBS, NBC. It was YouTube. >> Right. >> So is it possible that when you say Greenfield, are a lot of those going to be what we refer to as private 5G networks where someone may set up a private 5G network that has more functions and capabilities than the public network? >> That's exactly where I was going is that, you know, that that's why you're seeing us getting very active in 5G solutions that Manish mentioned with, you know, Expeto and AirSpan. There's more of those that we haven't publicly announced. So I think you'll be seeing more announcements from us, but that is really, you know, a new opportunity. And there's spectrum there also, right? I mean, there's public and private spectrum. We plan to work directly with the operators and do it in their spectrum when needed. But we also have solutions that will do it, you know, on non-public spectrum. >> So let's close out, oh go ahead. You you have something to add there? >> I'm just going to add one more point to Doug's point, right? Is if you look on the private 5G and the end customer, it's the enterprise, right? And they're, they're not a service provider. They're not a carrier. They're more used to deploying, you know, enterprise infrastructure, maintaining, managing that. So, you know, private 5G, especially with this open ecosystem and with all the open run capabilities, it naturally tends to, you know, blend itself very well to meet those requirements that the enterprise would have. >> And people should not think of private 5G as a sort of a replacement for wifi, right? It's to to deal with those, you know, intense situations that can afford the additional cost, but absolutely require the reliability and the performance and, you know, never go down type of scenario. Is that right? >> Doug: And low latencies usually, the primary characteristics, you know, for things like Industry 4.0 manufacturing requirements, those are tough SLAs. They're just, they're different than the operator SLAs for coverage and, you know, cell performance. They're now, you know, Five9 type characteristics, but on a manufacturing floor. 
>> That's why we don't use wifi on theCUBE to broadcast, we need a hard line. >> Yeah, but why wouldn't it replace wifi over time? I mean, you know, I still have a home phone number that's hardwired to align, but it goes to a voicemail. We don't even have handset anymore for it, yeah. >> I think, well, unless the cost can come down, but I think that wifi is flexible, it's cheap. It's, it's kind of perfect for that. >> Manish: And it's good technology. >> Dave V.: And it works great. >> David N.: For now, for now. >> Dave V.: But you wouldn't want it in those situations, and you're arguing that maybe. >> I'm saying eventually, what, put a sim in a device, I don't know, you know, but why not? >> Yeah, I mean, you know, and Dell offers, you know, from our laptop, you know, our client side, we do offer wifi, we do offer 4G and 5G solutions. And I think those, you know, it's a volume and scale issue, I think for the cost structure you're talking about. >> Manish: Come to our booth and see the connected laptop. >> Dave V.: Well let's, let's close on that. Why don't you guys talk a little bit about what you're going on at the show, I did go by the booth, you got a whole big lineup of servers. You got some, you know, cool devices going on. So give us the rundown and you know, let's end with the takeaways here. >> The simple rundown, a broad range of new powered servers, broad range addressing core, edge, RAN, optimized for those with all the different kind of acceleration capabilities. You can see that, you can see infrastructure blocks. These are with Wind River, with Red Hat. You can see OTEL, the open telecom ecosystem lab where all that playground, the integration, the real work, the real sausage makings happening. And then you will see some interesting solutions in terms of co-creation that we are doing, right? So you, you will see all of that and not to forget the connected laptops. >> Dave V.: Yeah, yeah, cool. >> Doug: Yeah and, we mentioned it before, but just to add on, I think, you know, for private 5G, you know, we've announced a few offers here at the show with partners. So with Expeto and AirSpan in particular, and I think, you know, I just want to emphasize the partnerships that we're doing. You know, we're doing some, you know, fundamental integration on infrastructure, bare metal and different options for the operators to get engineered systems. But building on that ecosystem is really, the move to cloud native is where Dell is trying to get in front of. And we're offering solutions and a much larger ecosystem to go after it. >> Dave V.: Great. Manish and Doug, thanks for coming on the program. It was great to have you, awesome discussion. >> Thank you for having us. >> Thanks for having us. >> All right, Dave Vellante for Dave Nicholson and Lisa Martin. We're seeing the disaggregation of the Teleco network into open ecosystems with integration from companies like Dell and others. Keep it right there for theCUBE's coverage of MWC 23. We'll be right back. (upbeat tech music)
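Manish's bare metal orchestration point (deploying, managing, and lifecycle-managing infrastructure spread across many sites and more than one cloud platform) can be pictured as a simple reconciliation loop over a fleet. This is a minimal sketch with invented site names and pipeline steps, not Dell's actual orchestrator or its API; a real implementation would drive out-of-band management, image services, and the platform APIs where the comments indicate.

```python
# Minimal sketch of centrally walking distributed sites through day-0/day-1
# steps and tracking state. Sites, platforms, and steps are illustrative only.

from dataclasses import dataclass, field

PIPELINE = ["firmware_update", "os_install", "platform_join", "workload_deploy"]

@dataclass
class Site:
    name: str
    platform: str                          # which cloud stack this site should run
    history: list = field(default_factory=list)

def run_step(site, step):
    # A real orchestrator would call out-of-band management (e.g. Redfish),
    # PXE/image services, and the target platform's API here.
    site.history.append(step)

def reconcile(fleet):
    """Bring every site up to the end of the pipeline, skipping completed steps."""
    for site in fleet:
        for step in PIPELINE:
            if step not in site.history:
                run_step(site, step)
        print(f"{site.name} ({site.platform}): {' -> '.join(site.history)}")

if __name__ == "__main__":
    fleet = [
        Site("edge-barcelona-01", platform="wind-river-studio"),
        Site("edge-madrid-07", platform="openshift"),
        Site("core-dc-02", platform="openshift", history=["firmware_update"]),
    ]
    reconcile(fleet)
```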

Published Date: Feb 27, 2023



Driving Business Results with Cloud Transformation | Aditi Banerjee and Todd Edmunds


 

>> Welcome back to the program. My name is Dave Valante and in this session, we're going to explore one of the more interesting topics of the day. IoT for Smart Factories. And with me are, Todd Edmunds,the Global CTO of Smart Manufacturing Edge and Digital Twins at Dell Technologies. That is such a cool title. (chuckles) I want to be you. And Dr. Aditi Banerjee, who's the Vice President, General Manager for Aerospace Defense and Manufacturing at DXC Technology. Another really cool title. Folks, welcome to the program. Thanks for coming on. >> Thanks Dave. >> Thank you. Great to be here. >> Nice to be here. >> Todd, let's start with you. We hear a lot about Industry 4.0, Smart Factories, IIoT. Can you briefly explain, what is Industry 4.0 all about and why is it important for the manufacturing industry? >> Yeah. Sure, Dave. You know, it's been around for quite a while and it's gone by multiple different names, as you said. Industry 4.0, Smart Manufacturing, Industrial IoT, Smart Factory. But it all really means the same thing, its really applying technology to get more out of the factories and the facilities that you have to do your manufacturing. So, being much more efficient, implementing really good sustainability initiatives. And so, we really look at that by saying, okay, what are we going to do with technology to really accelerate what we've been doing for a long, long time? So it's really not- it's not new. It's been around for a long time. What's new is that manufacturers are looking at this, not as a one-of, two-of individual Use Case point of view but instead they're saying, we really need to look at this holistically, thinking about a strategic investment in how we do this. Not to just enable one or two Use Cases, but enable many many Use Cases across the spectrum. I mean, there's tons of them out there. There's Predictive maintenance and there's OEE, Overall Equipment Effectiveness and there's Computer Vision and all of these things are starting to percolate down to the factory floor, but it needs to be done in a little bit different way and really to really get those outcomes that they're looking for in Smart Factory or Industry 4.0 or however you want to call it. And truly transform, not just throw an Industry 4.0 Use Case out there but to do the digital transformation that's really necessary and to be able to stay relevant for the future. I heard it once said that you have three options. Either you digitally transform and stay relevant for the future or you don't and fade into history. Like, 52% of the companies that used to be on the Fortune 500 since 2000. Right? And so, really that's a key thing and we're seeing that really, really being adopted by manufacturers all across the globe. >> Yeah. So, Aditi, it's like digital transformation is almost synonymous with business transformation. So, is there anything you'd add to what Todd just said? >> Absolutely. Though, I would really add that what really drives Industry 4.0 is the business transformation. What we are able to deliver in terms of improving the manufacturing KPIs and the KPIs for customer satisfaction, right? For example, improving the downtime or decreasing the maintenance cycle of the equipments or improving the quality of products, right? So, I think these are lot of business outcomes that our customers are looking at while using Industry 4.0 and the technologies of Industry 4.0 to deliver these outcomes. 
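For reference on the OEE metric Todd lists among the use cases: Overall Equipment Effectiveness is conventionally the product of availability, performance, and quality for a production run. A minimal sketch with made-up shift figures:

```python
# OEE = availability x performance x quality. Shift numbers below are invented.

def oee(planned_minutes, downtime_minutes, ideal_cycle_s, total_count, good_count):
    run_minutes = planned_minutes - downtime_minutes
    availability = run_minutes / planned_minutes
    performance = (ideal_cycle_s * total_count) / (run_minutes * 60)
    quality = good_count / total_count
    return availability * performance * quality, (availability, performance, quality)

if __name__ == "__main__":
    score, (a, p, q) = oee(planned_minutes=480, downtime_minutes=47,
                           ideal_cycle_s=1.2, total_count=19000, good_count=18450)
    print(f"availability={a:.1%} performance={p:.1%} quality={q:.1%} OEE={score:.1%}")
```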
>> So, Aditi, I wonder if I could stay with you and maybe this is a bit esoteric but when I first first started researching IoT and Industrial IoT 4.0, et cetera, I felt, well, there could be some disruptions in the ecosystem. I kind of came to the conclusion that large manufacturing firms, Aerospace Defense companies the firms building out critical infrastructure actually had kind of an incumbent advantage and a great opportunity. Of course, then I saw on TV somebody now they're building homes with 3D printers. It like blows your mind. So that's pretty disruptive. But, so- But they got to continue, the incumbents have to continue to invest in the future. They're well-capitalized. They're pretty good businesses, very good businesses but there's a lot of complexities involved in kind of connecting the old house to the new addition that's being built, if you will, or this transformation that we're talking about. So, my question is, how are your customers preparing for this new era? What are the key challenges that they're facing in the the blockers, if you will? >> Yeah, I mean the customers are looking at Industry 4.0 for Greenfield Factories, right? That is where the investments are going directly into building the factories with the new technologies, with the new connectivities, right? For the machines, for example, Industrial IoT having the right type of data platforms to drive computational analytics and outcomes, as well as looking at Edge versus Cloud type of technologies, right? Those are all getting built in the Greenfield Factories. However, for the Install-Based Factories, right? That is where our customers are looking at how do I modernize these factories? How do I connect the existing machine? And that is where some of the challenges come in on the legacy system connectivity that they need to think about. Also, they need to start thinking about cybersecurity and operation technology security because now you are connecting the factories to each other. So, cybersecurity becomes top of mind, right? So, there is definitely investment that is involved. Clients are creating roadmaps for digitizing and modernizing these factories and investments in a very strategic way. So, perhaps they start with the innovation program and then they look at the business case and they scale it up, right? >> Todd, I'm glad you did brought up security, because if you think about the operations technology folks, historically they air-gaped the systems, that's how they created security. That's changed. The business came in and said, 'Hey, we got to connect. We got to make it intelligence.' So, that's got to be a big challenge as well. >> It absolutely is, Dave. And, you know, you can no longer just segment that because really to get all of those efficiencies that we talk about, that IoT and Industrial IoT and Industry 4.0 promise, you have to get data out of the factory but then you got to put data back in the factory. So, no longer is it just firewalling everything is really the answer. So, you really have to have a comprehensive approach to security, but you also have to have a comprehensive approach to the Cloud and what that means. And does it mean a continuum of Cloud all the way down to the Edge, right down to the factory? It absolutely does. Because no one approach has the answer to everything. The more you go to the Cloud the broader the attack surface is. 
So, what we're seeing is a lot of our customers approaching this from kind of that hybrid right ones run anywhere on the factory floor down to the Edge. And one of the things we're seeing too, is to help distinguish between what is the Edge and bridge that gap between, like, Dave, you talked about IT and OT and also help what Aditi talked about is the Greenfield Plants versus the Brownfield Plants that they call it, that are the legacy ones and modernizing those. It's great to kind of start to delineate what does that mean? Where's the Edge? Where's the IT and the OT? We see that from a couple of different ways. We start to think about really two Edges in a manufacturing floor. We talk about an Industrial Edge that sits... or some people call it a Far Edge or a Thin Edge, sits way down on that plant, consists of industrial hardened devices that do that connectivity. The hard stuff about how do I connect to this obsolete legacy protocol and what do I do with it? And create that next generation of data that has context. And then we see another Edge evolving above that, which is much more of a data and analytics and enterprise grade application layer that sits down in the factory itself; that helps figure out where we're going to run this? Does it connect to the Cloud? Do we run Applications On-Prem? Because a lot of times that On-Prem Application it needs to be done. 'Cause that's the only way that it's going to work because of security requirements, because of latency requirements performance and a lot of times, cost. It's really helpful to build that Multiple-Edge strategy because then you kind of, you consolidate all of those resources, applications, infrastructure, hardware into a centralized location. Makes it much, much easier to really deploy and manage that security. But it also makes it easier to deploy new Applications, new Use Cases and become the foundation for DXC'S expertise and Applications that they deliver to our customers as well. >> Todd, how complex are these projects? I mean, I feel like it's kind of the the digital equivalent of building the Hoover Dam. I mean, its.. so yeah. How long does a typical project take? I know it varies, but what are the critical success factors in terms of delivering business value quickly? >> Yeah, that's a great question in that we're- you know, like I said at the beginning, this is not new. Smart Factory and Industry 4.0 is not new. It's been, it's people have been trying to implement the Holy Grail of Smart Factory for a long time. And what we're seeing is a switch, a little bit of a switch or quite a bit of a switch to where the enterprises and the IT folks are having a much bigger say and they have a lot to offer to be able to help that complexity. So, instead of deploying a computer here and a Gateway there and a Server there, I mean, you go walk into any manufacturing plant and you can see Servers sitting underneath someone's desk or a PC in a closet somewhere running a critical production application. So, we're seeing the enterprise have a much bigger say at the table, much louder voice at the table to say, we've been doing this enterprise all the time. We know how to really consolidate, bring Hyper-Converged Applications, Hyper-Converged Infrastructure to really accelerate these kind of applications. Really accelerate the outcomes that are needed to really drive that Smart Factory and start to bring that same capabilities down into the Mac on the factory floor. 
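Todd splits the factory into an industrial (far, or thin) edge that talks to obsolete legacy protocols and "creates that next generation of data that has context," with an enterprise-grade edge above it for data and analytics. The sketch below illustrates only that contextualization step; the site, line and asset identifiers are invented, and a real gateway would read from a protocol adapter (OPC UA, Modbus and the like) rather than the stubbed reading used here.

```python
import json
import time

# Static context a plant engineer might configure once per site and asset.
# These identifiers are illustrative, not from any real deployment.
SITE_CONTEXT = {
    "site": "plant-barcelona-01",
    "line": "stamping-line-3",
    "asset": "press-07",
    "units": {"spindle_temp": "degC", "vibration": "mm_s"},
}


def read_legacy_sensor():
    """Stand-in for a protocol adapter (e.g. Modbus or OPC UA) returning raw values."""
    return {"spindle_temp": 71.4, "vibration": 2.9}


def contextualize(raw_reading):
    """Wrap a raw reading with the metadata the upper analytics edge needs."""
    return {
        "timestamp": time.time(),
        **SITE_CONTEXT,
        "measurements": raw_reading,
    }


if __name__ == "__main__":
    enriched = contextualize(read_legacy_sensor())
    # In practice this would be published to a broker or the analytics tier;
    # here we just print the enriched event.
    print(json.dumps(enriched, indent=2))
```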
That way, if you do it once to make it easier to implement, you can repeat that. You can scale that. You can manage it much easily and you can then bring that all together because you have the security in one centralized location. So, we're seeing manufacturers that first Use Case may be fairly difficult to implement and we got to go down in and see exactly what their problems are. But when the infrastructure is done the correct way when that- Think about how you're going to run that and how are you going to optimize the engineering. Well, let's take that what you've done in that one factory and then set. Let's make that across all the factories including the factory that we're in, then across the globe. That makes it much, much easier. You really do the hard work once and then repeat. Almost like cookie cutter. >> Got it. Thank you. >> Aditi, what about the skillsets available to apply these to these projects? You got to have knowledge of digital, AI, Data, Integration. Is there a talent shortage to get all this stuff done? >> Yeah, I mean, definitely. Lot different types of skillsets are needed from a traditional manufacturing skillset, right? Of course, the basic knowledge of manufacturing is important. But the digital skillsets like IoT, having a skillset in in different Protocols for connecting the machines, right? That experience that comes with it. Data and Analytics, Security, Augmented Virtual Reality Programming. Again, looking at Robotics and the Digital Twin. So, the... It's a lot more connectivity software, data-driven skillsets that are needed to Smart Factory to life at scale. And, you know, lots of firms are recruiting these types of resources with these skill sets to accelerate their Smart Factory implementation, as well as consulting firms like DXC Technology and others. We recruit, we train our talent to provide these services. >> Got it. Aditi, I wonder if we could stay on you. Let's talk about the partnership between DXC and Dell. What are you doing specifically to simplify the move to Industry 4.0 for customers? What solutions are you offering? How are you working together, Dell and DXC to bring these to market? >> Yeah, Dell and DXC have a very strong partnership and we work very closely together to create solutions, to create strategies and how we are going to jointly help our clients, right? So, areas that we have worked closely together is Edge Compute, right? How that impacts the Smart Factory. So, we have worked pretty closely in that area. We're also looked at Vision Technologies. How do we use that at the Edge to improve the quality of products, right? So, we have several areas that we collaborate in and our approaches that we want to bring solutions to our client and as well as help them scale those solutions with the right infrastructure, the right talent and the right level of security. So, we bring a comprehensive solution to our clients. >> So, Todd, last question. Kind of similar but different, you know. Why Dell, DXC, pitch me? What's different about this partnership? Where are you confident that you're going to be to deliver the best value to customers? >> Absolutely. Great question. You know, there's no shortage of Bespoke Solutions that are out there. There's hundreds of people that can come in and do individual Use Cases and do these things and just, and that's where it ends. What Dell and DXC Technology together bring to the table is we do the optimization of the engineering of those previously Bespoke Solutions upfront, together. 
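Going back to Todd's point a little earlier in this exchange about doing the hard engineering once and then repeating it "almost like cookie cutter" across factories: one way to picture that is templated, per-site configuration. The sketch below stamps out a per-factory deployment descriptor from a shared template; the field names and site list are hypothetical and not drawn from any Dell or DXC tooling.

```python
import copy

# Shared template: the engineering work done once for the first factory.
BASE_STACK = {
    "infrastructure": {"cluster_size": 3, "profile": "hyperconverged"},
    "applications": ["historian", "computer-vision-qc", "predictive-maintenance"],
    "security": {"segmentation": "enabled", "remote_access": "vpn-only"},
}

# Per-site differences are kept deliberately small.
SITES = {
    "plant-monterrey": {"cluster_size": 3},
    "plant-gdansk": {"cluster_size": 5},
    "plant-osaka": {"cluster_size": 3},
}


def render_site_config(site_name, overrides):
    """Produce one factory's config by overlaying site specifics on the template."""
    cfg = copy.deepcopy(BASE_STACK)
    cfg["site"] = site_name
    cfg["infrastructure"].update(overrides)
    return cfg


if __name__ == "__main__":
    for name, overrides in SITES.items():
        print(render_site_config(name, overrides))
```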
The power is our scalable, enterprise-grade, industry-standard infrastructure, combined with our expertise in delivering packaged solutions, and DXC's expertise and reputation as a global trusted advisor. DXC is really, really good at scaling and repeating those solutions, and Dell brings the infrastructure and 30,000 people across the globe who are really good at making that infrastructure repeatable at scale. That lessens the risk our customers take on and accelerates those solutions. So again, it's not just one individual solution, it's all of the solutions together, and they don't just drive use cases, they drive outcomes. >> Yeah, you're right. The partnership goes way back. I first encountered it, I think it was May of 2010. We had folks from both companies on, I think you were talking about converged infrastructure, and I had a customer on, and it was actually a manufacturing customer. It was quite interesting. And back then it was, how do we kind of replicate what's coming in the Cloud? And you guys have obviously taken it into the digital world. Really want to thank you for your time today. Great conversation and love to have you back. >> Thank you so much. It was a pleasure speaking with you. I agree. >> All right, keep it right there for more discussions that educate and inspire on "The Cube."

Published Date : Feb 16 2023


Phil Mottram & David Hughes, HPE | HPE Discover 2022


 

>>The cube presents HPE discover 2022 brought to you by HPE. >>Welcome back to the Venetian convention center. You're watching the Cube's coverage of HPE discover 2022. The first discover live discover in three years, 2019 was the last one. The cube we were just talking about. This has been at H HP discover. Now HPE since 2011, my co-host John furrier. We're pleased to welcome Phil Maru. Who's the executive vice president and general manager of HPE Aruba. And he's joined by David Hughes, the chief product and technology officer at HPE Aruba gentleman. Welcome to the cube. Good to see you. Thank you. Thank >>You. >>Okay, so you guys talk a lot, Phil, about the intelligent edge. Yep. Okay. What do you, what do you mean by that? >>Yeah, so we, well, we're kind of focused on, is providing technology to customers that sits out at the edge and typically the edge would be, uh, any location out of the data center or out of the cloud. So for the most part, our customers would deploy our technology either in their office premises or maybe retail premises shops, uh, maybe deploying out of the home where their employees are on a factory floor. And we're really talking about technology to connect both people and devices back to, um, systems and technology throughout an organization. So, but >>I, I, you know, sometimes I call it the near edge and the far edge yeah. Near, near edge. Maybe as we saw home Depot up on the stage yesterday far, Edge's like space. Right. You're including all of that. Right. That's >>Edge. >>Yeah. And actually we, we, we, you know, we've got a broad range of technology that actually works within the data center as well. So, you know, what we are focused on is providing, uh, network technology, software and services. And, you know, for the most part, our heritage is at the edge, but it's more pervasive than that. So >>If you have the edge, you got connectivity and power, that's an edge. How much, um, is the physical world being connected now you're seeing robotics automation. Yeah. Ex and with machine learning specifically in compute, really driving a new acceleration at the edge. What you, how do you guys view that? What's your reaction? Yeah. >>I think, look, it, I think as connectivity is improving and that's both in terms of wifi connectivity, so, you know, wifi technology continues to, uh, advance and also you've got this new kind of private 5g area, just generally connectivity is becoming more pervasive and that's helping some industries that haven't previously embraced it. And I think industrial is, is one of the big ones. So, you know, historically it was difficult for kind of car manufacturers to really enable a factory floor. But now the connectivity is connectivity is better. That gives them the opportunity to be able to really change how they do things. So >>David, if you do take an outside in view, mm-hmm <affirmative>, uh, and, and, and when you talk to customers, what are they telling you and how is that informing your product strategy? >>Yeah, well, you >>Know, I think there's, there's several themes we hear. One is, you know, it's really important, better work from anywhere they wanna enable their employees, um, to get the same experience, whether they're at home or on the road or in their branch office or at headquarters. 
Um, you know, people are also concerned that as they deploy, deploy all of this IOT and pursuit of digital transformation, they don't want those devices to be a weak point where someone breaks into one device and moves naturally, um, across the network. So they want to have this great experience for their customers and their users, but they wanna make sure that they're not compromising security, um, in any way. And so it's about getting that balance between ease of use and, and security. That's one of the primary things we hear, >>You know, Dave, one of the things we talked about many, many years ago was when hybrid and was starting to come out multi-cloud was on the, on the table early on. Uh, we were, we were saying, Hey, the data center is just a big edge, right? I mean, if you have cloud operations and you see what's going on with GreenLake here now, the momentum hybrid cloud is cloud operations, right? An edge off data centers to a big edge on premises. And you got the edge as you have cloud operations, like say GreenLake, plugging in partners and diverse environments. You're connecting, not just branch offices that are per perimeter based. You have no perimeter and you have now other companies connecting mm-hmm <affirmative> so you got data and you got network. How do you guys see that transition as GreenLake has a very big ecosystem part of it, partners and whatnot. >>Yeah. So, you know, I think for us, um, the ecosystem of partners that we have is critical in terms of delivering what our customers need. And, you know, I think one of the really important areas is around verticals. So, um, you know, when you think about different verticals, they have similar problems, but you need to tailor the solutions. Um, to each of those, you know, we are talking a bit about devices and people. When you look at say a healthcare environment, there can be 30 devices there for each patient. And, um, so there's connecting all those devices securely, but we have partners that will help pull all of that together that may be focused on, um, you know, medical environment that may focused on stadiums. They may be focused on industrial. Um, so having partners that understand those verticals and working closely with them to deliver solutions is important in our go to market. >>So another kind of product question and related to what you just said, David, I got connectivity, speed, reliability, cost security, or maybe a missing something. But you, you said earlier, you gonna gotta balance those. How do you do that? And do you do that for the specific use cases? Like for instance, you just mentioned stadiums and 81 and how do you balance those and, and do you tailor those for the use cases? >>Yeah, well, I think it depends on the customer and different people have different views about where they need to be. So some people are, are so afraid about security. They wanna be air gapped and completely separate than the internet. That would be one extreme mm-hmm <affirmative> other people, you know, look at it and see what's happening with COVID with everyone working from home with people being able to work from Starbucks or the airport. And they're beginning to think, well, why is the branch that much different? And so what I think we are seeing is, you know, a reevaluation of how people connect to, um, the apps they're using and, uh, you know, you, you, you've probably for sure heard people talking about zero trust, talking about micro segmentation. 
You know, I think what we we see is that people wanna be able to build a network in a way where rather than any device being able to talk to any device or any person, which is where the internet started, we wanna build to build networks where people or devices can only talk to the destinations that are necessary for them to do their job. >>And so a lot of the technology that we are building into the network is really about making security intrinsic by limiting what can talk to what that's >>Actually micro, micro segmentations, zero trust, um, these all point to a modern, the modern network, as you say, Antonio Neri was just on the cube, talking about programmability, substrate, the words like that come to mind, what is the modern network look like? I mean, you have to be agile. You have to be programmable. You have to have security. Can you describe in your words, what does the modern network these days need to look like? How should customers think about architecting them? What are some of the table stakes and what are some of the differentiators that customers need to do to have a modern network? >>Yeah, well, you covered off a coup a few quarter, one there with clarity and so on. So let me pick one that you didn't mention. And, and I, you know, I think we are seeing, you know, a lot of interest around network as a service. And, you know, when we think about network as a service, we think about it broadly, um, you know, for consumers, we're getting more and more used to buying things as a service versus buying a thing. When you, when you get Alexa, you care about how well she answers your questions, you don't care about what CPU is or how much Ram Alexa has. And likewise with networking, people are caring about the outcomes of keeping their employees connected, keeping their, their devices and systems running. And so what for us, what NASA is all about is that shift of thinking about a network as being a collection of devices that get managed to being a framework for connectivity and running it from the point of view of those outcomes. >>And so whether, you know, it's about CapEx versus OPEX or about do it yourself, managing the network yourself versus outsourcing that, um, or it's about the, you know, Greenfield versus brownfield, each of our customers has got a different starting point, but they're all getting heading towards this destination of being able to treat their network as a service. And so that is, you know, a key area of innovation for us and whether it's big customers like home Depot that you heard about yesterday, um, where we kind of manage everything for them on a, as on a store basis, um, for connectivity, um, or, you know, the recent, um, skew based nest that we launched, which is a really scalable foundation for our partners to build nest offerings around. Um, we see this as a key part of network modernization. Yeah. >>And one of the things, again, that's great stuff. Uh, infrastructure is code, which was really kind of pioneer the DevOps movement in cloud kind of as platform level. And you got data ops now and AI at the top of the stack, we were always wondering when network as code was gonna come, uh, and where you actually have it, where it's programmable. I mean, we all know what policies do do. They're good. That's all great network as code. >>Yeah. >>And that's the concept that's like DevOps, it's like, make it work just seamlessly, just be always on. And >>Yeah. And smart, you know, people are always looking for the, for the easy button. 
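David's description of zero trust and micro-segmentation boils down to a default-deny model in which a device or person can only reach the destinations its role requires. The toy policy check below illustrates that idea; the roles, ports and rules are invented, and this is not how Aruba products actually express or enforce policy.

```python
# Allow-list of (source role, destination role, destination port).
# Anything not listed is denied -- the "default deny" at the heart of zero trust.
ALLOWED_FLOWS = {
    ("infusion-pump", "clinical-app-server", 443),
    ("hvac-controller", "building-mgmt", 8883),
    ("employee-laptop", "clinical-app-server", 443),
}


def is_permitted(src_role: str, dst_role: str, dst_port: int) -> bool:
    """Return True only if this flow is explicitly allowed for these roles."""
    return (src_role, dst_role, dst_port) in ALLOWED_FLOWS


if __name__ == "__main__":
    # An infusion pump reaching its application server is fine...
    print(is_permitted("infusion-pump", "clinical-app-server", 443))   # True
    # ...the same pump trying to reach the building management system is not.
    print(is_permitted("infusion-pump", "building-mgmt", 8883))        # False
```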
Um, and so they want, they want things to operate easily. They want it to be easy to manage. And, you know, I actually think there's a little bit of a, um, a conflict between networkers code and the easy button, right? So it depends on the class of customers. Some customers like financials, for instance, have a huge software development organizations that are extremely capable that could, that can go with program ability that want things as code. But the majority of the, of, of the verticals that we deal with, um, don't have those big captive software organizations. And so they're really looking for automation and simplicity and they wanna outsource that problem. So in Aruba central, we have invested a lot to make it really easy for our customers to, um, get what they need, you know, is that movement of zero code. It's more like zero code. They want, they want something packaged now >>The headless networks. Yeah. Low code, no code >>Kind of thing. Yeah, that's right. And, you know, obviously for people that have the sophistication that want to, um, do the most advanced things, we have APIs. And so we support that kind of programmable way of doing things. But I'd say that that's that's, those are more specialized customers. So >>Phil, yeah. Uh, is that the strategy? I mean, David listed off a number of, of factors here is that Aruba's strategy to modernize networks to actually create the easy button through network as a service is as simple as dial tone. Is that how we >>Should think? I mean, the way I think about the strategy is I think about it as a triangle, really, along the bottom, we've got the products and services that we offer and we continue to add more products and services. We either buy companies such as silver peak a couple of years ago, or we build, uh, additional products and by, and by the way, that's in response to customers who are frustrated with some other suppliers and wanna move on mass over to, uh, companies like ourselves. So at the bottom layer of the product and services, and then the other side of the triangle one would be NAS, which we talked about, which is kind of move to buying network and as a service. And then the other side of the triangle is the platform, which for us is river central, which is part of HP GreenLake. And that's really all about, you know, kind of making it easy for customers to manage networks and Aruba central right now has got about 120,000 live customers on it. It connects to about 2 million devices and it's collecting a lot of data as well. So we anonymously collect data from all of our customers. We've got one and a half billion data points in the platform. And what we do is we let that data kind of look for anomalies and spot problems on the network before they happen for customers. >>So Aruba central predated, uh, uh, GreenLake GreenLake. Yeah. And, and so did you write to GreenLake through GreenLake APIs? How, what was the engineering work to accomplish that? >>Yeah, so really, um, Aruba central is kind of the Genesis of the GreenLake platform. So we took Aruba central and made it more generic okay. To build the GreenLake cloud platform. And you know, what we've done very recently is bring, bring Aruba into that unified infrastructure, along with storage and compute. So the same sign-on applies across all of HP's, um, products, the same way of managing licenses, managing devices. And so it provides us, uh, great foundation going forwards to, um, solve more comprehensively. Our customers automation requires. 
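Phil's earlier point in this exchange is that Aruba Central anonymizes telemetry from its installed base (on the order of one and a half billion data points) and lets the data surface anomalies before they turn into problems for customers. The snippet below is only a toy version of that idea, flagging samples that sit well away from the baseline; it is not Aruba's actual algorithm, and the retry counts are fabricated.

```python
from statistics import mean, stdev


def find_anomalies(samples, threshold=2.5):
    """Flag samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]


if __name__ == "__main__":
    # e.g. per-interval client retry counts on one access point (made-up data)
    retries = [12, 14, 11, 13, 15, 12, 13, 96, 14, 12]
    print(find_anomalies(retries))  # [(7, 96)]
```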
>>So, so just a quick follow. So Aruba actually was the main spring of GreenLake from the standpoint of okay. Sing, like you said, single sign on a platform that could evolve and become more, more generic. Yes. So, okay. So that was a nice little, um, bonus of the acquisition, you know, it's now the whole company >><laugh> Aruba taking over. >>Yeah. There's been a lot of work to, to, uh, you know, make it generic and, and widely applicable. Right. Yeah. Um, so, but >>You were purpose >>Built for yeah. Well it's foundational. Yes. So foundational for GreenLake, they built on top of it. Yeah. So you mentioned the data points, billions of data points. So I gotta ask you, cuz we're seeing this, um, copy more and more with machine learning, driving a lot of acceleration, cuz you can do simulations with machine learning and compute. We had Neil McDonal done earlier. He's a compute guy, you got networking. So with all this, um, these services and devices being put on and off the network humans, can't actually figure this out. You can discover what's on the network. How are you guys viewing the discovery and monitoring because there's no perimeter okay. On the network anymore. So I want to know what's out there. Um, how do you get through it? How does machine learning and AI play into this? >>Yeah. I mean, what we are trying to do is obviously flag trends for customers and say, Hey look, you know, we can either see something happening with your network. So there's a particular issue over here and we need to, I dunno, free up more capacity to solve that. Or we're looking at how their network is running and then comparing that with anonymized data from all of our other customers as well. So we're just helping find those problems. But yeah, you're right. I mean, I think it is becoming more of an issue for organizations, you know, how do you manage the network, >>But you see machine learning and AI playing a big part. >>Yeah, yeah. Yeah. I think, uh, AI massively and, and other technology advances as well that we make. So recently we, uh, also announced the availability of location awareness within our access points. And that might sound like a simple thing. But when network, when companies build out their networks, they often lose or they potentially could lose the records as to, well, where were the access points that we laid out and actually where are they not within, you know, 20 feet, but where actually are they? So we introduced kind of location, finding technology as well into our, uh, access points to make it easy for >>Customers. So Aruba one of the best, if not the best acquisition. I think that HP E has made, um, it's made by three par was, you know, good. It saved the storage business. Okay. That was more of a defensive play. Uh, but to see Aruba, it's a growth business. You guys report on it every quarter. Yeah. It's obviously a key ingredient to enable uh, uh, GreenLake and, and a that's another example, nimble was similar. We're much smaller sort of more narrow, but taking the AI ops piece and bringing it over. So it's, it was great to see HPE executing on some of its M and a as opposed to just leaving them alone and not really leveraging 'em. So guys, yeah. Congratulations really appreciate you guys coming on and explaining that. Congratulations on all the, all the great work and thanks for coming on the cube. Okay. >>Thank you guys. Yeah. Thanks for having us. >>All right, John, and I'll be back right after this short break. 
You're watching the cube, the leader in enterprise tech coverage from HPE Las Vegas, 2022. We'll be right back.

Published Date : Jun 29 2022


William Choe and Shane Corban | Aruba & Pensando Announce New Innovations


 

>>Hello and welcome to The Power of And, where HPE Aruba and Pensando are changing the game, the way customers scale in the cloud, and what's next in the evolution of switching, everyone. I'm John Furrier with theCUBE. I'm here with Shane Corban, Director of Technical Product Management at Pensando, and William Choe, Vice President of Product Management, Aruba HPE. Gentlemen, thank you for coming on and doing a deep dive and going into the big news. So the first question I want to ask you guys is, what do you see from a market and customer perspective that kicked this project off? Amazing results over the past year or so. Where did it all come from? >>It's a great question, John. So when we were doing our homework, there were actually three very clear customer challenges. First, security threats are largely spawned from within the perimeter. In fact, Forrester highlights that 80% of threats originate within the internal network. Secondly, workloads are largely distributed, creating a ton of east-west traffic. And then lastly, network services such as firewalls, load balancers and VPN aggregators are expensive, they're centralized, and they ultimately result in service-chaining complexity. So, Shane... >>So go ahead, Shane. >>Yeah. Additionally, when we spoke to our customers after initially launching the distributed services platform, these compliance challenges clearly became apparent to us. And while they saw the architectural value of adopting what the largest public cloud providers have done, putting a SmartNIC in each compute node to provide these stateful services, enterprise customers were still struggling with the need to upgrade fleets of brownfield servers and the associated per-node cost of adding a SmartNIC to every compute node. Typically, the traffic volumes on a per-node basis within an enterprise data center are significantly lower than in the cloud. Thus we saw an opportunity here, in conjunction with Aruba, to develop a new category of switching product to share the processing capabilities of our unique intellectual property around our DPU across a rack of servers. That, net-net, delivers the same set of services through a new category of platform, enabling a distributed services architecture and ultimately addressing the compliance and TCO concerns, generating huge TCO and ROI gains for customers. >>You know, one of the things that we've been reporting on with you guys, as well as the cloud scale, is the volume of data and just the performance and scale. I think the timing of this partnership and the product development is right on point. You've got the edge right around the corner, the more distributed nature of cloud operations, a huge change in the marketplace. So great timing on the origination story there. Great stuff. Tell me more about the platform itself. The details, what's under the hood, the hardware, OS, what are the specs? >>Yeah, so we started with a very familiar premise. Aruba customers are already leveraging CX with an edge-to-cloud common operating model and deploying leaf-and-spine networks. Plus, we're excited to introduce the industry's first distributed services switch, where the first configuration has 48 25-gig ports with 100-gig uplinks, running the Aruba CX cloud-native operating system with Pensando ASICs and software inside, enabling layer 4 through 7 stateful services. Shane, you want to elaborate on that? >>Let me elaborate on that a little further. 
Um, you know, as we spoke, existing platforms and how customers were seeking to address these challenges were inherently limited by the diocese and that thus limited their scale and performance and ability in traditional switching platforms to deliver truly stable functions in in a switching platform. This was, you know, architecturally from the ground up. When we developed our DPU 1st and 2nd generation, we delivered it or we we we built it with staples services in in mind from the Gecko. We we leverage to clean state designed with RP four program with GPU, we evolved to our seven nanometer based DPU right now, which is essentially enabling software and silicon and this has generated a new level of performance scale flexibility and capability in terms of services this serves as the foundation for or 200 gig card where we're taking the largest cloud providers into production for. And the DPU itself is designed inherently to process state track state connections and state will flow is a very, very large scale without impacting performance. And in fact, the two of these deep you component service, their services foundation of the C X 10-K And this is how we enable states of functions in a switching platform. Functions like stable network network fire walling, stable segmentation, enhance programmable telemetry. Which we believe will bring a whole lot of value to our customers. And this is a, a platform that's inherently programmable from the ground up. We can we can build and and leverages platform to build new use cases around encryption, enabling state for load balancing, stable nash to name a few. But the key message here is this is this is a platform with the next generation of architecture is in mind is programmed but at all levels of the stack and that's what makes it fundamentally different than anything else. >>I want to just double click on that if you don't mind before we get to the competitive question because I think you brought up the state thing, I think this is worth calling out if you guys don't mind commenting more on this state issue because this is big cloud. Native developers right now want speed, they're shifting left at the Ci cd pipeline with program ability. So going down and having the program ability and having state is a really big deal. Can you guys just expand on that a little bit more and why it's important and how hard it really is to pull off. >>I I can start I guess. Well um it's very hard to pull off because of the sheer amount of connections you need to track when you're developing something like a state, full firewall or state from load balancer. A key component of that is managing the connections at very, very large scale and understanding what's happening with those connections at scale without impacting application performance. And this is fundamentally different. A traditional switching platform regardless of how it's deployed today in a six don't typically process and manage state like this. Memory resources within the shape aren't sufficient. Um the policy scale that you can implement on a platform aren't sufficient to address and fundamentally enable deployable fire walling or load balancing or other state services. >>That's exactly right. 
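Shane's distinction between a traditional switching ASIC and the Pensando DPU comes down to tracking per-connection state for very large numbers of flows without hurting performance. The sketch below models the bare minimum of what "tracking state" means, a table keyed by the 5-tuple with per-flow counters and a coarse connection state; it is a deliberately simplified illustration, not a description of how the DPU actually stores, updates or ages flows.

```python
import time
from dataclasses import dataclass, field

# A flow is identified by its 5-tuple: (src_ip, dst_ip, src_port, dst_port, protocol).
FlowKey = tuple


@dataclass
class FlowState:
    state: str = "NEW"          # e.g. NEW -> ESTABLISHED -> CLOSED for TCP
    packets: int = 0
    bytes: int = 0
    last_seen: float = field(default_factory=time.time)


class FlowTable:
    """Tiny model of a connection-tracking table for a stateful service."""

    def __init__(self):
        self.flows: dict[FlowKey, FlowState] = {}

    def update(self, key: FlowKey, size: int, tcp_flags: set[str]) -> FlowState:
        entry = self.flows.setdefault(key, FlowState())
        entry.packets += 1
        entry.bytes += size
        entry.last_seen = time.time()
        if "SYN" in tcp_flags and "ACK" in tcp_flags:
            entry.state = "ESTABLISHED"
        elif "FIN" in tcp_flags or "RST" in tcp_flags:
            entry.state = "CLOSED"
        return entry


if __name__ == "__main__":
    table = FlowTable()
    key = ("10.0.1.5", "10.0.2.9", 51514, 443, "TCP")
    table.update(key, 60, {"SYN"})
    table.update(key, 60, {"SYN", "ACK"})
    print(table.flows[key])
```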
So the other kind of key point here is that if you think about the sophistication of different security threats, it does really require you to be able to look at the entire packet and more so be able to look at the entire flow and be able to log that history so that you can get much better heuristics around different anomalies. Security threats that are emerging today. >>That's a great great point. Thanks for bringing that extra extra point out, I would just add to this, we're reporting this all the time when silicon angle in the cube is that you know, the you know, the the automation wave that's coming with around data, you know, it's the center of data now, not date as soon as we heard earlier on with the presentation data drives automation having that enabled with state is a real big deal. So I think that's really worth calling out now. I got to ask the competition question, how is this different? I mean this is an evolution, I would say it's a revolution you guys are being humble um but how is this different from what customers can deploy today >>architecturally, if you take a look at it? So we've, we've spoken about the technology and fundamentally in the platform, what's unique in the architecture but foundational e when customers deploy stable services, they're typically deployed leveraging traditional big box appliances for east west or workload based agents which seek to implement stable security for each East west architectural, what we're enabling is staples services like fire walling, segmentation can scale with the fabric and are delivered at the optimal point for east west which is through the Leaf for access their of the network and we do this for any type of workload. Being deployed on a virtualized compute node being deployed on a containerized, our worker node being deployed on bare metal agnostic of topology. It can be in the access layer of a three tier design and a data center. It can be in the leaf layer of the excellent VPN based fabric. But the goal is an all centrally managed to a single point of orchestration control which William we'll talk about shortly. The goal of this is to to drive down the TCO of your data center as a whole by allowing you to retire legacy appliances that are deployed in in east west role, not utilized host based agents and thus save a whole lot of money. And we've modeled on the order of 60 to 70% in terms of savings in terms of the traditional data center pod design of 1000 compute nodes which will be publishing and as as we go forward, additional services as we mentioned like encryption, this platform has the capability to terminate up to 800 gigs of line, right encryption, I P sec VPN per platform state will not load balancing and this is all functionality will be adding to this existing platform because it's programmable as we mentioned from the ground up. >>What are some of the use cases lead and one of the top use case. What's the low hanging fruit? And where does this go? Service providers enterprise, what are the types of customers you guys see implementing? >>Yeah, that's what's really exciting about the C X 10,000 we actually see customer interest from all types of different markets, whether it be higher education service providers to financial services, basically all enterprises verticals with private cloud or edge data centers for example, could be a hospital, a big box retailer or Coehlo. Such as an equity. 
It's so it's really the 6 10,000 that creates a new switching category enabling staple services in that leaf node, right at the workload, unifying network and security automation policy management. Second, the C X 10,000 greatly improved security posture and eliminates the need for hair pinning east west traffic all the way back to the centralized plants. Lastly, a Shane highlighted there's a 70% Tco savings by eliminating that appliance brawl and ultimately collapsing the network security operations. >>I love the category creation vibe here. Love it. And obviously the technical and the cloud line is great. But how do the customers manage all this? Okay. You got a new category. I just put the box in, throw away some other one. I mean how does this all get down? How does the customers manage all this? >>Yeah. So we're looking to build on top of the ribbon fabric composer. It's another familiar sight for our customers which already provides for compute storage and network automation with a broad ecosystem integrations such as being where the sphere be center as with Nutanix prison And so aligned with the c. x. 10,000 at G. A. now the aruba fabric composer unifies security and policy orchestration and management with the ability to find firewall policies efficiently and provide that telemetry to collectors such a slump. >>So the customer environments right now involve a lot of multi vendor and new frameworks cloud native. How does this fit into the customer's existing environment? The ecosystem. How do they get that get going here? >>Yeah, great question. Um our customers can get going is we we built a flexible platform that can be deployed in either Greenfield or brownfield. Obviously it's a best of breed architecture for distributed services were building in conjunction with the ruble but if customers want to gradually integrate this into their existing environments and they're using other vendors, spines or course this can be inserted seamlessly as a leaf or an access access to your switch to deliver the exact same set of services within that architecture. So it plugs seamlessly in because it supports all the standard control playing protocols, VX, Lenny, VPN and traditional attitude three tier designs easily. Now for any enterprise solution deployment, it's critical that you build a holistic ecosystem around it. It's clear that this will get customer deployments and the ecosystem being diverse and rich is very, very important and as part of our integrations with the controller, we're building a broad suite of integrations across threat detection application dependency mapping, Semen sore develops infrastructure as code tools like ants, Poland to answer the entire form. Um, it's clear if you look at these categories of integrations, you know XDR or threat detection requires full telemetry from within the data center. It's been hard to accomplish to date because you typically need agents on, on your compute nodes to give you the visibility into what's going on or firewalls for east west flaws. Now our platform can natively provide full visibility in dolphins, East west in the data center and this can become the source of telemetry truth that these Ml XT or engines required to work. The other aspects of ecosystem are around application dependency mapping the single core challenge with deploying segmentation. East West is understanding the rules to put in place right first, is how do you insert the service uh service device in such a way that it won't add more complexity. 
We don't add any complexity because we're in line natively. How do we understand that allow you to build the rules are necessary to do segmentation. We integrate with tools like guard corps, we provide our flow logs a source of data and they can provide rural recommendations and policy recommendations for customers around. We're building integrations around steve and soar with tools like Splunk and elastic elastic search that will allow net hops and sec ops teams to visualize, train and manage the services delivered by the C X 10-K. And the other aspect of ecosystem from a security standpoint is clearly how do I get policy from these traditional appliances and enforce them on this next generation architecture that you've built that can enable state health services. So we're building integrations with tools like toughen analgesic third party sources of policy that we can ingest and enforcing the infrastructure allowing you to gradually migrate to this new architecture over time >>it's really a cloud native switch, you solve people's problems pain points but yet positioned for growth. I mean it sounds that's my takeaway. But I gotta ask you guys both what's the takeaway for the customers because it's not that simple for that. We have a complicated >>Environment. I think, I think it's really simple every 10 years or so. We see major evolutions in the data center in the switching environment. We do believe we've created a new category with the distributed services, distributed services, switch, delivering cloud scale distribute services where the local where the workloads were side greatly simplifying network security provisions and operations with the Yoruba fabric composer while improving security posture and the TCO. But that's not all folks. It's a journey. Right. >>Yeah, it's absolutely a journey. And this is the first step in in a long journey with a great partner like Aruba, there's other platforms, 100 or four gig hardware platforms we're looking at and then there's additional services that we can enable over time allowing customers to drive even more Tco value out of the platform and the architectural services like encryption for securing the cloud on ramp services like state for load balancing to deploy east west in the data center and you know, holistically that's that's the goal, deliver value for customers and we believe we have an architecture and a platform and this is the first step in a long journey. It's >>a great way. I just ask one final final question for both of you. As product leaders, you've got to be excited having a category creation product here in this market, this big wave. What's what's your thoughts? >>Yeah, exactly. Right. It doesn't happen that often. And so we're all in, it's it's exciting to be able to work with a great team like Sandu and chain here. And so we're really excited about this launch. >>Yeah, it's awesome. The team is great. It's a great partnership between and santo and Aruba and you know, we we look forward to delivering value for john customers. >>Thank you both for sharing under the hood and more details on the product. Thanks for coming on. >>Thank you. Okay, >>the next evolution of switching, I'm john furrier here with the power of An HP, Aruba and Pensando, changing the game the way customers scale up in the cloud and networking. Thanks for watching. Mhm.
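Shane notes that the hardest part of east-west segmentation is knowing which rules to write, and that the switch's flow logs can feed tools that recommend policy. Below is a very rough sketch of that idea, deriving candidate allow rules from observed flows; the flow records, names and threshold are invented, and real application-dependency-mapping tools do considerably more than count flows.

```python
from collections import Counter

# Observed flow records (invented): (src, dst, dst_port), one entry per observation.
observed_flows = [
    ("web-01", "orders-db", 5432),
    ("web-02", "orders-db", 5432),
    ("web-01", "orders-db", 5432),
    ("web-02", "orders-db", 5432),
    ("web-01", "backup-share", 445),   # seen once -- maybe not a real dependency
]


def recommend_rules(flows, min_hits=2):
    """Suggest allow rules only for flows seen at least `min_hits` times."""
    counts = Counter(flows)
    return sorted(flow for flow, n in counts.items() if n >= min_hits)


if __name__ == "__main__":
    for src, dst, port in recommend_rules(observed_flows):
        print(f"allow {src} -> {dst}:{port}")
```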

Published Date : Oct 15 2021


Mark Hinkle | KubeCon + CloudNativeCon NA 2021


 

(upbeat music) >> Greetings from Los Angeles, Lisa Martin here with Dave Nicholson. We are on day three of the caves wall-to-wall coverage of KubeCon CloudNativeCon North America 21. We're pleased to welcome Mark Hinkle to the program, the co-founder and CEO of TriggerMesh. Mark welcome. >> Thank you, It's nice to be here. >> Lisa: Love the name. Very interesting TriggerMesh. Talk to us about what TriggerMesh does and what, when you were founded and what some of the gaps were that you saw in the market. >> Yeah, so TriggerMesh actually the Genesis of the name is in, cloud event, driven architecture. You trigger workloads. So that's the trigger and trigger mesh, and then mesh, we mesh services together, so cloud, so that's why we're called TriggerMesh. So we're a cloud native open source integration platform. And the idea is that, the number of cloud services are proliferating. You still have stuff in your data center that you can't decommission and just wholesale lift and shift to the cloud. So we wanted to provide a platform to create workflows from the data center, to the cloud, from cloud to cloud and not, and use all the cloud native design principles, but not leave your past behind. So that's, what we do. We're, very, we were cloud, we are cloud operators and developers, and we wanted the experience to be very similar to the way that DevOps folks are doing infrastructure code and deploying that we want to make it easy to do integration as code. So we follow the same design patterns, use the same domain languages, some of those tools like Hashi corpse, Terraform, and that that's what we do and how we go about doing it. >> Lisa: And when were you guys founded? >> September, 2018. >> Oh so your young, your three years young. >> Three years it's feels like 21 >> I bet. >> And startup years it's a lot has happened, but yeah, we my co-founder and I were former early cloud folks. We were at cloud.com worked through the OpenStack years and the CloudStack, and we just saw the pattern of, abstraction coming about. So first you abstract the hardware, then you abstract the operating system. And now at with the Kubernetes container, you know, evolution, you're abstracting it up to the application layer and we want it to be able to provide tooling that lets you take full advantage of that. >> Dave: So being founded in 2018, what's your perception of that? The shift that happened during the pandemic in terms of the drive towards cloud adoption and the demands for services like you provide? >> Mark: Yeah, I think it's a mixed blessing. So we, people became more remote. They needed to enable digital transformation. Biggest thing, I think that that for us is, you know, you don't go to the bank anymore. And the banking industry is doing, you know, exponentially more remote, online transactions than in person. And it's very important. So we decided that financial services is where we were going to start with first because they have a lot of legacy architecture. They have a lot of need to move to the cloud to have better digital experiences. And we wanted to enable them to, you know, keep their mainframes online while they were still doing cutting edge, you know, mobile applications, that kind of thing. >> Lisa: And of course the legacy institutions like the BFA's the Wells Fargo, they're competing with the fintechs who are much more nimble, much more agile and able to sort of disrupt the financial services industry. Was that part of also your decision to start in financial services? 
>> It was a little bit of luck because we started with our network and it turned out the, you know, we saw, we started talking to our friends early on, cause we're a startup and said, this is what we're going to do. And where it really resonated was PNC bank was our, one of our first customers. You know, another financial regulatory company was another one, a couple of banks in Europe. And we, you know, as we started talking about what we were doing, that we just gravitated there because they had the, the biggest need, even though everybody has the need, their businesses are, you know, critically tied to digital transformation. >> So starting with financial services. >> It's, it's counter intuitive, isn't it? >> It was counterintuitive, but it lends credibility to any other industry vertical that you're going to approach. >> Yeah, yeah it does. It's a, it's a great, they're going to be our hardest customers and they have more at stake than a lot of like transactions are millions and millions of dollars per hour for these folks. So they don't want to play around, they, they have no tolerance for failure. So it's a good start, but it's sort of like taking up jogging and running a marathon in your first week. It's very very grilling in that sense, but it really has made us a lot better and gave us a lot of insight into the kinds of things we need to do from not just functionality, but security and that kind of thing. >> Where are you finding these customers with respect to adoption of Kubernetes? Are they leading? Are they knowing we've got to get there eventually from an infrastructure perspective? >> So the interesting thing is Kubernetes is a platform for us to deliver on, so we, we don't require you to be a Kubernetes expert we offer it as a SaaS, but what happens is that the Kubernetes folks are the ones that we end up really engaging with earlier on. And I think that we find that they're in this phase of they're containerizing their apps, that's the first step. And then they're putting them on Kubernetes and then their next step is a security and integration path. So once she, I think they call it and this is my buzzword of the show day two operations, right? So they, they get to day two and then they have a security and an integration concern before they go live. So they want to be able to make sure that they don't increase their attack face. And then they also want to make sure that this newly deployed containerized infrastructure is as well integrated as the previous, you know, virtualized or even, you know, on the server infrastructure that they had before. >> So TriggerMesh, doesn't solely work in the containerized world, you're, you're sort of you're bridging the divide. >> Mark: Yes. >> What percentage of the workloads that you're seeing are the result of modernization migration, as opposed to standing up net new application environments in Kubernetes? Do you have a sense for that? >> I think we live in a lot in the brown field. So, you know, folks that have an existing project that they're trying to bridge to it versus the Greenfield kind of, you know, the, the huge wins that you saw in the early cloud days of the Netflix and the Twitter's Dwayne scale. Now we're talking to the enterprises who have, you know, they have existing concerns. So I would say that it's, it's mostly people that are, you know, very few net new projects, unless it's a modernization and they're getting ready to decommission an old one, which is. >> Dave: So Brownfield financial services. 
You just said, you know, let's just, let's just go after that. >> You know, yeah. I mean, it's not like we had a dartboard and we put up buzzwords, but no, it was actually just — and you know, we're still finding our way. As far as, early on, we're open source folks. And we did not open source from day one, which is very weird when everybody knows your identity is, you know — I was the VP of marketing for the Linux Foundation and Node.js and all these open source projects, and my co-founder and I are Apache committers. And our project wasn't open yet, because we had to get to the point where it could be open and people could be productive in the use and contribution, and we had to staff up engineers. And now, I think this week, we open-sourced our entire platform. And I think that's going to open up, you know — that's where we started, because it was not necessarily the lowest hanging fruit, but the profitable — less profitable — lowest hanging fruit was financial services. Now we are letting our code out into the wild, and I think it'll be interesting to see what comes back. >> So you just announced this week the TriggerMesh integration platform as an open source project here at KubeCon. What's been some of the feedback? >> It's all been positive. I haven't heard anything negative. We did it — so we're very, very... the culture around open source is very tough. It's very critical if you don't do it right. So I think we did a good job. We used an OSI-approved open source license, the Apache Software V2 license. We hired someone who was well-respected in the DevRel world, from Chef, who understands the DevOps sort of culture and methodologies. We staffed up our engineers who are going to be helping the free and open source users, so they're successful, and we're betting that that will yield business results down the road. >> Lisa: And what are the two — I see on your website two primary use cases that you guys support. Can you dig into details on that? >> So the first one is sort of a workflow automation, and a really simple example of that is you have something that happens in one cloud. So for example, you take a picture on your phone and you upload it and it goes to Amazon, and there is a service that wants to identify what's in that picture. And once you put it online, in the, you know, internet parlance, you could kick off a workflow from TensorFlow, which is artificial intelligence, to identify the picture. And there isn't a good way for clouds to communicate from one to the other without writing custom glue, which is really what we're helping to get rid of — there's a lot of glue written to put together cloud native applications. So that's a workflow, you know, triggering a serverless function is the workflow. The other thing is actually breaking up data gravity. So I have a warehouse of data in my data center, and I want to start replicating some portion of that, as it changes, to a database as a service. We can, based on an event flow, which is passive — we're not having a conversation like you would with an API — where there's an event stream, that's like drinking from the fire hose and TriggerMesh is the nozzle. And we can direct that data to a DBaaS. We can direct that data to Snowflake.
We can direct that data to a cloud-based data lake on Microsoft Azure, or we can split it up, so some events could go to Splunk and all of the events can go to your data lake, or some of those things can be used to trigger workloads on other systems. And that event driven architecture is really the design pattern of the individual clouds. We're just making it multi-cloud and on-prem. >> Lisa: Do you have a favorite customer example that you think really articulates the value of that use case? >> Mark: Yeah, I think PNC is probably our — well, for the data flow one, I would say we have a regulator too: Oracle and one of their customers, it was their biggest SMB customer of last year. The Oracle cloud is very, very important, but it's not as tooled. It doesn't have the same level of tooling as a lot of the other ones. And to close that deal, their regulatory customer wanted to use Datadog. So they have hundreds and hundreds of metrics. And what TriggerMesh did was ingest the hundreds and hundreds of metrics and filter them and connect them to Datadog, so that they could use Datadog to measure, to monitor workloads on Oracle cloud. So that would be an example of the data flow. On the workflow, PNC Bank is probably our best example. And PNC Bank, they want to do — I talked about infrastructure as code, integration as code — they want to do policy as code. So they're very highly regulated. And what they used to do is they had policies that they applied against all their systems once a month, to determine how much they were in compliance. Well, theoretically if you do that once a month, it could be 30 days before you knew where you were out of compliance. What we did was, we provided them a way to take all of the changes within their systems and forward them to a serverless cluster. And they codified all of these policies into serverless functions, and TriggerMesh is triggering their policies as code. So upon change, they're getting almost real-time updates on whether or not they're in compliance. And that's a huge thing. And within their first division, we worked with, you know, tens of policies; throughout PNC, they have thousands of policies. And so that's really going to revolutionize what they're able to do as far as compliance. And that's a huge use case across the whole banking system. >> That's also a huge business outcome. >> Yes. >> So Mark, where can folks go to learn more about TriggerMesh, maybe even read more specifically about the announcement that you made this week? >> TriggerMesh.com is the best way to get an overview. The open source project is github.com/triggermesh/triggermesh. >> Awesome. Mark, thank you for joining Dave and me, talking to us about TriggerMesh, what you guys are doing, and the use cases that you're enabling for customers. We appreciate your time and we wish you best of luck as you continue to forge into financial services and other industries. >> Thanks, it was great to be here. >> All right. For Dave Nicholson, I'm Lisa Martin, coming to you live from Los Angeles at KubeCon and CloudNativeCon North America 21. Stick around, Dave and I will be right back with our next guest.
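To picture the policy-as-code flow Mark describes — a change event landing on a serverless function that evaluates a compliance rule in near real time instead of waiting for a monthly scan — here is a minimal Java sketch. The event shape, the single rule, and every class and field name are illustrative assumptions for the sake of the example, not TriggerMesh's or PNC's actual code.

```java
import java.time.Instant;
import java.util.Map;

// Minimal sketch of a policy-as-code check triggered by a change event.
public class PolicyCheck {

    // A simplified cloud-change event: which resource changed and its new settings.
    record ChangeEvent(String resourceId, String resourceType, Map<String, String> attributes) {}

    record Finding(String resourceId, boolean compliant, String detail, Instant checkedAt) {}

    // Example rule (an assumption): storage buckets must not allow public access.
    static Finding evaluate(ChangeEvent event) {
        if ("storage-bucket".equals(event.resourceType())) {
            boolean publicAccess = "true".equalsIgnoreCase(
                    event.attributes().getOrDefault("publicAccess", "false"));
            return new Finding(event.resourceId(), !publicAccess,
                    publicAccess ? "bucket allows public access" : "ok", Instant.now());
        }
        // Resources without a matching rule are treated as compliant in this sketch.
        return new Finding(event.resourceId(), true,
                "no rule for " + event.resourceType(), Instant.now());
    }

    public static void main(String[] args) {
        ChangeEvent event = new ChangeEvent(
                "bucket-42", "storage-bucket", Map.of("publicAccess", "true"));
        // In the pattern described in the interview, this finding would be pushed
        // to a dashboard or chat channel rather than printed.
        System.out.println(evaluate(event));
    }
}
```

The value of the pattern sits in the trigger: because the check runs on every change event rather than on a monthly audit, the compliance answer arrives within moments of the change instead of up to 30 days later.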

Published Date : Oct 15 2021


Venkat Krishnamachari, MontyCloud | AWS Startup Showcase: Innovations with CloudData and CloudOps


 

(upbeat music) >> Hello, and welcome to this Cube special presentation of Cube On CloudStartups with AWS Showcase. I'm John Furrier, your host of theCUBE. This session is the accelerate digital transformation and simplify AWS with autonomous cloud operations with Venkat Krishnamachari, who's the CEO and co-founder here with me on remote. Venkat, good to see you. >> Great to see you, John. >> So this is a session on, essentially DAY2 operations. Something that we've been covering on theCUBE as you know, for a long time. But the big trend is as DevOps becomes much more mainstream, intelligent applications or agile applications, have to connect with intelligent infrastructure and your company MontyCloud has the solution that literally turns IT pros into cloud powerhouses as you guys say, it's your tagline. This is a super important area. I want to get your thoughts and showcase what you guys are doing as one of the hot 10 startups. Thanks for coming on. So take a minute to explain real quick. What is MontyCloud all about? >> Great, thank you again for the opportunity. Hey everybody, I'm Venkat Krishnamachari. I represent mandate team at MontyCloud. We are an intelligent cloud management platform company. What we help customers do, is we help them simplify their cloud operations so they can go innovate and develop intelligent applications. Our platform is called DAY2, because everything after the day one of going to Cloud, needs a lot of expertise and we decided that's a fun area to go solve for our customers. We solve everything on starting DAY2 from simplifying provisioning, to management, to operations, to autonomous cloud operations. Our platform does this for our customers so they can innovate faster and they can close the cloud skills gap that is required to empower the developers. >> Venkat, I want to get your thoughts on DAY2 operations. There's been a trend that people talk about for a long time. As people move to the cloud and see the economic advantage of certainly with COVID-19, the market has said, "Hey, if you're on cloud native, you win." Andy Jassy at re:Invent last Keynote really laid out how companies can be proficient in becoming cloud-scale advantages. One of them was have expertise in cloud. So everyone is kind of doing that. You're starting to see enterprises all build the muscle for cloud operations. That's day one, they get started. Then that's kind of the challenges and the opportunities kick in when you have to continue in production. You have things that go on in the software. The underlying scaling infrastructure needs to be scaled out or all these kinds of things happen. This is what DAY2 is all about, keeping track of and maintaining high availability, uptime and keep the cost structure in line. This is what people discover. If they don't think properly about the architecture, they have huge problems. You guys solve this problem. Could you explain why this is important. >> Sure thing, John. So cloud operations, as you described, it's a continuous operations and continuous improvement in cloud environments. What efficient cloud operations does for customers is it accelerates innovation, reduces the risk, and more importantly, all the period of time that they are using their applications in the cloud, which is future, reduces the total cost of cloud operations. This is important because there is a huge gap in cloud skills. The surface area of cloud that customers need to manage is growing by the day. 
And most importantly, developers are increasingly, and rightfully so, getting a seat at the table in defining and accelerating their company's cloud journey. Which means now they're proposing microservices based applications, container based applications. Traditional applications are still in the mix. Now the surface area becomes a challenge for the IT operators to manage. That's why it's very important to start right. See, we ask this question to our customers. Having listened to our customers, hundreds of them, one thing is clear when we ask this question: ever wonder why and how large scale companies like AWS are able to deliver massively scalable services and operate massive data centers with fewer people? Because it's automation. And it's important to think about, as you scale, automate away things that must be automated, eliminate undifferentiated heavy lifting and help your developers move fast. All of this is vital in the day and age we live in, John. >> Yeah, I want to double down on that because I think this idea of integrating into operations is a critical key point for where success and failure kind of happen. We've seen with cloud, certainly IT departments and enterprises going okay, cost optimization, check. Get cloud native, get in the cloud, lift and shift, I thought it through, I put some stuff in the cloud, and then they go great, now I need resilience. I need resiliency, and I want to make sure things are now working okay, water flowing through the pipes, cloud's working. Then they say, "Well this is good, now I've got to integrate in with my on-premises or edge or other things that are happening." Then they try to integrate into their core operations. McKinsey calls this value driver three, integrating into core operations. We heard from them earlier in the program here at this event. This is key, it's not trivial to integrate cloud into your operations. This is what DAY2 and beyond is all about. Talk more about that. >> Yeah, that's a great point. And that's something that we've been working with customers hands-on to help learn and build for them, right? So the acceleration of cloud adoption during the pandemic, and ongoing adoption, is going to shift the software security, compliance and operational landscape dramatically. There's no escaping it. Cloud operations will no longer be an afterthought. DevOps will integrate with CloudOps. It'll provide a seamless feedback loop so that a bug can be found sooner, fixed sooner, and uptime can be guaranteed. I'll give an example. One of our customers is a university. During the pandemic, their core examination application went down and they couldn't fix it on time because of lack of resources. For them, it was vital to have adopted cloud operations sooner, but the runway they had was very little. Fortunately, we had the solution for them there. Within a week, they were able to take their entire on-prem application online — not just take the application online, but provide an autonomous cloud operations layer to their existing IT team with our platform, upskill them — and then about 14,000 students took their exams without any disruption. Now this customer, and customers such as themselves, have come to expect that level of integrated cloud operations in their application portfolio. It's important to address that with a platform that simplifies it. >> Venkat, real quick. Define, what is an autonomous CloudOps platform? What does that mean? >> So let's take an example here, right?
Customers who are trying to move an existing workload to cloud bring a traditional set of applications. Then customers who are born in the cloud build microservices or serverless based applications. Then there are containers. Now, all three present surface areas that customers, particularly the IT teams, have to manage. With the growing surface area, with the adoption of infrastructure as code, it becomes more nuanced to think about, how do we simplify? And in simplification comes automation. When a developer provisions a certain resource, previously, they used to file a ticket. The central IT team has to respond. Developers don't want that anymore. They want to innovate faster, but at the same time the central IT team wants to have some governance in play. The best way to get out of the way of developers is automating it. And providing autonomous cloud operations means developers can deploy newer workloads faster, but with a level of guaranteed guardrails on security, compliance and costs that sets them free. This is what we mean by autonomous cloud operations: closing the gap in skills, closing the gap in tooling, empowering your developers without thinking about the traditional model, but enabling them to do things at a more rapid pace. That's what we mean by autonomous cloud operations. >> You have a great market opportunity. I think this is obviously a no brainer. As people say in the industry, "cloud at scale is proven". Even post COVID, if people don't have a cloud growth strategy they're pretty much going to be toast. McKinsey calls this a trillion dollar opportunity at a minimum, not including potential new use cases, new pioneering applications coming. So pretty much, well, the verdict is there, this is cloud. I got to ask you about MontyCloud as you guys have a business. Take a quick minute to explain the business of MontyCloud, some vitals or how people buy the product, the business model. Take a quick minute to explain the MontyCloud business. >> Sure thing, John. See, our entire goal is to simplify cloud operations. Because what we learned is, what seems to be complex about cloud adoption is that everybody is expected to be an expert on everything in the new era, but most teams are not ready to run efficient cloud operations at scale, as the cloud footprint is growing. This means we have to redefine certain conversations here. We talk directly to infrastructure architects, cloud architects, application owners. And in general, we talk to people who are leading the IT digital transformation for their companies. What we are enabling our customers to do is, they must demand that the traditional operation model change to enable newer application patterns. For this, we are expecting customers want to standardize things, right? IT leaders are beginning to say, "All right, I've got to standardize my provisioning, standardize my operations, reduce the heavy lifting that comes with infrastructure as code, and enable the business team and the application team to work closely together." The best way to do that is to go solve this problem with automation. So our platform is able to go help such customers, particularly leaders who demand digital transformation. With clear KPIs, our platform can help them ask the why question easily. And then our platform can also go perform the how part of the automation. That's what we solve. Those are the kinds of customers we really have been working with, John. >> So if I'm a customer, how do I know when I need to call MontyCloud?
Is it because my cloud footprint is growing which is a natural sign of growth, or is it because I have more events happening, more things to manage? When do I know I have the need to call you guys? What's the signal? What's the sign? >> So we call it the day one mindset, and also the DAY2 mindset. Customers deciding to go to cloud on day one, should think about DAY2. Because without thinking about DAY2, it can become very expensive, right? When a customer's thinking about digital transformation, could be a lift and shift or it could be starting a new application pattern in the cloud, we can certainly help starting right that day because there are a couple of things they have to do, right? They have to standardize the cloud operations which means setting up the cloud accounts, setting up guardrails, enabling teams to go provision with self service. You want to start the right way. So we are happy to help on the day one journey itself and we can automate DAY2 along with it. So standardizing infrastructure operations, standardizing provisioning, security, visibility, compliance, cost. If any of this is an important milestone that customers have to achieve in their cloud journey, we can help. >> By the way, I would just point out that we were just talking on another session around lift and shift is not a no-brainer either if not thought through and remediated correctly that cost could go through the roof. I mean, we've seen evidence of lift and shift fails just because they didn't think it through. Just to your point. I mean, that's not a no brainer. Quickly explain why lift and shift is not as easy as it looks. >> Sure thing. So lift and shift is great to get started, but why sometimes it fails is that the connotations about wanting to keep your Opex down while giving up CapEx is at odds with each other, right? Cloud is great for reducing your Capex. But ongoing operations, of the DAY2 operations, can add a lot of burden to the operational expenses. What customers find out is after moving to the cloud, the cost overruns are happening because of resources that are not provisioned correctly, resources that should not be running. Wild Wild West kind of scenarios, where everybody has access to everything and they over provision. All of this together end up impacting customers' ability to go control the Opex. Then digital transformation projects are looked at from three different angles at least, right? Cost is definitely one, security is another, and then the ongoing operational tax with respect to monitoring, governance, remediation. All three when it simultaneously hits our customers, they look at lift and shift and saying, "Hey, this was cheaper on prem." But actually in the long run, this will be not just cheaper on the cloud, it can also be more efficient if they do it right. We can talk about some examples on how we help some customers with that helpful, John. >> Well, I want to get into the cloud operations, the whole dashboard in cloud operation administration. Is there anything that you could share because people are wanting more and more analytics. I mean, they're buying everything in sight. I mean, cyber security, you name it. There's more and more dashboards. No one wants another dashboard. So this is something that you guys have a strong opinion on how to think this through. Because again, at the end of the day, if you're instrumenting your network properly and your applications, your intelligence, things are changing, where's the data? Take us through your thinking around that. 
>> Sure thing. You are spot on. Nobody wants another dashboard that is just spewing data at them because data, without context is irrelevant in our mind, right? We want to be able to provide context, we want to be able to provide data within the context. And the dashboard to us means a customer that's looking at it, an IT leader looking at it should be able to ask the why question without working too hard at it, right? Let's bring up our dashboard. I would love to show and tell, although it's a dashboard, it is a tool that can enable IT leaders do things differently. >> John: Right, here it is. This is it right here. Okay, so this is the dashboard. Take me through it, what does it mean? >> Venkat: Yeah, let's (indistinct) right? The chart in the middle is the most important piece there. What we help our leaders, IT leaders do is, all the fullness of time of cloud adoption, we know the cloud's footprint is going to grow. The gray chart in the back, the stock chart represents the cloud footprint. As the cloud footprint continues to grow, we would like our leaders to demand that their security issues go down, their compliance issues go down and their costs to become more and more optimum. When leaders demand this, they can make things happen and our platform can help reduce all three and leaders can have this kind of dashboard to ask the why question. For example, they can compare one department with another department, ask that why question. They can compare an application that is similar in one department in another department and ask the why question, why is it more expensive? Why is it having more compliance issues? This is the kind of why questions our dashboard helps our customers perform and ask those questions, and they don't have to lift a finger, right? This entire dashboard comes to life within few minutes of them connecting their cloud accounts, where we provide visibility into operational issues, trend lines of data on how much consumption happens. And over a couple of months, they can see for themselves, make overall operation cost going down. Is my IT infrastructure now in cloud more resilient? And doesn't take more people to do it or am I able to turn on MontyClouds DAY2 bonds to go start reducing that burden or the period of time. This is what we mean by putting the power of autonomous CloudOps in our hands for customers. >> And this is what you mean by the IT powerhouse for the cloud. Is this on Amazon? So if I want to consume the product, what do I need to do to engage with you guys? What does it mean to me? Am I buying a service? Is it native? Is there agents involved? Take me through, what do I need to do? >> It's a great question. We are born in the cloud startup, which means we are super thankful for amazing technologies like Amazon infrastructure as core and the venting platform that's out there. So our platform is fully hosted, managed SaaS platform. A customer does not need to do anything but log onto montycloud.com, click a bunch of buttons, and connect their database account. They get started in under five minutes, self-service. And as they go through the platform, the guided experience where they can get to that dashboard I showed you in just a few clicks. They can get visibility, security posture assessment, compliance posture assessment, all in those few clicks. And when they decide to start using the platform more to automate and leverage the bots, they can always buy into additional services in the platform. 
So it's a easy to use get started in 10 minutes tops, if you will, that kind of platform >> Okay, great stuff. I want you to take me through the intelligent application flywheel that's going on here. So I can imagine that as the flywheel of success happens. Okay, got some intelligent apps, I see the dashboard, I'm getting some more visibility on the value creation, unlocking more value, new use case, all the things that happen in cloud, all good. And then I start growing, but I got builders trying to build more applications, more demand for more applications, more pressure on the infrastructure. The next question's, how do you guys simplify the cloud operation equation? Because I got to add more VPCs, I got to do more infrastructure, is it more EC Two? It can get complicated. How do you guys solve that problem? Because if the cloud footprint starts to grow because of more intelligent applications, how do you guys make it easier and simpler to scale up the intelligent infrastructure? >> Oh, that's a great question again, John. I'm going to go into a little bit of a detailed slide here. But before I do that, let's talk about two customers that we helped, right? This slide on the left, talks to those, both the customers. So what we have learned working with customers is, they have to build cloud accounts, manage cloud regions, user onboarding. Then they have to build networking infrastructure. Then they have to enable application infrastructure on top of the networking infrastructure. Application infrastructure could mean they want high-performance computing workloads or elastic services, such as queuing services, storage, or traditional VMS databases. That's a lot to build in the application infrastructure with infrastructure scope. On top of that, our customers have to deal with visibility, security, compliance costs. You get it, right? The path to intelligent applications is not easy because cloud is powerful, but it's broad, and the talent required is deep. We are able to say, how can we help our customers automate everything below the intelligent application layer. If we can do that, which we do, we can now propel our developers to go build intelligent applications without having the of also managing the underlying infrastructure. And we can help the IT operations team become cloud powerhouses because they get out of the way and enabled. Give you two examples here, right? One of our customers is a fortune 200 large ISP. They have about 10,000 servers in a particular department. And previously, when the servers were on premises, they had about a four member team managing compliance for it. When they lifted and shifted these servers into the cloud, the same model they wanted to... There are leaders that asked "Why should we continue with the same model?" They wanted MontyCloud. Now there is a DAY2 compliance board that's running, managing the 10,000 servers automatically watching on for compliance drifts, notifying them in a Slack channel, gets approval, remediates and fixes it. They were able to take those four folks and put them on the intelligent application side, I suppose to continuous infrastructure management site. Another example, a fortune 200 global networking company. It's an interesting situation, John. So on cyber Monday, they wanted to go big of obviously the cyber Monday was very important for them. The Thursday before cyber Monday, their on-premises data center and application went down and their teams wanted to move the application to cloud. 
And the partner that we work with, that brought this challenge to us saying hey, this fortune customer wants to go to cloud and we have this weekend. Well, we were able to go guide the partner and with our platform they were able to not only take their application from on-prem to cloud, they set up the cloud infrastructure, the networking, the application layer, the monitoring layer, the operations layer, all of that within a day. And on Monday that application delivered three X sales for this customer, without that partner or the customer being a cloud expert. That's what we mean by putting that kind of power in the hands of customers. >> Yeah, and I want to go back to that slide 'cause I think there's a second section I want to look at because what you just referred to is, I think this builds into the next comment on the right-hand side, this DAY2 kind of console vision here. The idea of getting in the weeds and getting into the troubleshooting of say, that cyber Monday example is exactly the non agility scenario, right? Because, if anyone's ever worked in tech knows when you have to get to root cause on something, it can take a while, right? So you need to have the system architecture built out. So here, classic cloud architecture on the left moves to a simple kind of console model. That's kind of what you guys are offering. Am I getting that right, Venkat? Is that kind of how this works? >> Yeah, that's kind of how it works, but the path to that maybe, a quick explanation though. We look at what's on the right--- >> Put that slide back up, let's get that slide back. Okay, there it is. >> Venkat: So what's on the right side here is, every layer on the left requires specialized talent and specialized tooling. That's all customers are currently experienced in the cloud. They either have to buy into a expensive monitoring tool or buy into an expensive security posture management tool. They have to hire, you know... It's hard to find cloud talent, right? And then they have to use infrastructure as code solutions. Sometimes that is, that can get more complex to maintain. What we have in MontyCloud is that, every layer there, they can provision by clicking away. For example, when they provision their cloud accounts setting up AWS best practices, budget guardrails, security, logging and monitoring, they can click away and do it. Setting up network infrastructure like VPC is setting up AWS transit gateway, VPNs, there's templates they can click and do it. The application infrastructure, which is a growing set of application infrastructure. Imagine this John, if a developer can come in and request the IT team they would like to set up an RDS database, right? The IT team can now with DAY2, can provide the developer options of, do you want it in dev stage prod? And do you want snapshots, backup, high availability? These are all check boxes and the developer can pick and choose and they can provision what they want without additional help from the IT team. And the IT team does not have to automate any one of those because it's pre automated in our platform. >> Yeah, this is the promise of infrastructure as code. You don't got to get in to the architecture and start throwing switches and all kinds of weird stuff can happen. Someone doesn't turn off, they don't enable auto-scale and they tested for this they forgot to revert back. I mean, there's a zillion things that could go wrong, human error, as well as automation. So once you set it up, then you provide a consumable developer friendly approach. 
That seems to be what's happening. Okay, cool. All right, well Venkat, this is fantastic. Final minutes we have left. I want to get your thoughts on the momentum and the vision. Talk about the momentum that you guys have now in the marketplace and what's the vision for the next five years. >> Great, it's a great question. From a momentum perspective John, we take an approach of, let's work with customers and understand that we can solve some problems for them. We've been working backups with customers. We have customers that are startups, that are born in the cloud, we have customers that are enterprise customers who are having a large footprint on-prem. Then we have everybody in between like university customers who are transitioning off. So what we did is from a momentum perspective, we worried more about, do we understand the talent gap and the tooling gap that exists across the board of all customers? Because every customer, once they go to cloud, they look to achieving the same level of efficiency and simplicity like modern cloud companies. A traditional company that moves to cloud wants to act and behave like the one in the cloud customer. For us it was very important to understand a variety of customers, a variety of use cases, and then automated away. So momentum is that we are able to go help a customer that is a Greenfield customer to go to cloud easily. And we're also able to go help brownfield customers, ensure they can reduce the total cost of cloud operations on an ongoing basis. So we've been seeing customers of all sizes, even helping customers of all sizes move fast. And there's a bunch of case studies out there in our website. We are a startup, so we've been able to help those customers and earn their trust by delivering results for them. So the momentum is that, we are able to go scale up now, and scale up fast for our customers without us being in the way, technically. Or customers can go to our platform help themselves and accelerate the platform. That's the momentum we have. From a future perspective, you asked, where things are headed, right? There are a couple of things. First things first, it's important to not just predict the future, we got to create it, right? About two years back when we founded MontyCloud, the question my team asked me, my CTO asked me is, what really matters in cloud ranking, right? So we said, all right, this is provisioning automation management. Yeah, they all matter. But what seemed to really matter is there are three things that matter. That's how we came to... One is events. The cloud itself is an eventing machine, right? More than ever, the cloud infrastructure emits events at every turn, every resource, every activity is expressed as an event. So we made an early bet on building an event driven platform from the ground up. We are the only platform that is even driven. Every other platform is seen to try and solve problems which is awesome to have, but they take an approach of an API based model or an inference into log based model. So the future, we believe, belongs to eventing model because it's lightweight on the customer's infrastructure, it goes easy on the cloud providers. More importantly, it gets the customer as close as possible to when the event happens, right? That's very important, to be able to be even event-driven. If you noticed Cloud Native Foundation came up and announced recently cloud events is the right way to deal with modern SaaS platforms. We've been in cloud events from day one for us, right? 
So the future is in the eventing model. >> And that's where the data angle, I think, connects here for this event, and why you guys are a hot startup, is observability, all these things. It's all about an event-driven infrastructure. It's all events. It's monitoring, it's management, it's data. At the end of the day, the data is the instrumentation, is what it is. Developers are coding. Media is data. Everything's data. Everything has to do with data. You guys have a unique approach. Venkat Krishnamachari, thank you for coming on. Appreciate it, and thanks for sharing your story here at the AWS Showcase, the first inaugural Cube On CloudStartups, part of the 10 hot startups categories. Thanks for sharing. >> Thanks for the opportunity. And we hope to help a lot more customers simplify their cloud operations and innovate with some intelligent applications that are going to change the world. >> Check out Venkat and his company on Twitter, on Facebook, they're on every channel, all the channels are open, of course. theCUBE, we're bringing you all the hot startups, extracting the signal from the noise. I'm John Furrier. Thanks for watching. (upbeat music)
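The eventing model Venkat describes — every resource change expressed as an event and consumed by small handlers, rather than polled through APIs or inferred from logs — can be pictured with a short Java sketch. The envelope below borrows CloudEvents-style attributes (id, source, type), but the event types, field names, and handlers are illustrative assumptions, not MontyCloud's implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of an event-driven operations loop: route each incoming cloud event
// to a small handler instead of polling APIs or scraping logs.
public class EventRouter {

    // CloudEvents-style envelope; the payload is kept as a simple map here.
    record CloudEvent(String id, String source, String type, Map<String, String> data) {}

    private final Map<String, Consumer<CloudEvent>> handlers = Map.of(
            "instance.started", e ->
                    System.out.println("track cost for " + e.data().get("instanceId")),
            "policy.violation", e ->
                    System.out.println("open remediation task for " + e.data().get("resource")),
            "deployment.finished", e ->
                    System.out.println("run post-deploy checks for " + e.data().get("service")));

    void dispatch(CloudEvent event) {
        handlers.getOrDefault(event.type(),
                e -> System.out.println("no handler for " + e.type())).accept(event);
    }

    public static void main(String[] args) {
        EventRouter router = new EventRouter();
        List<CloudEvent> stream = List.of(
                new CloudEvent("1", "/cloud/compute", "instance.started", Map.of("instanceId", "i-123")),
                new CloudEvent("2", "/cloud/config", "policy.violation", Map.of("resource", "bucket-42")));
        stream.forEach(router::dispatch);
    }
}
```

The design argument being made is that a handler only wakes up when something actually changes, which keeps the footprint light on the customer's accounts and on the cloud provider's APIs compared with continuous polling.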

Published Date : Mar 24 2021


Miguel Perez Colino & Rich Sharples, Red Hat | KubeCon + CloudNativeCon NA 2020


 

>>From around the globe. It's the cube with coverage of coop con and cloud native con North America, 2020 virtual brought to you by red hat, the cloud native computing foundation and ecosystem partners. >>Hey, welcome back, everybody Jeffrey here with the cube coming to you from our Palo Alto studios today with our ongoing coverage of coupon cloud native con North America, 2020. It's not really North America, it's virtual like everything else, but you know that the European show earlier in the summer, and this is the, this is the late fall show. So we're excited to welcome in our very next two guests. Uh, first joining us from Madrid. Spain is Miguel Perez, Kaleena. He is a principal product manager from red hat, Miguel. Great to see you. >>Good to see you happy to be in the cube. >>Yes. Great. Well welcome. And joining us from North Carolina is rich Sharples. He is a senior director, product management of red hat. Rich. Great to see you. >>Yeah, likewise, thanks for inviting me again. >>So we're talking about Java today and before we kind of jump into it, you know, in preparing for this rich, I saw an interview that you did, I think earlier about halfway through the year, uh, celebrating the 25th anniversary of Java and talking about the 25th anniversary Java. And before we kind of get into the future, I think it's worthwhile to take a look back at, you know, kind of where Java came from and how it's lasted for 25 years of such an important enterprise, you know, kind of application framework, because we always hear jokes about people looking for COBOL programmers or, you know, all these old language programmers, because they have some old system that's that needs a little assist. What's special about Java. Why are we 25 years into it? And you guys are still excited about Java yesterday, today and in the future. >>Yeah. And I should add that, um, in terms of languages, uh, twenty-five is actually still pretty young. Java's, uh, kind of middle aged, I guess. Um, you know, things like CC plus bus rrr you're 45, 50 years old Python, I think is about the same as Java in terms of years. So, you know, the languages do tend to move at a, um, at a, they do tend to stick around, uh, uh, a bit, well what's made Java really, really important for enterprises building business critical applications is it started off with a very large ecosystem of big vendors supporting it. Um, it was open in a sense from the very start and it's remained open as in open source and an open community as well. So that's really, really helped, um, you know, keep the language innovating and moving along and attracting new developers. And, um, it's, it's still a fairly modern language in terms of some of the new features it's advancing with the industry taking on new kinds of workloads and new kinds of per program paradigms as well. So, you know, it's, it's evolved very well and has a huge base out somewhere between 11 and 13 million developers still use it as a primary development language in professional settings. Yeah. >>What struck me about what you said though in that interview was kind of the evolution and how Java has been able to continue to adapt based on kind of what the new frameworks are. 
So whether it was early days in a machine, like you talked about being in a set top box, or, you know, kind of really lightweight kind of almost IOT applications then to be calming, you know, this really a great application to deliver enterprise applications via a web browser and that, you know, and it continues to morph and change and adapt over time. I thought that was pretty interesting given the vast change in the way applications are delivered today versus what they were 25 years ago. >>Yeah, absolutely. It's, you know, the very early days were around embedded devices, uh, intelligent toasters and, you know, whatever. Um, and, and then where it really, really took off was, but the building supporting big backend systems, big transactional workloads, whether you're a bank or an airline you're running both the scale, but also running really, really complex transactional systems that were business critical. And that's that's for the last, you know, 15 years has been, um, where it's, it's really shown building backend, um, systems. Now, as we kind of move forward, you know, the idea of, uh, um, like server side, uh, server side application versus a front end is kind of changed. You know, now we're talking microservices, we're talking about running in containers. So really the focus of where we run Java and the kinds of applications we're building with Java as this has radically changed. And as such the language has to change as well, which is, you know, one, I'm pretty excited to talk about caucus today. >>So let's, let's jump into it and talk about corcus cause the other big trend, you know, along with, with, with obviously, uh, uh, browsers being great enterprise applications, delivery vehicles is this thing called containers, right? And, and specifically more recently Kubernetes is the one that's grabbing all the attention and grabbing all the, all the momentum. Um, so I wonder Miguel, if you could talk about, you know, kind of as, as the popularity of containerized applications and containerized to everything right, containerized storage, or you even talked about containerizing networking, troll, how that's impacted, uh, what you guys are doing and the impact of Java, uh, and making it work with kind of a containerized Kubernetes world. >>Well, what we found is that the paradigm of development has teeth. So we have this top up, uh, uh, paradigm that the people are following to be able to do the best with containers, to the best with Kubernetes on the, this has worked quite fine in Greenfield on for, for many cases has been a way to develop applications faster, to be able to obtain variably salts. And the thing is that for many, uh, users, for many companies that we work with, uh, they also want to bring some of their stuff that the applications that are currently are running into this world. And, uh, I mean, we, we walk especially a lot in helping these customers be able to adopt those obligations, but we try to do it, uh, as we say, the N pixie dust, you know, we really dig into the code, we'll review the code with modernize. The application will help their customer with that application. We provide the tools are open for anyone to be able to review it and to be able to take it. So we are moving away from Greenfield into brownfield and not a way we are evolving together to say we more precise, you know, all these Greenfield applications keep coming, but also the current applications want to be more organized. >>Right. Right. So it's pretty interesting. 
Cause that's always the big conversation. There's, it's, it's all fine. And good if you're just building something new, uh, to use the latest tools. But as you mentioned, there's a whole lot of conversation about application modernization and this is really an opportunity to apply some of these techniques to do that. So quirky. So I wonder if you just give, let's just jump into it. What is it at the highest level? Uh, what's it all about? What should people know? >>Yeah. So, so Corker says I'm reading an attempt by red hat to ensure Java is a first-class citizen in containerized environments, but building reactive applications, uh, cloud native applications, uh, functions, Java is an incredible piece of engineering. It does some incredible things. It sudden can self optimize. As it's running in line code, it can do some really amazing things the longer it runs, but in a containerized environment, you're likely not going to be running huge amounts of code. You'd likely be running microservices and your, your services are likely to have a kind of limited life cycle as we you're able to deploy more frequently or in a function environment where, you know, you've been bought once and then you're done, um, you know, during all those long, um, kind of, um, those optimizations over time, don't really, um, make a lot of sense. So what we can do is remove a lot of the, um, the weights of Java, a lot of the complexity of Java, and we can optimize for an environment where your code is maybe just running for a few microseconds as in the case of the function or something running in native, cause you scale up and scale down. >>So we move a lot of the op side. We move a lot of the, um, the, the efforts within the application, uh, to compile time, we pre compile all of your, of your config and initialization, so that doesn't have to happen in your, um, your, your, your runtime or your production environment. Um, and then we can optimize the code week. We can, we can remove that code. We can remove, you know, whole, uh, trees and class libraries and really slimmed down the memory footprint and radically, um, slim, the Maddie memory footprint, um, increase the startup time as well. So, you know, you have less downtime in your applications. Um, and we've recently done a S a study with ADC that shows some pretty stunning results compared to, you know, some existing frameworks. And, you know, we get, um, you know, sort of like, you know, overall cost savings of, you know, 60, 64%. >>Um, we can get eight times better density. You're running more in a, in a, in a cluster and, um, you know, reduction in memory up to 90% as well. So it's, these are significant changes now. That's all good, you know, saving, saving 60, 60% on your operational costs is significant. But what we find is that most organizations, they come for the performance and the optimizations, but what actually stay for is the speed of development. So I think, I think caucus real silver bullets is, um, the developer productivity, you know, for organizations, the cost of development is still one of the major costs. I mean, the operational costs, the hosting costs a significant, but development costs, time to market will always be top of mind for organizations that are trying to move faster than the competition. And I think that's really where, um, um, caucus special and coupled in, uh, in, uh, OpenShift or Coobernetti's environment really, really does shine. Yeah, >>It's pretty interesting. 
So people can go to corcus.io and see a lot of the statistics that you just referenced in terms of memory usage and speed and, and whole bunch of stuff. But what struck me when I went to the site was that was this big, uh, uh, two words that jumped out developer joy. And it's funny that you talked on that just now about really, um, the benefits that come to the developer directly to make them happier. I mean, really calling out their joy. So they're more productive and ultimately that's what you said. That's where the great value is in terms of speed of deployment, happy developers, and productive developers. You know, Miguel, you get your, you get down into the weeds of this stuff. Again, the presentations on your LinkedIn, everyone needs to go look and you talk a lot about at migration and you lot talk a lot about app modernization. So without going through all 120 some odd slides that I think you have, which is good, phenomenal information, what are some of the top things that people need to think about and consider both for app modernization as well as at migration? >>Um, that's, that's, that's an interesting question. Uh, the thing is that, um, the tolling is important on the current code is, and the thing is that normally when, when we started migration project, we tried to find architects in the applications to be able to find patterns. You know, you find parents is much easier because, uh, once you solve one part on the same part on can be solved in a very similar way. So this is one of the parts of that. We focus a lot, but before getting to that point, it's very important how you stop, you know, so the assessment phase is, is very important to be able to review well, what is the status of the applications, the context of the applications. And with that, I mean, things like, for example, the requirements that they have, there's the maintenance that they take in their resiliency and so on. >>So you have to prepare very well, the project by starting with a good assessment, you have to check which applications makes more, make more sense to start with and see which, how to group them together by similarities. And then you can start with the project that saying, okay, let's go for these set of applications that make more sense that are more likely to be containerized because of the way we are developing them because of the dependencies that they have because of the resiliency that is already embedded into them and so on. So that, that the methodology is important. And we normally, for example, when we, when we help partners do a application migration, one of the things that we stress is that this is the methodology that we follow and in the website for my vision, totally for application, you can find also, um, methodology, uh, part that, uh, could help, uh, people understand, okay, these, these are the stages that we normally follow to be successful with migrating applications. >>Yeah. Let go. You don't, we're not friends. We don't hang out a lot, but if we did, you would know I never ever recommend PowerPoint for anything. So, so the fact that I'm calling out your PowerPoint actually means something. Cause I think it's the worst application ever built, but you got some tremendous, tremendous information in there and people do need to go in and look, and again, it's all from your LinkedIn work, but I wanted to shift gears a little bit, right? We're at CubeCon cloud native con. Um, obviously it's virtual is 2020. That's the way the world today. 
But I just curious to get your guys' take on, on what does this, uh, event mean for you obviously really active, open source community, you know, red hat has a long open-source history. Um, what does CubeCon cloud native con mean for you guys? What do you hope to get out of it? What should people hope to, uh, to learn from red hat? >>Yeah, we, um, yeah, we're, we're buying your DNA. We're very, very collaborative. Uh, we, we love to learn from our customers, users of the technologies, um, in the communities that we support. Um, speaking as a, you know, we're both product guys, there's nothing better than getting with, um, people that actually use the products, um, in anger, in real life, whether they're products are upstream technologies, learning, learning, what they're doing, understanding where, um, some of the gaps are there's. Um, yeah, we just couldn't do our jobs without engaging with developers, users in these kind of conferences. Yeah. A lot of the, um, love interest we've seen with coworkers is, is in the community, you know, um, like I'd been part of many, many successful open source projects, um, um, over red hat. And it's great when your customers, you know, like, uh, Vodafone, Greece or Carrefour in Spain are openly publicly talking about how good your technology is, what they're using it for. And that's really good. So it's just nothing, there's no alternative that, you know, whether it be virtual virtually or physically sitting down with, uh, with users of your technology, >>How about you, Miguel? What are you hoping to get out of, uh, out of the show this year? >>Um, we are working a lot with, on Kubernetes in red hat, on, uh, as part of the community, of course. And, um, I mean, there are so many new stuff that is coming around, Kubernetes that, uh, it's mostly about it, about all the capabilities that were arming, especially for example, several lists, you know, several lessons, there is an important topic with crackers, because for example, as you make the application stopped so much faster and react so much faster, you could have known of them running and just waiting for an event to happen, which saves a lot of resources and makes us super efficient. So this is one of the topics, for example, that we wanted to cover in this edition, you know, how we are implementing serverless with Kubernetes and OpenShift and many other things like pipelines. Like, I don't know, we just had quite a visit in the, uh, uh, video, uh, life of what is coming up. I see for the six. And I recommend people to take a look at it, to get everything that's new because there's a lot. Yeah, >>Yeah. You guys are technical people. You've been doing this for a long time. Why is Kubernetes so special? W Y Y you know, there's been containers in the past, right. And we've seen other kind of branded open source projects that got a lot of momentum, but Kubernetes just seems to be blowing everybody out of the out of its path. Why, what should people know about Kubernetes that aren't necessarily developers? >>Yeah, there's really nothing interesting about a single container or a single microservice, right? That's not, that's not the kind of environment that, um, real organizations live in. They live in organizations where they're going to have hundreds of services, um, who just containers and you need a technology to orchestrate and manage that in that complex environment. And Kubernete's has just quickly become the, the district per standard. 
Yeah, folks at Red Hat jumped on it very, very early. I mean, one of the advantages we have at Red Hat is that we're embedded with developers and open source communities, and that gives us a pretty good crystal ball. So we're often quick to jump on the emerging technologies that are coming out of open source, and that's exactly what happened with Kubernetes. It was clear it was going to be sophisticated enough for our most sophisticated customers running at scale, but also great for development environments as well. So it was really a good fit for where we were headed, and it just very, very quickly became the de facto standard. And you just gotta go with the de facto standard, right? >>Right. Well, another thing that you mentioned, Rich, in that other interview that I was watching, that came up in the conversation, is managing open source projects. And at some point, you know, they kind of start, and then — I think with this one, if I go to quarkus.io and look at the bottom of the page, it says sponsored by Red Hat — but you talked about, at some point, do you move it over to a foundation? What are the things that drive that process, that decision? I would imagine that part of it has to do with popularity and scale. Is that something potentially down the road? You've said you've been part of lots of open source projects — when does it move from kind of a single point of origin to more of a foundational support? >>Yeah. I mean, in fact, a foundation isn't always necessary. If you have a very open project with clear rules for collaboration and the encouragement of others to collaborate and be able to move the project forward, then a foundation isn't necessarily needed. What we've seen — I've been part of the Node.js world, where the community rebelled to keep Node.js moving forward. We had to go from what we call a benevolent dictator for life — somebody who's well-intentioned but, you know, ultimately controls the technology — to a foundation, which is much more inclusive, and with greater collaboration you can move even quicker. So I think what's required is open governance for open source projects, and where that doesn't happen, maybe a foundation is the right way forward. Right now with Quarkus, the non-Red Hat developers seem pretty happy with the way they can get engaged and contribute. But if we get to a point where the community is demanding a foundation, we'll absolutely consider it — if that's what's best for the project, that's what we'll do. >>So we're coming to the end of our time. I want to give you each the last word, really with two questions. One, again, just kind of a summary of KubeCon + CloudNativeCon: what should people be looking for, where can they find you — and I don't know if you guys are sponsoring any sessions; I'm sure there's a lot of great content — if you want to highlight one or two things. And then most importantly, as we turn the calendar and come to the end of 2020 — thankfully — as you look ahead to 2021, what are some of your priorities as we get ready to turn the calendar? And Miguel, let's start with you.
>>So, I mean, we have been working very hard this year on the Migration Toolkit for Applications, to help every user that is using Java bring it to containers, whether it is Java EE or it is Quarkus — and we are putting a lot of effort into Quarkus. And now we are bringing in new rules, and by December we expect to have the new version of the Migration Toolkit for Applications that is going to include all the rules to help developers bring their Java code to Quarkus. This is the main goal for us right now, and we are moving forward into next year to include more capabilities in that project. Everything is open source: you can go to the Konveyor project and take a look at the capabilities for the assessment phase, so that any partner, any of our consultants working on a migration, or anyone that would like to try it themselves and do these migrations to the cloud native world, will feel comfortable with this tool. So that is the main goal for my team. >>All right. And how about you, Rich? >>Yeah, I think we're going to see this kind of solidification of the next wave of microservices — if you hate that term, I'm sorry — the next generation of microservices. It's going to be, as Miguel mentioned, based around Knative, advancing serverless functions. I think that's really the ideal architecture for building microservices on Kubernetes, and Quarkus plays really, really well there. I think there's a kind of backlog of projects within organizations, and hopefully next year everything really does start to crank up. And I think a lot of the migration that Miguel has talked about is going to rise in terms of importance. So app modernization — taking those existing applications, maybe taking aspects of those and doing some kind of decomposition into microservices using Quarkus and Knative — I think we'll see a lot of that. So I think we'll see a real drive around both the kind of Greenfield applications, this next generation of microservices, as well as pulling those existing applications forward into these new environments. So it's going to be excellent. >>Awesome. Well, thank you both for taking a few minutes with us and sharing the story of Quarkus, and have a great show. Great to see you, and a really good conversation. All right, he's Miguel, he's Rich, I'm Jeff. You're watching theCUBE's ongoing coverage of KubeCon + CloudNativeCon 2020 North America, virtual. Thanks for watching. We'll see you next time.
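For readers who want to see what the serverless, scale-to-zero pattern Miguel and Rich describe looks like in practice, here is a minimal, hedged sketch of a Knative Service wrapping a Quarkus native image. The service name, namespace, image reference and resource figures are placeholders introduced purely for illustration; they are not taken from the interview.

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: quarkus-greeter            # hypothetical service name
      namespace: demo                  # hypothetical namespace
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/min-scale: "0"   # no replicas run until a request or event arrives
            autoscaling.knative.dev/max-scale: "10"  # cap scale-out during bursts
        spec:
          containers:
            - image: registry.example.com/demo/quarkus-greeter:1.0   # Quarkus native executable image
              ports:
                - containerPort: 8080
              resources:
                requests:
                  memory: 32Mi         # small footprint typical of native images

Because a Quarkus native image starts in a fraction of a second, Knative can keep zero replicas around and still respond quickly when traffic arrives, which is the resource saving Miguel points to.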

Published Date : Nov 20 2020


Serge Lucio, Glyn Martin & Jeffery Hammond V1


 

>> Announcer: From around the globe, it's theCUBE with digital coverage of DevOps virtual forum. Brought to you by Broadcom. >> Hi guys, welcome back. So we have discussed the current state and the near-future state of DevOps and how it's going to evolve from three unique perspectives. In this last segment, we're going to open up the floor and see if we can come to a shared understanding of where DevOps needs to go in order to be successful next year. Our guests today — you've seen them all before. Jeffrey Hammond is here, the VP and Principal Analyst serving CIOs at Forrester. We've also got Serge Lucio, the GM of Broadcom's Enterprise Software Division, and Glyn Martin, the head of QA Transformation at BT. Guys, welcome back. Great to have you all three together. >> Hi Lisa. (Serge speaks faintly) >> Good to be here. >> All right. So we're all very socially distanced, as we talked about before. Great to have this conversation. So let's start with one of the topics that we kicked off the forum with. Jeff, we're going to start with you: spiritual colocation. That's a really interesting topic that we've uncovered. But how much of the challenge is truly cultural, and what can we solve through technology? Jeff, we'll start with you, then Serge, then Glyn. Jeff, take it away. >> Yeah, I think fundamentally, you can have all the technology in the world, and if you don't make the right investments in the cultural practices in your development organization, you still won't be effective. Almost 10 years ago, I wrote a piece where I did a bunch of research around what made high-performance software delivery teams high performance. And one of the things that came out as part of that was that these teams have a high level of autonomy. And that's one of the things that you see coming out of the Agile Manifesto. Let's take that today, where developers are on their own in their own offices. If you've got teams where the team itself has a high level of autonomy, they know how to work, they can make decisions, they can move forward. They're not waiting for management to tell them what to do. And so what we have seen is that organizations that embraced autonomy and got their teams in the right place, where their teams had the information that they needed to make the right decisions, have actually been able to operate pretty well, even as they've been remote. And it's turned out to be things like, well, how do we actually push the software that we've created into production — that has become the challenge, not, are we writing the right software? And that's why I think the term spiritual colocation is so important. Because even though we may be physically distant, we're on the same plane; we're connected through a shared purpose. Serge and I worked together a long, long time ago — Serge, it's been what, almost 15, 16 years since we worked at the same place? And yet I would say there's probably still a certain level of spiritual colocation between us, because of the shared purposes that we've had in the past and what we've seen in the industry, and that's a really powerful tool to build on. So what role do tools play as part of that? To the extent that tools make information available to build shared purpose on, to the extent that they enable communication so that we can build that spiritual colocation, to the extent that they reinforce the culture that we want to put in place — they can be incredibly valuable, especially when we don't have the luxury of physical colocation.
Hope that makes sense. (chuckles) >> It does. I should have introduced this last segment as "we're all spiritually colocated." All right. So Serge, clearly you're still spiritually colocated with Jeff. Talk to me about your thoughts on spiritual colocation, the cultural impact, and how technology can move it forward. >> Yes — so, while I'm going to sound very similar to Jeff in that respect, I think it starts with kind of a shared purpose, and understanding how individuals and teams contribute to a business outcome. What are our shared goals, our shared vision, what is it we're trying to achieve collectively, and keeping aligned to that. Now, the big challenge, always, over the last 20 years, especially in large organizations, has been specialization of roles and functions. And so we all have started to basically measure what we do on a daily basis using metrics, which oftentimes are completely disconnected from the business outcome or the business purpose. We kind of revert to, okay, what is my database uptime? What is my cycle time? Right. And I think where we really should be focused as an industry is to start to basically provide a lens for these different stakeholders to look at what they're doing in the context of the business outcomes. So probably one of my favorite experiences was to actually witness, at one of our large financial institutions, two stakeholders across development and operations staring at the same data — data related to code changes, test execution results, coverage, vulnerabilities, and the overall rate and direction of incidents. And when you start to put these things in context and represent them in a way that these different stakeholders can look at from their different lenses, they can start to basically communicate and understand how they jointly contribute to that kind of common vision or objective. >> And Glyn, we talked a lot about transformation with you last time. What are your thoughts on spiritual colocation and the cultural part of the technology impact? >> Yeah, I mean, I agree with Jeffrey that the people and culture are the most important thing. Actually, that's why it's really important when you're transforming to have partners who have the same vision as you, who you can work with, who have the same end goal in mind. And we've certainly found that with our continuing relationship with Broadcom. What it also does — those tools can accelerate what you're doing and can drive consistency. You know, we've seen within Simplify, which is BT's flagship transformation program, where we're trying to, as it says, simplify the number of system stacks that we have, the number of products that we have — actually, at the moment we've got different value streams within that program who have organizational silos, who are trying to reinvent the wheel, who are still doing things manually. So in order to try and bring that consistency, we need the right tools that actually are at an enterprise grade, which can be flexible to work within BT, which is such a complex and very different environment depending on what area of BT you're in — whether it's consumer, whether it's the mobile area, whether it's large global or government organizations. We've found that we need tools that can drive that consistency, but also flex to Greenfield and Brownfield kinds of technologies as well. So it's really important, from a number of different aspects,
that you have the right partner to drive the right culture and the same vision, but also one who has the toolsets to help you accelerate. They can't do that on their own, but they can help accelerate what it is you're trying to do. And a really good example of that is we're trying to shift left, which is probably a bit of a buzz phrase in the testing world at the moment. But I could talk about things like Continuous Delivery Director, one of the Broadcom tools. It has many different features to it, but very simply, on its own it allows us to get visibility of what the teams are doing. And once we have that visibility, then we can talk to the teams around: could they be doing better component testing? Could they be using some virtualized services here or there? And that's not even the main purpose of Continuous Delivery Director, but it's just one way the tools themselves can give greater visibility, let us have much more intuitive and insightful conversations with other teams, and reduce those organizational silos. >> Thanks, Glyn. So we can kind of sum that up: autonomy, collaboration, tools that facilitate that. So let's talk now about metrics. From your perspective, what are the metrics that matter, Jeff? >> Well, I'm going to go right back to what Glyn said about data that provides visibility, that enables us to make decisions with shared purpose. And so business value has to be one of the first things that we look at. How do we assess whether we have built something that is valuable? That could be sales revenue, it could be Net Promoter Score if you're not selling what you've built, it could even be the level of reuse within your organization — are other teams picking up the services that you've created? One of the things that I've begun to see organizations do is to align value streams with customer journeys, and then to align teams with those value streams. So that's one of the ways that you get to a shared purpose, 'cause we're all trying to deliver around that customer journey and the value associated with it, and we're all measured on that. There are flow metrics, which are really important: how long does it take us to get a new feature out, from the time that we conceive it to the time that we can run our first experiments with it? There are quality metrics — some of the classics are maybe things like defect density or mean time to response. One of my favorites came from a company called Ultimate Software, where they looked at the ratio of defects found in production to defects found in pre-production, and their developers were in fact measured on that ratio — it told them that, guess what, quality is your job too, not just the test department's. The fourth level that I think is really important in the current situation that we're in is the level of engagement in your development organization. We used to joke that we measured this with the parking lot metric: how full was the parking lot at 9:00, and how full was it at 5 o'clock? I can't do that anymore, since we're not physically colocated. But what you can do is look at how folks are delivering. You can look at the metrics in your SCM environment, you can look at the relative rates of churn, you can look at things like, well, are our developers delivering during longer periods — earlier in the morning, later in the evening? Are they delivering on the weekends as well?
Are those signs that we might be heading toward burnout, because folks are still running at sprint levels instead of marathon levels? So all of those in combination — business value, flow, engagement and quality — I think form the backbone of any sort of metrics program. The second thing that I think you need to look at is what we are going to do with the data, and the philosophy behind the data is critical. Unfortunately, I see organizations where they weaponize the data, and that's completely the wrong way to look at it. What you need to do is say, "How is this data helping us to identify the blockers, the things that aren't allowing us to provide the right context for people to do the right thing? And then what do we do to remove those blockers, to make sure that we're giving these autonomous teams the context that they need to do their job in a way that creates the most value for the customers?" >> Great advice, Jeff. Glyn, over to you: metrics that matter to you, that really make a big impact. And also, how do you measure quality, kind of following on from the advice that Jeff provided? >> I mean, Jeff provided some great advice. Actually, he talks about value, he talks about flow — both of those things are very much on my mind at the moment. I listened to a speaker, Mik Kersten, a couple of months ago; he talked very much around how important flow management is, and using that to remove waste — to understand, in terms of making software changes, what is causing us to take longer than we need to. So where are those areas where it takes too long? So I think that's a very important thing. For us, it's even more basic than that at the moment. We're on a journey of moving from waterfall to agile. And the problem with moving from waterfall to agile is, with waterfall, the business had a kind of comfort that everything was tested together, and therefore it's safer. And with agile, there's that kind of, how do we make sure that, if we're doing things quick and we're getting stuff out the door, we give that confidence that it's ready to go? Or if there's a risk, that we're able to truly articulate what that risk is. So there's a bit about release confidence, and some of the metrics around that and how healthy those releases are. And actually saying, we've spent a lot of money and investment setting up agile teams, training agile teams — are we actually seeing them deliver more quickly? And are we actually seeing them deliver more value quickly? So yeah, those are the two main things for me at the moment. But I think it's also about generally bringing it all together — DevOps, value ops, AIOps. How do we actually bring that together so we can make quick decisions and make sure that we are delivering the biggest bang for our partners? >> Absolutely, biggest bang for the partners. Serge, your thoughts? >> Yes, I think we all agree, right? It starts with business metrics and flow metrics — these are some of the most important metrics. And ultimately, one of the things that's very common across highly functional teams is engagement, right? When you see a team that's highly functional, that's agile, that practices DevOps every day, they are highly engaged. That's definitely true. Now, back to Jeff's point on the weaponization of metrics: one of the key challenges we see is that organizations traditionally have been setting up benchmarks, right? So what is a good cycle time?
What is a good mean time to repair? The problem is that this is very contextual, right? It's going to vary quite a bit depending on the nature of the application and system. And so one of the things that we really need to evolve as an industry is to understand that it's not so much about those flow metrics; it's about whether these flow metrics ultimately contribute to the business metrics, to the business outcome. So that's one thing. The second aspect, I think, that's oftentimes misunderstood is that when you have a bad cycle time, or what you perceive as being a bad cycle time or bad quality, the problem is oftentimes, how do you go and explore why, right? What is the root cause of this? And I think one of the key challenges is that we tend to focus a lot of time on metrics and not on the anti-patterns, which are pretty common across the industry. If you look at, for instance, things like lead time, it's very common that organizational boundaries are going to be a key contributor to bad lead time. And so I think that beyond reviewing the metrics, there is a lot of work that we need to do in terms of classifying these anti-patterns. Back to you, Jeff — I think you're one of the co-authors of Water-Scrum-Fall as a key pattern in the industry, or anti-pattern. >> Yeah. >> And Water-Scrum-Fall is the key one, right? And you will detect that through kind of the defect arrival rates — that's right, it looks like an S curve. And so I think the output of the metrics is: what do you do with those metrics? >> Right. I'll tell you, Serge, one of the things that is really interesting to me in that space is, I think those of us that have been in the industry for a long time, we know the anti-patterns, 'cause we've seen them in our career (laughs), maybe multiple times. And one of the things that I think you could see tooling do is perhaps provide some notification of anti-patterns based on the telemetry that comes in. I think it would be a really interesting place to apply machine learning and reinforcement learning techniques. So hopefully that's something we'd see in the future with DevOps tools. 'Cause as a manager who's maybe only a 10-year veteran or a 15-year veteran, you may be seeing these anti-patterns for the first time, and it would sure be nice to know what to do when they start to pop up. (chuckles) >> It would, right? Insight, always helpful. All right, guys, I would like to get your final thoughts — the one thing that you believe our audience really needs to be on the lookout for and to put on our agendas for the next 12 months. Jeff, we'll go back to you. >> I would say, look for the opportunities that this disruption presents. And there are a couple that I see. First of all, as we shift to remote-centric working, we're unlocking new pools of talent, where it's possible to implement more geographic diversity. So look to that as part of your strategy. Number two, look for new types of tools. We've seen a lot of interest in the usage of low-code tools to very quickly develop applications. That's potentially part of a mainstream strategy as we go into 2021. Finally, make sure that you embrace this idea that you are supporting creative workers — that agile and DevOps are the peanut butter and chocolate to support creative workers with algorithmic capabilities. >> Peanut butter and chocolate. Glyn, where do we go from there? What's the one silver bullet that you think needs to be on the lookout for?
>> (indistinct) I certainly agree on low code — next year we'll see much more low code. We've already started moving towards more of a SaaS-based world, but low code also. I think as well, for me, we've still got one foot in the kind of cloud camp; we'll be fully trying to explore what that means going into next year and exploiting the capabilities of cloud. But I think the last thing for me is, how do you really instill quality throughout the life cycle? When I heard the phrase Water-Scrum-Fall, it kind of made me shudder, 'cause I know that's a problem — that's where we're at with some of our things at the moment. So we need to get beyond that. We need to be releasing changes more frequently into production, and actually being a bit more brave and having the confidence to do more testing in production and going straight to production itself. So expect to see much more of that next year. Yeah, thank you. I haven't got any food analogies, unfortunately. (laughs) >> We all need some peanut butter and chocolate. All right, Serge, take us home. What's that nugget you think everyone needs to have on their agendas? >> That's interesting, right? So a couple of days ago, we had the latest State of DevOps report, right? And if you read through the report, it's all about velocity, right? It's all about — we are still perceiving DevOps as being all about speed. And so to me, the key advice is, in order to create this spiritual colocation, in order to foster engagement, we have to go back to what it is we're trying to do collectively. We have to go back to tying everything to the business outcome. And so for me, it's absolutely imperative for organizations to start to map their value streams, to understand how they're delivering value, and to align everything they do — from metrics to delivery to flow — to those outcomes. And only with data, I think, are we going to be able to start to realign all these roles across the organization and drive not just speed, but business outcomes. >> All about business outcomes. I think the three of you could write a book together, so I'll give you that as food for thought. Thank you all so much for joining me today — I think this was an incredibly valuable, fruitful conversation, and we appreciate all of you taking the time to spiritually colocate with us today. Guys, thank you. >> Thank you, Lisa. >> Thank you. >> Thank you. >> For Jeff Hammond, Serge Lucio and Glyn Martin, I'm Lisa Martin. Thank you for watching the Broadcom DevOps virtual forum. (upbeat music)

Published Date : Nov 13 2020


ON DEMAND SPEED K8S DEV OPS SECURE SUPPLY CHAIN


 

>> In this session, we will be reviewing the power and benefits of implementing a secure software supply chain and how we can gain a cloud-like experience with the flexibility, speed and security of modern software delivery. Hi, I'm Matt Bentley and I run our technical pre-sales team here at Mirantis. I've spent the last six years working with customers on their containerization journey. One thing almost every one of my customers has focused on is how they can leverage the speed and agility benefits of containerizing their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we can provide flexibility to all layers of the stack, from the infrastructure on up to the application layer. When building a secure supply chain for container-focused platforms, I generally see two different mindsets in terms of where the responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure, yet robust service that fits their organization's goals around how modern applications are built and delivered. First, let's take a look at the developer or application team approach. This approach follows more of the DevOps philosophy, where developer and application teams are the owners of their applications from development through their life cycle, all the way to production. I would refer to this as more of a self-service model of application delivery and promotion when deployed to a container platform. This is fairly common in organizations where full stack responsibilities have been delegated to the application teams. Even in organizations where full stack ownership doesn't exist, I see the self-service application deployment model work very well in lab, development or non-production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers. In other organizations, there is a strong separation between responsibilities for developers and IT operations. This is often due to the complex nature of controlled processes related to compliance and regulatory needs. Developers are responsible for their application development. This can either include Docker at the development layer or be a more traditional throw-it-over-the-wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where container platforms can be delivered as a service to other consumers inside of the IT organization. This is fairly prescriptive in the manner in which application teams would consume it. When examining the two approaches, there are pros and cons to each. Process, controls and compliance are often seen as inhibitors to speed. Self-service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which lead to compliance issues. While self-service is great, without visibility into the utilization and optimization of those environments, it continues the cycle of inefficient resource utilization. And a true infrastructure-as-code experience requires DevOps-related coding skills that teams often have in pockets, but that maybe aren't ingrained in the company culture. Luckily for us, there is a middle ground for all of this.
Docker Enterprise Container Cloud provides the foundation for the cloud-like experience on any infrastructure, with all of the out-of-the-box security and controls that our professional services team and your operations teams would otherwise spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model, no matter if it is full stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens, to allow for multi-cluster visibility that is both developer and operator friendly. Lens provides immediate feedback on the health of your applications, observability for your clusters, fast context switching between environments, and allows you to choose the best tool for the task at hand, whether it is graphical user interface or command line interface driven. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds. You get DevOps speed with all the security and controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency and faster adoption. This all adds up to delivering business value to end users and the overall perceived value. Now let's look and see how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, the Docker Trusted Registry for our secure container registry, and the Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. We work with our teams of developers and operators to design a system that provides a fast, consistent and secure experience for developers, one that works for any application — Brownfield or Greenfield, monolith or microservice. Onboarding teams can be simplified with integrations into enterprise authentication services, calls to GitHub repositories, Jenkins access and jobs, Universal Control Plane and Docker Trusted Registry teams and organizations, Kubernetes namespaces with access control, creating Docker Trusted Registry namespaces with access control, image scanning and promotion policies. So now let's take a look and see what it looks like from the CI/CD process, including Jenkins. So let's start with Docker Desktop. From the Docker Desktop standpoint, we'll actually be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience. So no matter if we have one developer or a hundred, we're going to be able to walk through a consistent process for Docker container utilization at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository. In this case, we'll be using GitHub. Then when Jenkins picks up, it will check out that code from our source code repository, build our Docker containers, test the application, build the image, and then take the image and push it to our Docker Trusted Registry.
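Stepping back for a moment to the onboarding point above, here is a minimal sketch of the plain-Kubernetes equivalent of that per-team setup: a dedicated namespace plus a RoleBinding that grants a group from the enterprise identity provider edit rights in that namespace only. The namespace and group names are placeholders, and in Docker Enterprise this would typically be modeled through UCP's own organizations, teams and grants rather than raw manifests.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-web-dev                     # hypothetical per-team namespace
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: web-developers-edit
      namespace: team-web-dev
    subjects:
      - kind: Group
        name: web-developers                 # group synced from enterprise authentication
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: edit                             # built-in role: manage workloads, no RBAC changes
      apiGroup: rbac.authorization.k8s.io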
From there, we can scan the image and make sure it doesn't have any vulnerabilities. Then we can sign it. So once we've signed our images and deployed our application to dev, we can actually test our application deployed in our real environment. Jenkins will then test the deployed application, and if all tests show it as good, we'll promote our Docker image to production. So now, let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change that we want to make on our application. Our marketing team says we need to change "Containerized NGINX" to something more Mirantis branded. So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. So here's our application. We have our code loaded, and we're going to be able to use Docker Desktop in our local environment, with the Docker Desktop plugin for Visual Studio Code, to build our application inside of Docker without needing to run any command-line-specific tools. Here with our code, we'll be able to interact with Docker, make our changes, see it live, and quickly see if our changes actually made the impact that we're expecting on our application. So let's find our updated titles for the application, and let's go ahead and change that to our Mirantis-ized NGINX instead of Containerized NGINX. So we'll change it in the title and on the front page of the application. Now that we've saved that change to our application, we can actually take a look at our code here in VS Code. And as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image, and VS Code will take care of the automatic building of our application. So now we have a Docker image that has everything we need for our application inside of that image. Here we can actually just right-click on the image tag that we just created and do run. This will interactively run the container for us, and then once our container is running, we can just right-click and open it up in a browser. So here we can see the change to our application as it exists live. Once we can actually verify that our application is working as expected, we can stop our container. And then from here, we can actually make that change live by pushing it to our source code repository. So here, we're going to go ahead and make a commit message to say that we updated to our Mirantis branding. We will commit that change and then push it to our source code repository. Again, in this case, we're using GitHub as our source code repository. So here in VS Code, we'll have that pushed to our source code repository. And then we'll move on to our next environment, which is Jenkins. Jenkins is going to be picking up those changes for our application that it checked out from our source code repository. So GitHub notifies Jenkins that there's a change. It checks out the code and builds our Docker image using the Dockerfile. So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, doing our tests, pushing it into our Docker Trusted Registry, scanning it and signing our image in our Docker Trusted Registry, and then deploying to our development environment. So let's actually take a look at that development environment as it's been deployed.
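To make the dev deployment step concrete, here is a hedged sketch of the kind of Deployment and Service the pipeline could apply once Jenkins has pushed the freshly built image to the Docker Trusted Registry. The registry hostname, repository, tag and namespace are placeholders, not values from the demo.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: simple-nginx
      namespace: dev                          # hypothetical dev namespace
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: simple-nginx
      template:
        metadata:
          labels:
            app: simple-nginx
        spec:
          containers:
            - name: simple-nginx
              image: dtr.example.com/dev/simple-nginx:build-42   # image pulled from DTR, not Docker Hub
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: simple-nginx
      namespace: dev
    spec:
      selector:
        app: simple-nginx
      ports:
        - port: 80
          targetPort: 80

Promotion to production can then be as simple as retagging the same signed image into the production repository and applying the same manifests against the production namespace.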
So, here we can see that our title has been updated on our application, and we can verify that it looks good in development. If we jump back here to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests for our development environment. Everything worked as expected, so it promoted that image to our production repository in our Docker Trusted Registry. Then we're also going to sign that image, signing off that yes, it has made it through our integration tests and it's deployed to production. So here in Jenkins, we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner. So now, let's take a look at our Docker Trusted Registry, where we can see the namespace for our application and our simple NGINX repository. From here, we'll be able to see information about the application image that we've pushed into the registry, such as the image signature and when it was pushed and by whom, and then we'll also be able to see the scan results for our image. In this case, we can actually see that there are vulnerabilities for our image, and we'll take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. From here, these image layers give us details about where the vulnerabilities are located and what those vulnerabilities actually are. So if we click on the vulnerability, we can see specific information about that vulnerability, giving us details around the severity and more information about what exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how exactly you would remediate them in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of that and update it. One of the ways that we can help secure that as part of the supply chain is to take a look at where we get the base layers of our images. Docker Hub really provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to the security of those images. The official images from Docker Hub are curated by Docker, open source projects and other vendors. One of the most important use cases is around how you get base images into your environment. It is much easier to consume the base operating system layer images than to build your own and also try to maintain them. Instead of just blindly trusting the content from Docker Hub, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull that into our own Docker Trusted Registry using our mirroring feature. Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can then scan them to ensure that the images meet our security requirements. And then, based off of the scan result, promote the image to a public repository where we can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment.
So from here, we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now, let's take a look at how we can provide that secure content for our developers in our own Docker Trusted Registry. So in this case, we're taking a look at our Alpine image that we've mirrored into our Docker Trusted Registry. Here, we're looking at the staging area where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active. And then we can see that our image mirroring will pull our content from Docker Hub and make it available in our Docker Trusted Registry in an automatic fashion. From here, we can actually take a look at the promotions to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry so that content gets promoted to a public repository for internal users to consume, based off of the vulnerabilities that are found or not found inside of the Docker image. So for our actual users, the way they would consume this content is by taking a look at the official images that we've made available to them. Here again, looking at our Alpine image, we can take a look at the tags that exist, and we can see that we have our content that has been made available. So we've pulled in all sorts of content from Docker Hub. In this case, we've even pulled in the multi-architecture images, which we can scan due to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens provides capabilities to give developers a quick, opinionated view that focuses on how they would want to view, manage and inspect applications deployed to a Kubernetes cluster. Lens integrates natively out of the box with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's actually filter down to the application that we just deployed to our development environment. Here, we can see the pod for our application, and when we click on that, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch contexts between different clusters that we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, to make it much simpler to consume and version our applications. In this case, let's take a look at the application that we just built and deployed. Our simple NGINX application has been bundled up as a Helm chart and is made available through Lens. Here, we can just click on the description of our application to see more information about the Helm chart. So we can publish whatever information may be relevant about our application, and through one click, we can install our Helm chart. Here, it will show us the actual details of the Helm chart. So before we install it, we can actually look at those individual components.
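For orientation, here is a minimal sketch of how an application like the demo's simple NGINX service might be packaged as a Helm chart — a Chart.yaml plus the values a consumer would typically override. The names, repository and host are illustrative assumptions, not the actual chart shown in Lens.

    # Chart.yaml
    apiVersion: v2
    name: simple-nginx
    description: Demo NGINX application packaged for Helm
    type: application
    version: 0.1.0
    appVersion: "1.0"

    # values.yaml
    replicaCount: 2
    image:
      repository: dtr.example.com/prod/simple-nginx   # signed image promoted to production
      tag: "1.0"
    service:
      port: 80
    ingress:
      enabled: true
      host: simple-nginx.apps.example.com             # placeholder hostname

Versioning the chart separately from the image tag lets the same templates be reused while the promoted, signed image moves through the dev and production repositories.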
So in this case, we can see this created an ingress rule, and this tells Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to, and in this case, we're actually going to do a quick test here, because we're trying to deploy the application from Docker Hub. In our Universal Control Plane, we've turned on Docker Content Trust policy enforcement, so this is actually going to fail to deploy. Because we're trying to deploy our application from Docker Hub, the image hasn't been properly signed in our environment, so the Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, which has a proper signature. We can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see that simple NGINX application, and in this case, we'll get details around the actual deployed Helm chart. The nice thing is that Lens provides us this capability with Helm to see all of the components that make up our application. From this view, it's giving us that single pane of glass into that specific application, so that we know all of the components it created inside of Kubernetes. There are specific details that can help us access the application, such as that ingress rule that we just talked about — it gives us the details of that, but it also gives us the resources such as the service, the deployment and the ingress that have been created within Kubernetes for the application to actually exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations control processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
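To round out the walkthrough, this is a hedged sketch of the kind of ingress rule the chart creates — the resource Lens surfaces in the release view alongside the service and deployment. The hostname, namespace and ingress class are placeholders.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: simple-nginx
      namespace: prod                                  # hypothetical production namespace
      annotations:
        kubernetes.io/ingress.class: nginx             # assumes an NGINX ingress controller
    spec:
      rules:
        - host: simple-nginx.apps.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: simple-nginx
                    port:
                      number: 80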

Published Date : Sep 14 2020


Speed K8S Dev Ops Secure Supply Chain


 

>>this session will be reviewing the power benefits of implementing a secure software supply chain and how we can gain a cloud like experience with flexibility, speed and security off modern software delivery. Hi, I'm Matt Bentley, and I run our technical pre sales team here. Um Iran. Tous I spent the last six years working with customers on their container ization journey. One thing almost every one of my customers is focused on how they can leverage the speed and agility benefits of contain arising their applications while continuing to apply the same security controls. One of the most important things to remember is that we are all doing this for one reason, and that is for our applications. So now let's take a look at how we could provide flexibility all layers of the stack from the infrastructure on up to the application layer. When building a secure supply chain for container focus platforms, I generally see two different mindsets in terms of where the responsibilities lie between the developers of the applications and the operations teams who run the middleware platforms. Most organizations are looking to build a secure yet robust service that fits the organization's goals around how modern applications are built and delivered. Yeah. First, let's take a look at the developer or application team approach. This approach follows Mawr of the Dev ops philosophy, where a developer and application teams are the owners of their applications. From the development through their life cycle, all the way to production. I would refer this more of a self service model of application, delivery and promotion when deployed to a container platform. This is fairly common organizations where full stack responsibilities have been delegated to the application teams, even in organizations were full stack ownership doesn't exist. I see the self service application deployment model work very well in lab development or non production environments. This allows teams to experiment with newer technologies, which is one of the most effective benefits of utilizing containers and other organizations. There's a strong separation between responsibilities for developers and I T operations. This is often do the complex nature of controlled processes related to the compliance and regulatory needs. Developers are responsible for their application development. This can either include doctorate the development layer or b'more traditional throw it over the wall approach to application development. There's also quite a common experience around building a center of excellence with this approach, where we can take container platforms and be delivered as a service to other consumers inside of the I T organization. This is fairly prescriptive, in the manner of which application teams would consume it. When examining the two approaches, there are pros and cons to each process. Controls and appliance are often seen as inhibitors to speak. Self service creation, starting with the infrastructure layer, leads to inconsistency, security and control concerns, which leads to compliance issues. While self service is great without visibility into the utilization and optimization of those environments, it continues the cycles of inefficient resource utilization and the true infrastructure is a code. Experience requires Dev ops related coding skills that teams often have in pockets but maybe aren't ingrained in the company culture. 
Luckily for us, there is a middle ground for all of this. Docker Enterprise Container Cloud provides the foundation for the cloud-like experience on any infrastructure, with all of the out-of-the-box security and controls that our Professional Services team and your operations team would otherwise spend their time designing and implementing. This removes much of the additional work and worry around ensuring that your clusters and experiences are consistent, while maintaining the ideal self-service model, no matter if it is full-stack ownership or easing the needs of IT operations. We're also bringing the most natural Kubernetes experience today with Lens, to allow for multi-cluster visibility that is both developer and operator friendly. Lens provides immediate feedback on the health of your applications, observability for your clusters, fast context switching between environments, and lets you choose the best tool for the task at hand, whether it is graphical-user-interface or command-line-interface driven. Combining the cloud-like experience with the efficiencies of a secure supply chain that meets your needs brings you the best of both worlds. You get DevOps speed with all the security controls to meet the regulations your business lives by. We're talking about more frequent deployments, faster time to recover from application issues, and better code quality. As you can see from the customers we have worked with, we're able to tie these processes back to real cost savings, real efficiency, and faster adoption. This all adds up to delivering business value to end users and to the overall perceived value. Now let's look at how we're able to actually build a secure supply chain to help deliver these sorts of initiatives. In our example secure supply chain, we're utilizing Docker Desktop to help with consistency of developer experience, GitHub for our source control, Jenkins for our CI/CD tooling, the Docker Trusted Registry for our secure container registry, and the Universal Control Plane to provide us with our secure container runtime with Kubernetes and Swarm, providing a consistent experience no matter where our clusters are deployed. We work with our teams of developers and operators to design a system that provides a fast, consistent, and secure experience for developers that works for any application, brownfield or greenfield, monolith or microservice. Onboarding teams can be simplified with integrations into enterprise authentication services: calls to GitHub repositories; Jenkins access and jobs; Universal Control Plane and Docker Trusted Registry teams and organizations; Kubernetes namespaces with access control; and Docker Trusted Registry namespaces with access control, image scanning, and promotion policies. So now let's take a look and see what it looks like from the CI/CD process, including Jenkins. Let's start with Docker Desktop. From the Docker Desktop standpoint, we'll be utilizing Visual Studio Code and Docker Desktop to provide a consistent developer experience. So no matter if we have one developer or 100, we're going to be able to walk through a consistent process for Docker container utilization at the development layer. Once we've made our changes to our code, we'll be able to check those into our source code repository; in this case, we're using GitHub. 
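For readers who want to try the VS Code / Docker Desktop inner loop described above outside of the editor, here is a minimal sketch using the Docker SDK for Python (docker-py). The image name, Dockerfile path, port, and the "Mirantis" title check are placeholder assumptions, not values taken from the demo.

```python
# Minimal sketch of the local build-run-verify loop, assuming a Dockerfile
# in ./app that serves HTTP on container port 80 (hypothetical names/paths).
import time
import urllib.request

import docker

client = docker.from_env()

# Build the image from the local Dockerfile, tagging it for later use.
image, build_logs = client.images.build(path="./app", tag="demo/simple-nginx:dev")

# Run the container in the background, mapping container port 80 to local 8080.
container = client.containers.run(image.id, detach=True, ports={"80/tcp": 8080})

try:
    time.sleep(2)  # give the web server a moment to come up
    # Quick smoke test: fetch the front page and confirm the new title is there.
    html = urllib.request.urlopen("http://localhost:8080").read().decode()
    print("Title updated:", "Mirantis" in html)
finally:
    # Stop and remove the container once we have verified the change.
    container.stop()
    container.remove()
```

The same loop is what the editor plug-in automates with right-click build and run.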
Then, when Jenkins picks up, it will check out that code from our source code repository, build our Docker containers, test the application, build the image, and then take the image and push it to our Docker Trusted Registry. From there, we can scan the image to make sure it doesn't have any vulnerabilities, and then we can sign it. So once we've signed our images and deployed our application to dev, we can actually test our application deployed in a real environment. Jenkins will then test the deployed application, and if all tests show that it is good, it will promote the image in our Docker Trusted Registry to production. So now let's look at the process, beginning from the developer interaction. First of all, let's take a look at our application as it's deployed today. Here, we can see that we have a change that we want to make on our application. The marketing team says we need to change "Containerized NGINX" to something more Mirantis-branded. So let's take a look at Visual Studio Code, which we'll be using as our IDE to change our application. So here's our application. We have our code loaded, and we're going to be able to use Docker Desktop on our local environment, with our Docker Desktop plug-in for Visual Studio Code, to build our application inside of Docker without needing to run any command-line-specific tools. Here in our code, we'll be able to interact with Docker, make our changes, see it live, and quickly see if our changes actually made the impact that we're expecting on our application. Let's find the updated titles for the application and change that to "Mirantis NGINX" instead of "Containerized NGINX." So we'll change the title and the heading on the front page of the application, and then save that change to our application. We can take a look at our code here in VS Code, and as simple as this, we can right-click on the Dockerfile and build our application. We give it a name for our Docker image, and VS Code will take care of automatically building our application. So now we have a Docker image that has everything we need for our application inside of that image. From here, we can just right-click on the image tag that we just created and select "Run." This will interactively run the container for us, and then once our container's running, we can just right-click and open it up in a browser. So here we can see the change to our application as it exists live. Once we can verify that our application is working as expected, we can stop our container, and then from here we can actually make that change live by pushing it to our source code repository. So here we're going to go ahead and make a commit message to say that we updated to our Mirantis branding. We will commit that change and then push it to our source code repository; again, in this case we're using GitHub as our source code repository. So here in VS Code we'll have that pushed to our source code repository, and then we'll move on to our next environment, which is Jenkins. Jenkins is going to be picking up those changes for our application once it has checked them out from our source code repository. So GitHub notifies Jenkins that there is a change, Jenkins checks out the code, and it builds our Docker image using the Dockerfile. 
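As a rough illustration of what the Jenkins job does after GitHub notifies it, the sketch below builds the image from the checked-out source and pushes it to a private registry using the Docker SDK for Python. The registry address, repository, tag, and credentials are hypothetical, and the scanning and Notary-based signing steps that Docker Trusted Registry and Docker Content Trust perform in the demo are only noted in comments, not implemented here.

```python
# Sketch of the CI build-and-push step, with made-up registry details.
# Scanning happens in DTR on push; signing is handled by Docker Content
# Trust / Notary and is not reproduced in this snippet.
import docker

REGISTRY = "dtr.example.com"            # hypothetical DTR hostname
REPO = f"{REGISTRY}/dev/simple-nginx"   # dev repository, promoted later
TAG = "build-42"                        # would normally come from the CI build number

client = docker.from_env()
client.login(username="ci-bot", password="not-a-real-secret", registry=REGISTRY)

# Build from the checked-out source and tag directly for the registry.
image, _ = client.images.build(path=".", tag=f"{REPO}:{TAG}")

# Push the image; DTR would scan it on push, and a later stage signs it.
for line in client.images.push(REPO, tag=TAG, stream=True, decode=True):
    if "status" in line:
        print(line["status"])
```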
So we're getting a consistent experience between the local development environment on our desktop and Jenkins, where we're actually building our application, running our tests, pushing it to our Docker Trusted Registry, scanning it, and signing our image in our Docker Trusted Registry, and then deploying to our development environment. So let's actually take a look at that development environment as it's been deployed. Here we can see that our title has been updated on our application, so we can verify that it looks good in development. If we jump back to Jenkins, we'll see that Jenkins goes ahead and runs our integration tests for the development environment. Everything worked as expected, so it promoted that image to the production repository in our Docker Trusted Registry, where we're then also going to sign that image. So we're signing off that, yes, this has made it through our integration tests, and it's deployed to production. Here in Jenkins, we can take a look at our deployed production environment, where our application is live in production. We've made a change in an automated and very secure manner. So now let's take a look at our Docker Trusted Registry, where we can see our namespace for the application and our simple NGINX repository. From here we will be able to see information about the application image that we've pushed into the registry, such as the image signature, when it was pushed, and by whom, and then we'll also be able to see the scan results for our image. In this case, we can actually see that there are vulnerabilities for our image, so we'll take a look at that. Docker Trusted Registry does binary-level scanning, so we get detailed information about our individual image layers. These image layers give us details about where the vulnerabilities are located and what those vulnerabilities actually are. So if we click on a vulnerability, we can see specific information about it, giving us details around the severity and more information about what exactly is vulnerable inside of our container. One of the challenges that you often face around vulnerabilities is how exactly you would remediate them in a secure supply chain. So let's take a look at that. In the example that we were looking at, the vulnerability is actually in the base layer of our image. In order to pull in a new base layer for our image, we need to actually find the source of it and update it. One of the ways that we can help secure that as part of the supply chain is to take a look at where we get the base layers of our images. Docker Hub really provides a great source of content to start from, but opening up Docker Hub within your organization opens up all sorts of security concerns around the origins of that content. Not all images are made equal when it comes to the security of those images; the official images from Docker, however, are curated by Docker, open source projects, and other vendors. One of the most important use cases is around how you get base images into your environment. It is much easier to consume the base operating system layer images than to build your own and try to maintain them. Instead of just blindly trusting the content from Docker Hub, we can take a set of content that we find useful, such as those base image layers or content from vendors, and pull that into our own Docker Trusted Registry using our mirroring feature. 
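DTR's mirroring feature does this automatically; purely to illustrate the flow, the sketch below pulls a base image from Docker Hub, retags it into a staging namespace on an internal registry, and pushes it there so it can be scanned. The registry hostname and the "staging" namespace are assumptions, not values from the demo.

```python
# Manual equivalent of the mirroring step: pull from Docker Hub, retag into a
# staging namespace on an internal registry, and push so it can be scanned.
# "dtr.example.com" and the "staging" namespace are hypothetical.
import docker

client = docker.from_env()

# Pull the upstream base image from Docker Hub.
upstream = client.images.pull("alpine", tag="3.12")

# Retag it for the internal registry's staging area.
staged_ref = "dtr.example.com/staging/alpine"
upstream.tag(staged_ref, tag="3.12")

# Push to staging; a scan-and-promote policy would then decide whether it
# reaches the curated namespace that developers actually consume.
client.images.push(staged_ref, tag="3.12")
```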
Once the images have been mirrored into a staging area of our Docker Trusted Registry, we can then scan them to ensure that the images meet our security requirements and then, based off the scan result, promote the image to a public repository, where we can actually sign the images and make them available to our internal consumers to meet their needs. This allows us to provide a set of curated content that we know is secure and controlled within our environment. So from here we can find our updated Docker image in our Docker Trusted Registry, where we can see that the vulnerabilities have been resolved. From a developer's point of view, that's about as smooth as the process gets. Now let's take a look at how we can provide that secure content for developers in our own Docker Trusted Registry. In this case, we're taking a look at our Alpine image that we've mirrored into our Docker Trusted Registry. Here we're looking at the staging area, where the images get temporarily pulled, because we have to pull them in order to actually be able to scan them. So here we set up mirroring, and we can quickly turn it on by making it active. Then we can see that our image mirroring will pull our content from Docker Hub and make it available in our Docker Trusted Registry in an automatic fashion. From here, we can take a look at the promotions to see how exactly we promote our images. In this case, we created a promotion policy within Docker Trusted Registry so that content gets promoted to a public repository for internal users to consume, based off of the vulnerabilities that are found or not found inside of the Docker image. How our users would actually consume this content is by taking a look at the public, official images that we've made available to them. Here again, looking at our Alpine image, we can take a look at the tags that exist. We can see that we have our content available, so we've pulled in all sorts of content from Docker Hub. In this case, we have even pulled in the multi-architecture images, which we can scan due to the binary-level nature of our scanning solution. Now let's take a look at Lens. Lens provides capabilities to give developers a quick, opinionated view that focuses on how they would want to view, manage, and inspect applications deployed to a Kubernetes cluster. Lens integrates natively out of the box with Universal Control Plane client bundles, so your automatically generated TLS certificates from UCP just work. Inside our organization, we want to give our developers the ability to see their applications in a very easy-to-view manner. So in this case, let's actually filter down to the application that we just deployed to our development environment. Here we can see the pod for the application, and when we click on that, we get instant, detailed feedback about the components and information that this pod is utilizing. We can also see here in Lens that it gives us the ability to quickly switch context between the different clusters that we have access to. With that, we also have capabilities to quickly deploy other types of components. One of those is Helm charts. Helm charts are a great way to package up applications, especially those that may be more complex, to make it much simpler to consume and version our applications. In this case, let's take a look at the application that we just built and deployed: our simple NGINX. 
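The promotion policy described above is configured inside Docker Trusted Registry itself; the snippet below is only a conceptual stand-in for the kind of gate such a policy applies, using an invented scan-report format rather than DTR's actual API or policy syntax.

```python
# Conceptual promotion gate: promote an image to the production repository only
# if its scan report contains no vulnerabilities above an allowed severity.
# The scan-report structure and severities here are invented for illustration.
ALLOWED = {"low", "medium"}   # severities tolerated by this hypothetical policy

def may_promote(scan_report):
    """Return True if no finding exceeds the allowed severities."""
    return all(finding["severity"].lower() in ALLOWED for finding in scan_report)

report = [
    {"cve": "CVE-2020-0001", "severity": "medium", "layer": "base-os"},
    {"cve": "CVE-2020-0002", "severity": "critical", "layer": "base-os"},
]

if may_promote(report):
    print("Promote image to the production repository and sign it")
else:
    print("Block promotion; remediate the base layer first")
```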
Our application has been bundled up as a Helm chart and made available through Lens. Here, we can just click on the description of our application to see more information about the Helm chart, so we can publish whatever information may be relevant about our application, and with one click we can install our Helm chart. It will show us the actual details of the Helm chart, so before we install it, we can look at the individual components. In this case, we can see that it creates an ingress rule, and it will also tell Kubernetes how to create the specific components of our application. We just have to pick a namespace to deploy it to. In this case, we're actually going to do a quick test, because we're trying to deploy the application from Docker Hub, and in our Universal Control Plane we've turned on Docker Content Trust policy enforcement. So this is actually going to fail to deploy, because we're trying to deploy the application from Docker Hub and the image hasn't been properly signed in our environment. The Docker Content Trust policy enforcement prevents us from deploying our Docker image from Docker Hub. In this case, we have to go through our approved process, through our secure supply chain, to ensure that we know where our image came from and that it meets our quality standards. So if we comment out the Docker Hub repository, comment in our Docker Trusted Registry repository, and click install, it will then install the Helm chart with our Docker image being pulled from our DTR, which has a proper signature, and we can see that our application has been successfully deployed through our Helm chart releases view. From here, we can see that simple NGINX application, and in this case we'll get details around the actual deployment and Helm chart. The nice thing is that Lens provides us this capability with Helm: being able to see all the components that make up our application from this view gives us that single pane of glass into that specific application, so that we know all the components it has created inside of Kubernetes. There are specific details that can help us access the application, such as that ingress rule we just talked about, and it also gives us the resources, such as the service, the deployment, and the ingress, that have been created within Kubernetes for the application to exist. So to recap, we've covered how we can offer all the benefits of a cloud-like experience and offer flexibility around DevOps and operations-controlled processes through the use of a secure supply chain, allowing our developers to spend more time developing and our operators more time designing systems that meet our security and compliance concerns.
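The repository switch at the end of the demo can also be expressed as a Helm value override instead of editing the chart. The sketch below shells out to the Helm CLI; the chart path, release name, namespace, and registry value are placeholders, and it assumes the chart exposes image.repository and image.tag values. Whichever way the chart is installed, UCP's content trust enforcement still decides server-side whether the referenced image is allowed to run.

```python
# Sketch: install the chart with the image pulled from the internal registry
# instead of Docker Hub, assuming the chart exposes image.repository/image.tag
# values. All names and paths below are placeholders.
import subprocess

subprocess.run(
    [
        "helm", "upgrade", "--install", "simple-nginx", "./charts/simple-nginx",
        "--namespace", "demo",
        "--set", "image.repository=dtr.example.com/official/simple-nginx",
        "--set", "image.tag=1.0.0",
    ],
    check=True,
)
```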

Published Date : Sep 12 2020

SUMMARY :

So now let's take a look at how we could provide flexibility all layers of the stack from the and on the front page of the application, so that we save. So here we can see the change to our application as it exists live. So here we can So here in Jenkins, we could take a look at our deployed production environment where our application So let's take a look at that and the example that we were looking at of the most important use cases is around how you get base images into your So in this case, let's actually filter down to the application that we just deployed to our development environment.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Matt BentleyPERSON

0.99+

UCPORGANIZATION

0.99+

MawrPERSON

0.99+

FirstQUANTITY

0.99+

CooperPERSON

0.99+

OneQUANTITY

0.99+

100QUANTITY

0.99+

one reasonQUANTITY

0.99+

two approachesQUANTITY

0.99+

todayDATE

0.99+

bothQUANTITY

0.99+

Dr HubORGANIZATION

0.98+

DavePERSON

0.98+

oneQUANTITY

0.98+

JenkinsTITLE

0.97+

twoQUANTITY

0.97+

LindsORGANIZATION

0.97+

IranLOCATION

0.97+

One thingQUANTITY

0.97+

one developerQUANTITY

0.96+

DACATITLE

0.95+

each processQUANTITY

0.95+

Dr DesktopTITLE

0.93+

one clickQUANTITY

0.92+

single paneQUANTITY

0.92+

both worldsQUANTITY

0.91+

Thean MidgePERSON

0.91+

dockerTITLE

0.89+

three graphical userQUANTITY

0.86+

MantisORGANIZATION

0.85+

last six yearsDATE

0.84+

DrORGANIZATION

0.82+

MirandaORGANIZATION

0.81+

BrownfieldORGANIZATION

0.8+

this winterDATE

0.75+

waysQUANTITY

0.75+

CTITLE

0.74+

one ofQUANTITY

0.74+

LindsayORGANIZATION

0.72+

ingressTITLE

0.71+

AlpineORGANIZATION

0.69+

most important use casesQUANTITY

0.67+

Cooper DaysORGANIZATION

0.66+

JenkinsPERSON

0.65+

mindsetsQUANTITY

0.63+

GreenfieldLOCATION

0.62+

MirandaPERSON

0.62+

RPERSON

0.59+

C A CTITLE

0.59+

LinzTITLE

0.59+

every oneQUANTITY

0.56+

challengesQUANTITY

0.53+

EnterpriseCOMMERCIAL_ITEM

0.5+

2.4OTHER

0.5+

HubORGANIZATION

0.48+

K8STITLE

0.48+

LensTITLE

0.44+

DocORGANIZATION

0.4+

HelpPERSON

0.39+

DockerORGANIZATION

0.37+

AlpineOTHER

0.35+

Tina Nolte & Tenry Fu, Spectro Cloud | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> Man: from around the globe, it's "theCUBE" with coverage of "Kubecon" and "CloudNativeCon Europe 2020", virtual. Brought to you by Red Hat, the cloud native computing foundation and ecosystem partners. >> Welcome back, I'm Stu Miniman, and this is "theCUBE's" coverage of KubeCon CloudNativeCon Europe 2020, the virtual edition of course, it, this ecosystem has been bustling, a lot of activity in the five years that we've been covering it with "theCUBE" we've watched very much the maturation of what's going on. Remember, in the early days, it was open source projects, companies pulling all the pieces together. Now, there's a lot more things to choose from lots of projects, not just Kubernetes, but all the other pieces, and still lots of new innovations and new startups coming into the space. So happy to welcome to the program, have two first time guests from Spectro Cloud, first of all, we have the co founder and CEO Tenry Fu, and also Tina Notle who's the Vice President of product, Tina and Tenry, thank you so much for joining us. >> Thank you for having us. >> Likewise. >> All right, so Tenry, as one of the co founders, I want to understand, you know, why Spectro Cloud? Why now, you know, many outsiders, would they have said for a while, you know, Kubernetes, it's just getting baked into all of the environment. They looked at all the platforms, whether you're talking, you know, Google and AWS or VMware, they all have their platforms, they all have their managed services offering. So help us understand, what your team does and how you differentiate from what's already existing. >> Absolutely yeah, so I actually used to work at VMware, I, and then, I saw clouds taking off right and then I left VMware, to start my first startup called CliQr Technologies, which focus on multicloud management. But at that time, really, multicloud management through a single pane of glass is obviously right, and then clicker later acquired by Cisco. So at Cisco, I kind of witness The Container and Kubernetes taking off, right? And it makes a lot of sense, right for the first time both the application workloads and infrastructure became truly portable across multiple environments, but also very interestingly at Cisco I observed there are many developer teams, right? That is adopting Kubernetes and everyone is doing a little bit different things, that because different teams, they have a different stack constructor requirements, like some for AI/ML, some, they need a different base OS, some they just don't want to have a different version, and a lot of existing solutions doesn't really provide this kind of flexibility to satisfy all the different needs, right? one size fit all, typically is a one size fit for nothing. So we asked ourselves, why can't we try to create a platform that will give people the flexibility, but not turning it into a DIY project, right, still have a full manageability, so that user don't need to worry about the upgrade, Day Two operations, governance so and so forth. >> Yeah to Tina, I know when I've looked at your product, it's discussed as layers, which my background's in networking. So I love seeing things visually and understanding the pieces as they lay out the stack. So maybe help us understand a little bit as to, you know, that the flexibility that you give and how it's not just the Paradox of Choice, just too many options out there and you know, developers left to create their own mess that they can't then support. 
(laughing) >> Yeah, so you know, as Tenry mentioned, offering folks flexibility without turning into a do-it-yourself, you know, hot mess is what we're helping people do at Spectro Cloud. The core of our solution, the core of the differentiation within our solution, is around this concept of a cluster profile, and as you mentioned, a cluster profile basically allows people to define, in a layered fashion, what's part of their Kubernetes infrastructure stack. So at the bottom, you're talking, what's the base operating system? What's the version of Kubernetes that's going to be part of clusters that use this profile? What do your networking and storage interfaces look like? And then on top of that, you have a number of optional layers. So again, you know, back to flexibility and manageability, we give people options around what those other layers look like on top. They include everything from security, logging, monitoring, etc., just anything that you want to go ahead and kind of bake into a definition, a profile, of what a cluster should look like in one of your deployed environments. >> All right, well, I want to make sure I understand: when you talk about Kubernetes in there, can it be, you know, say VMware, with vSphere 7, now has Kubernetes support. Red Hat OpenShift is an option, all of the cloud players have their, you know, AKS, EKS. Can I bake that Kubernetes in, or are you taking a different approach? >> We're going with upstream vanilla Kubernetes today; that allows us to go ahead and provide what's newest within the ecosystem, and let people go ahead and have a really open solution. >> Okay, so when you look out there, a lot of companies are saying, how can I manage multiple clusters? So if you look at what Google, Microsoft, and VMware are talking about, it's, we can manage our clusters and we can also help you with those other clusters. How does that impact your solution, Tenry? Does it need to be just the upstream solution that I put into that cluster profile, or can I connect to, say, a managed cloud solution? >> Yeah, so I think in terms of multi-cluster management, the consistency is really the key, right. So through this cluster profile concept, not only can it be used as the initial template to deploy a cluster, but it can also be used as a single source of truth to drive the cluster lifecycle management, including upgrades. So right now, as Tina mentioned, we primarily focus on upstream, because we want to provide the maximum flexibility in terms of our end-to-end Kubernetes stack. But we do also have a plan that down the road we will take on brownfield existing clusters, so that an enterprise's existing investment in their Kubernetes infrastructure can be brought under management by us. >> Well, there always reaches a time when the brand-new technology gets called brownfield. I think that's the first time I've heard something like, you know, EKS or the like referred to as brownfield. Tina, you know, when I think back to my history with integrated solutions, obviously, if I have the various pieces, it should be easier for me to stay on the latest, make upgrades, roll things forward or roll things back. But, you know, give us if you could some of the key values of building these cluster profiles, what that enables for your customers. >> So the key around cluster profiles is we offer this policy-based management, so you describe, as an administrator, what it is that those clusters need to look like, right? 
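To make the layering Tina describes concrete, here is a hypothetical way of writing a cluster profile down as plain data. This is not Spectro Cloud's actual schema or API, just an illustration of the layers she lists: base operating system, Kubernetes version, networking and storage interfaces, and optional add-ons on top.

```python
# Illustrative cluster profile as plain data; the field names are invented and
# do not reflect Spectro Cloud's real schema.
cluster_profile = {
    "name": "ai-ml-team-profile",
    "layers": {
        "os": {"name": "ubuntu", "version": "18.04"},
        "kubernetes": {"distribution": "upstream", "version": "1.19.1"},
        "cni": {"name": "calico", "version": "3.16"},      # networking interface
        "csi": {"name": "vsphere-csi", "version": "2.0"},  # storage interface
    },
    # Optional add-on layers baked into every cluster deployed from this profile.
    "addons": [
        {"name": "prometheus-operator", "purpose": "monitoring"},
        {"name": "fluentd", "purpose": "logging"},
        {"name": "falco", "purpose": "security"},
    ],
}

def describe(profile):
    k8s = profile["layers"]["kubernetes"]
    return f'{profile["name"]}: Kubernetes {k8s["version"]} + {len(profile["addons"])} add-ons'

print(describe(cluster_profile))
```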
And we've got, we adopt a declarative, desired-state, you know, management approach along the lines of what Kubernetes does itself, and so what you're able to get through adopting and utilizing cluster profiles is this guarantee that from deployment and then into day two as well, what you've described in this profile winds up maintaining itself; it remains true of the clusters that have been deployed. So, what it is that you require as far as the operating system, what is required as far as some configuration options, etc. The profile itself winds up being the ground source of truth around what it is that you've got running at all these various locations, across clouds, across different clusters, etc. >> All right. Tenry, you mentioned that having things more standardized is going to help customers; absolutely, we saw that in data centers for a long time. How do you help customers make sure that the configurations that they build are going to work, are going to be stable, and if they make changes, that they're not going to get things out of sync? Is there, you know, an interoperability matrix or some other way that you're trying to make sure that customers, you know, stay on the rails, if you will? >> Absolutely right. So through our system, right, all the integration points, we carry the additional metadata, right, to basically give the hint about compatibility, resource constraints, right, and also the upgradability, in terms of moving from one version to another. So this way, we can kind of give you some guidance, when you initially construct a cluster profile, on what will work together nicely and what will not, right. And then on top of that, when upgrading an existing cluster to a new version of a cluster profile definition, we can look at the environment, right, to understand if there's something potentially incompatible popping up; we call that a pre-flight integration check, and also post-deployment, we allow the user to run additional conformance tests, so we make sure everything in the cluster is still acting as it's supposed to. >> Another way to explain that is that, you know, the cluster profile concept has a lot of flexibility attached to it, right? That's a lot of power; it can get you into trouble if you don't have the right safety nets and safety harnesses underneath you. So we have a multi-layered approach to helping make sure that people are getting benefit out of that flexibility. >> Wonderful, and I'm wondering, when you've had more customers using this, is there shared information, are there community guidelines that help, you know, understand when it's going to be okay? Hey, 1.19's out, we're looking at 1.20, you might want to do this, or hey, if you're using this piece of networking, you might want to wait a little bit before you go to the next version. >> That's definitely the idea over time. Folks that are engaging with us are very interested in the fact that, because we're a SaaS-based management platform today, it offers them the opportunity to learn from their peers, if you will, right, and their peers' experiences. On top of that, we also have the ability to watch just what's been going on in other deployments in the Kubernetes ecosystem, and we can make sure that all of that is available, as Tenry mentioned, you know, in the form of the metadata that's on top of those packs. >> All right, how about, how do you price this solution? 
When I look out there, I talked about Kubernetes baked into all the platforms; oftentimes, it can be baked into an ELA, it's part of, you know, my general cloud spend from that platform. So how do you do the pricing, and, you know, are you plugged into any of the cloud marketplaces yet? >> Yeah, so flexibility is really part of our DNA. So even for pricing, we want to provide the maximum flexibility to our customer. Unlike some traditional solutions, which typically are priced based on the number of pods per year, or even the number of nodes, we actually price based on the number of CPU cores of all worker nodes under management, by hour. That's what we call core-hours under management, right, and then every thousand core-hours is one unit, which we call a kilo-core-hour. So it's kind of similar to how electricity is consumed, right. This way, based on this core-hour consumption, we allow the user to either pay as you go on the on-demand plan, or you can do an annual commitment. >> And we are in process on the marketplaces. >> Yeah. >> All right, how about, we talked about Kubernetes, I think service meshes are part of it. What in this KubeCon, CloudNativeCon ecosystem, which projects are the most tied into what you're doing, anything that Spectro Cloud is particularly contributing to that you can share? >> Yeah, so our system is built on top of the Kubernetes Cluster API project. So we are one of the contributors to Cluster API; we are actively adding additional functionality to enhance Cluster API, especially in the VMware environment for some custom use cases, such as static IP or some special placement behaviors, and also adding additional contributions on different cloud support. >> Yeah, and as far as things that we're watching, clearly we've seen a dramatic increase in the number of people on our customer front that are interested in actual deployment of service mesh now. So that's something that, you know, we're going to be more engaged in over time. And another one that we're hoping to see, and check out more talks around at KubeCon, is AI/ML, right? A lot of interest on the part of customers around AI/ML use cases. >> Yeah, absolutely, edge and AI and ML. Definitely very hot topics of conversation this year at the, at the Europe show, expect that to continue. Tina, I'm wondering, do you have any customer examples, maybe even anonymized, that could kind of just explain the key values that your customers are seeing using your solution? >> Yeah, sure, so one of our earliest customers is a Canadian financial, who came to us because they were looking to figure out how to manage consistently at scale, and they have the problem that Tenry described earlier, around, I've got different development teams, they have different needs, and, you know, how do you satisfy all those guys without going crazy, right? They've got an AI/ML use case that's a special snowflake; they've got two separate teams in different groups that would like to be under an IT management umbrella. That's a convergence use case that they're looking at, so kind of a typical example of somebody that we think of as, you know, a really good set of people for us to be having conversations with. We've also been working with a telecom provider that's in a similar vein, actually: there's AI/ML, there are multiple teams with different infrastructure, and they want to be able to consistently manage it. It's a story that we're seeing over and over again, thankfully. 
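As a back-of-the-envelope illustration of the kilo-core-hour unit Tenry describes above, the short calculation below converts a small fleet of worker nodes into kilo-core-hours for a month. The node counts, core sizes, and hours are made up for the example.

```python
# Convert worker-node capacity under management into kilo-core-hours.
# 1 kilo-core-hour = 1,000 core-hours; the fleet below is hypothetical.
def kilo_core_hours(worker_nodes, cores_per_node, hours):
    return worker_nodes * cores_per_node * hours / 1000.0

# Example: 10 worker nodes with 8 cores each, managed for a 30-day month.
usage = kilo_core_hours(worker_nodes=10, cores_per_node=8, hours=30 * 24)
print(f"{usage:.1f} kilo-core-hours")   # 57.6 kilo-core-hours
```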
>> Yeah, we also see, right, at the individual group or team level, there are a lot of product owners or data scientists who really want to have kind of an easy button to quickly provision Kubernetes clusters that suit their need, right. And for a lot of these groups, their primary focus is really the application, right? It's not in their interest to spend a lot of time and resources on Kubernetes management, in terms of deploying updates or securing the operation. So through us, they can very easily spin up a Kubernetes cluster, whether it's for AI/ML or for a development experiment; they can very quickly do that, but with flexibility, because a lot of existing solutions may limit the version of the Kubernetes clusters, or may limit what kind of integrations they can do. >> Yeah, Tenry, we talked a little bit earlier about, you know, potential integration down the road. I'm curious, there are so many companies creating innovations out there; say, for example, one that I hear a lot of feedback on is that AWS now has Fargate support for their EKS offering. Is that something down the line you would look at, or do you have some guidance as to how customers should be thinking about that, and if they want that kind of functionality, how they would get it with a solution like yours? >> Yeah, actually, we really share the same vision as AWS, right. We believe that ultimately the infrastructure really should be transparent to application developers, right, and it should be boundary-less. So our goal is not only to manage Kubernetes across multiple environments, but eventually we will be able to link all these clusters together, to make them act as a single infrastructure. So developers can still use their familiar Kubernetes interface to deploy and manage their application, but without worrying about how the infrastructure underneath is operated or managed, right. So this in a way will eventually become kind of a fabric model, but across multiple clusters and multiple clouds. >> Alright, Tina, maybe if you could give us the final takeaway: people attending KubeCon, CloudNativeCon, what's the one thing that, if they have a problem, they should be coming to Spectro Cloud to hear more about? >> Yeah, sure, so what Spectro Cloud aims to do is help enterprises not have to trade off between flexibility and control of their infrastructure on one side and manageability and ease of use on the other; that's the main thing that we would like people to remember. >> All right, well, Tenry and Tina, thank you so much for sharing with our community a little bit about Spectro Cloud; great talking to you, and we look forward to hearing more in the future. >> Thanks so much. >> Thank you too. >> All right, and stay tuned for more coverage from KubeCon CloudNativeCon 2020. I'm Stu Miniman, and thank you for watching "theCUBE." (light music)

Published Date : Aug 18 2020

SUMMARY :

Brought to you by Red Hat, a lot of activity in the five years that and how you differentiate and a lot of existing solutions that the flexibility that you So again, you know, back to all of the cloud players have that allows us to go ahead and provide and we can also help you that down the road that or roll things back, but you know, what, So what it is that you require that customers, you know, stay So that make sure the cluster that is that you know, guidelines that help, you know, the ability to watch just So how do you do the So kind of similar to how on the marketplaces. that you can share? So we are one of the So that's something that you know, expect that to continue. we think of is, you know, a kind of an easy button to quickly be able Is that Something down the is the infrastructure really that stuff's that's the main, talking to you and look forward I'm Stu MiniMan and thank

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
TinaPERSON

0.99+

Tina NotlePERSON

0.99+

AWSORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

Red HatORGANIZATION

0.99+

Tina NoltePERSON

0.99+

Stu MinimanPERSON

0.99+

TenryPERSON

0.99+

Tenry FuPERSON

0.99+

Spectro CloudORGANIZATION

0.99+

CliQr TechnologiesORGANIZATION

0.99+

Stu MiniManPERSON

0.99+

VMwareORGANIZATION

0.98+

KubernetesTITLE

0.98+

oneQUANTITY

0.98+

Spectrol CloudORGANIZATION

0.98+

one unitQUANTITY

0.98+

first timeQUANTITY

0.98+

KubeConEVENT

0.97+

five yearsQUANTITY

0.97+

single sourceQUANTITY

0.97+

bothQUANTITY

0.97+

this yearDATE

0.96+

two first timeQUANTITY

0.96+

BrownfieldORGANIZATION

0.96+

one versionQUANTITY

0.96+

a yearQUANTITY

0.95+

ELATITLE

0.93+

first startupQUANTITY

0.93+

CloudNativeCon Europe 2020EVENT

0.93+

Kubecon Cloud Native Con 2020EVENT

0.92+

Day TwoQUANTITY

0.91+

TenryORGANIZATION

0.91+

todayDATE

0.91+

Spectrol cloudORGANIZATION

0.9+

two separate teamsQUANTITY

0.9+

day twoQUANTITY

0.9+

KuberneteTITLE

0.9+

one sizeQUANTITY

0.88+

single paneQUANTITY

0.85+

one thingQUANTITY

0.84+

single infrastructureQUANTITY

0.82+

every thousand core hoursQUANTITY

0.79+

cloud native conEVENT

0.78+

KubeCon CloudNativeCon Europe 2020EVENT

0.78+

KubernetesORGANIZATION

0.76+

theCUBETITLE

0.75+

firstQUANTITY

0.73+

CanadianOTHER

0.73+

Red HatTITLE

0.72+

Specter CloudTITLE

0.7+

Erin A. Boyd, Red Hat | KubeCon + CloudNativeCon NA 2019


 

>> Announcer: Live from San Diego, California, it's theCUBE, covering KubeCon + CloudNativeCon. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to the third day of wall-to-wall coverage here at Kubecon + CloudNativeCon 2019 in San Diego. I am your host for the three days of coverage, Stu Miniman. Joining me this morning is Justin Warren. And happy to welcome back to the program, Erin Boyd who's a senior principal software engineer at Red Hat. Erin, thanks so much for joining us. >> Thanks for having me. >> All right, so we had a chance to catch up in Barcelona on theCUBE there. Storage is definitely one of the faster moving areas of this ecosystem over the last two years. Why don't we start with, really, the event? So, you know, as I said, we're in day three but day zero there were a whole lot of things we had. Some of your peers at Red Hat have talked about OpenShift Commons, but storage, to my understanding had a couple of things going on. Why don't you share with our audience a little bit of that? >> Sure, so we had a SIG face-to-face for Kubernetes, it was probably one of the best attended. We had to cap the number of attendees, so about 60 different people came to talk about the future of Kubernetes in storage, and what we need to be doing to meet our customers' needs. In conjunction with that, there was a parallel session called CNS Days, which is Container Native Storage Days. That event is very customer focused, so I really enjoyed bouncing between the two of them. To go from the hypothetical, programming, architecture view, straight to what customers in the enterprise are looking at and doing, and what their real needs are. >> So from that SIG, can you actually share a little bit of where we are, where some of the requests are? We know storage is never one way to fix it, there's been some debates, there's a couple different ways to do... I mean, traditional storage, you've got block, file, and object. Cloud storage, there are more options in cloud storage today than there was, if I was to configure a server, or buy a storage array in my own data center. So where are we, what are those asks? What's on the roadmap there? >> Right, so I think for the past five years, we've been really focused on being mindful of what APIs are common across all the vendors. I think we want to ensure that we're not excluding any vendors from being part of this ecosystem. And so, with that, we've created the basis of things like persistent volumes, persistent volumes claims, storage classes to automate that, storage quotas to be able to have management and control over it. So I think now we're looking to the next evolution of... As the model's maturing, and people are actually running stateful applications on Kubernetes, we need to be addressing their needs. So things like snapshotting, eventually volume cloning, which has just gone in, and migrating. All these type of things that exist within the data plane are going to be the next evolution of things we look at in the SIG. >> Yeah, so one criticism that's been mentioned about Kubernetes a few times, that one, it's a bit complicated. But also, it didn't really deal that well with stateful sets. Stateful data management has always been, it's been a little bit lacking. That seems to have pretty much been sorted out now. As you mentioned, there's a lot more work being done on storage operators. 
But you're talking about some of these data management features that operators from other paradigms are kind of used to being there. When you're thinking about moving workloads to Kubernetes, or putting new workloads on Kubernetes, if you're unsure about, "Well, will I be able to operate this in the same way that I did things before?" How do you think people should be thinking about those kinds of data services in Kubernetes? >> So I think it's great that you mentioned operators. Because that was one of the key things when Rook came into the landscape, to be able to lower the complexity of taking something that requires physical storage and compute, geography, node selection. All those things, it helped people who were used to just the cloud model. I create a PVC, it's a request for storage, Amazon magically fulfills it. I don't know what's backing it. To be able to take these more complex storage systems and deploy them within the ecosystem, it also does a good job supporting our brownfield customers, because not every customer that's coming to Kubernetes is green. So it's important that we understand that some customers want to keep their data on-prem, maybe burst to the cloud to leverage those services, but then keep their data close to home. So operators help facilitate that. >> Yeah, Erin, I hesitate a little bit to ask this, but I'm wondering if you can do a little compare, contrast for us, for what the industry had done back in OpenStack days? When I looked at storage, every traditional storage company certified their environment for OpenStack. From a storage standpoint, it feels like a different story to me when I hear about the ecosystem of operators in Kubernetes. So I know you know this space, so maybe you can give us a little bit of what we learned in the past. What's similar, what's different? >> Right, well I think one of the benefits is we have a lot of the same key players. As you may know, OpenShift has pivoted from Gluster to Ceph, Ceph being the major storage backend for OpenStack. So we're able to take some of that technical debt, and learn our lessons from things we could improve, and apply those things within Kubernetes. I just think that it's a little slower migration, because in OpenStack, like you said, we had certification, there were different drivers. And we're trying to learn from, maybe, I wouldn't even call those mistakes, but, how can we better automate this? What can we do from an operational perspective to make it easier? >> Well I think because one of the... It felt like we were kind of taking some older models and... I'm testing it, I'm adding it. The ecosystem for operators here is different. Many of these, we're talking very much software-driven solutions. It's built for container architectures, so it's understandable that it might take a little bit longer because it's a different paradigm. >> Right, well, and I think the certification kind of... It wasn't an inhibitor but it certainly took a lot of time. And I think our take was on... We used to have all the storage providers be in-tree providers within Kubernetes. And with CSI, we have since started to redo the plugins and the sidecars, and move that out of core. So then the certification kind of falls outside of that instead of being more tightly wound into the platform. And I think it will allow us to have a lot more flexibility. Instead of waiting on each release, vendors can create operators, certify them themselves, have them in their own CSI driver, and move at the pace that they need to move. 
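Erin's "I create a PVC, it's a request for storage" workflow looks roughly like the following with the official Kubernetes Python client. The storage class name, PVC name, and size are placeholders, and the snippet assumes cluster credentials come from a local kubeconfig.

```python
# Sketch of the self-service storage request described above: a PVC against a
# pre-defined StorageClass. Names and sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()   # assumes a local kubeconfig with cluster access
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="rook-ceph-block",   # hypothetical class backed by Rook Ceph
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC created; the provisioner now fulfills the request behind the scenes")
```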
>> So how do you balance that need for Kubernetes to be a common operating platform that people can build on with each vendor's desire to provide their own unique capabilities that they think they do particularly well? That's why they charge the money that they do, because they think that theirs is the best storage ever. How do you balance that tension between the need for a standard platform and to make it interoperable, but still allowing the flexibility for people to have their own kind of innovation in there? >> So when we created the storage class, for instance, to be able to create a service level over storage, to be able to provide the provisioner that we're going to use, we made the specification of that section completely opaque. And what that allowed us to do is that when vendors wrote their provisioners, and now their CSI drivers, it allowed them to feed in different attributes of the storage that they want to leverage, that don't necessarily have to be in core Kubernetes. So it provided a huge amount of flexibility on that. The other side of that, though, is, the feedback we get from real users is "I need backup and recovery, and I need DR, and I need that across the platform." So I really think as we look to scale this out, we have to be looking at the commonalities between all storage and bringing those APIs into Kubernetes. >> One of the things I've really liked to see in this ecosystem over the last year or so, and really highlighted at this show, we're talking a lot more about workloads and applications and how those... What works today and where we're growing. Can you speak a little bit from your world as to where we are, what's working great, what customers are deploying, and a little bit, the road map of where we still need to go? >> Sure, I think workloads are key. I mean, I think that we have to focus on the actual end-to-end delivery of that, and so we have to figure out a way that we can make the data more agile, and create interfaces to really enable that, because it's very unlikely that an enterprise company is going to rely on one cloud or stay with one cloud, or want their data in one cloud. They're going to want to have the flexibility to leverage that. So as we enable those workloads, some are very complex. We started with, "Hey, I just want to containerize my application and get it running. Now I want to have some sort of state, which is persistent storage, and now I want to be able to scale that out across n number of clusters." That's where the workloads become really important. And long term, that's where we need policy to automate that. My pod goes down, I restart it, it needs to know that because of, maybe, the data that that workload's producing, it can only stay in this geographical region. >> Yeah, we talk about multicloud. You mentioned data protection, data protection is something I need to do across the board. Security is something I need to do across the board. My automation needs to take all that into account. How's Red Hat helping customers get their arms around that challenge? >> Yeah, so I think Red Hat really does take a holistic view in making sure that we provide a very consistent, secure platform. I think that's one of the things that you see when you come on to OpenShift, for instance, or OKR, that you're seeing security tightened a little bit more, to ensure that you're running in the best possible way that you can, to protect your data. 
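The "completely opaque" parameters section Erin mentions above is what lets each vendor expose its own knobs through a StorageClass. Below is a hedged sketch using the Python client; the provisioner name and parameter keys are invented stand-ins for whatever a given CSI driver actually accepts.

```python
# Sketch of a StorageClass whose parameters block carries vendor-specific,
# opaque settings. The provisioner and parameter names are invented examples.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="gold"),
    provisioner="csi.vendor.example.com",    # a hypothetical CSI driver
    parameters={                             # opaque to Kubernetes itself
        "replication": "3",
        "encrypted": "true",
        "tier": "ssd",
    },
    reclaim_policy="Delete",
    allow_volume_expansion=True,
)

storage.create_storage_class(body=sc)
print("StorageClass 'gold' created")
```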
And then, the use of Rook Ceph, for instance, Ceph provides that universal backplane, where if you're going to have encryption or anything like that, you know it's going to be the same across that. >> It sounds like there's an opportunity here for people new to Kubernetes who have been doing things in a previous way. There's a little bit of reticence from this community to understand enterprise, they're like, "Well, actually, you're kind of doing it wrong. It's slow and inflexible." There are actually a lot of lessons that we've learned in enterprise, particularly around these workloads: having security, having backup and DR. In the keynote this morning, there was a lot of discussion about the security that is in Kubernetes, and the parts where it's kind of lacking. I think there's a lot that both of these communities can learn from each other, so I'm seeing a lot of moves of late to be a little bit more welcoming to people who are coming to Kubernetes from other ecosystems. To be able to bring the ideas that they have that... We've already learned these lessons before, we can take some of that knowledge and bring it into Kubernetes to help us to do that better. Do you see Red Hat bringing a lot of that experience in its work... Red Hat's been around for quite some time now, so you've done a lot of this already. Are you bringing all of that knowledge into Kubernetes and sharing it with the ecosystem? >> Absolutely, and just like Stu pointed out, I mean, OpenStack was a big part of our evolution, and security within RHEL, and I think we absolutely should take those lessons learned and look to how we protect our customers' data, and make sure that the platform, Kubernetes itself, and as we evolve OpenShift, can provide that, and ways that we can certify that. >> Erin, you're meeting with a lot of customers. You were talking about the Day Zero thing. What's top of mind for your customers? We talk about, that Kubernetes has crossed the chasm but to get the vast majority, there's still lots of work to do. We need to, as an industry, make things simpler. What's working well, and what are some of the challenges from the customers that you've talked to? >> So I think, if you walk in, across the hall, and you see how many vendors are there, it's trying to get a handle on what I should even be doing. And as the co-lead of the CNCF Storage SIG, I think that's one of the initiatives that we take very seriously. So in addition to a storage whitepaper, we've been working on use cases that define, when should I use a data store? When should I use object? Why would I want to use file? And then really taking these real-world examples, creating use cases and actual implementations so someone can say, "Oh, that's similar to my workload." Here are some tools to accelerate understanding how to get that set up. And also creating those guard rails from an architectural standpoint. You don't want to go down this path, that's not right for your workload. So we're hoping to at least provide an education around containerized storage that'll help customers. >> Yeah, I'm just curious. I think back ten years ago, I was working for a large storage company. We were having some of these same conversations. So is it very different now in the containerized, multicloud world? Or are some of the basic decision tree discussions around block, file, and object and application the same as we might have been having a decade ago? >> I think we're starting to just touch on those, and I'm glad that you brought up object. 
That was one of the things I talked about in Barcelona, and we actually talked about at the face-to-face. To me, it's kind of the missing piece of storage today in Kubernetes, and I think we're finally starting to see that more customers are asking for that and realizing that's an important workload to be able to support at its core. So I think, yes, we're having the same conversations again, but certainly in a different context. >> Yeah, I mean, back in the day, it was, the future is object but we don't know how we'd get there. If you look behind the scenes in most public clouds, object's running a lot of what's there. All right, Erin, I want to give you the final word. KubeCon 2019, from that storage perspective. What should people watching take away? >> That we're only beginning with storage, yeah. We still have a lot of work to do, but I think it's a wonderful community and vibrant, and I think there'll be a lot of changes in the coming years. >> All right. Well, definitely a vibrant ecosystem. Erin, thank you so much for all the updates. We'll be back with more coverage here, for Justin Warren. I'm Stu Miniman. Thank you for watching theCUBE. (techno music)

Published Date : Nov 21 2019

SUMMARY :

Brought to you by Red Hat, the Cloud Native And happy to welcome back to the program, Erin Boyd to my understanding had a couple of things going on. We had to cap the number of attendees, so about 60 So from that SIG, can you actually share a little bit are going to be the next evolution of That seems to have pretty much been sorted out now. came into the landscape, to be able to lower the complexity Yeah, Erin, I hesitate a little bit to ask this, but to Ceph, Ceph being the major backer of OpenStack. It felt like we were kind of taking some older models the pace that they need to move. but still allowing the flexibility for people to that don't necessarily have to be in core Kubernetes. One of the things I've really liked to see I mean, I think that we have to focus on the actual Security is something I need to do across the board. I think that's one of the things that you see moves of late to be a little bit more welcoming take those lessons learned and look to how we do protect but to get the vast majority, So in addition to a storage whitepaper, the same as we might have been having a decade ago? and I'm glad that you brought up object. All right, Erin, I want to give you the final word. That we're only beginning with storage, yeah. Erin, thank you so much for all the updates.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Justin WarrenPERSON

0.99+

ErinPERSON

0.99+

Erin BoydPERSON

0.99+

Red HatORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

BarcelonaLOCATION

0.99+

Stu MinimanPERSON

0.99+

Cloud Native Computing FoundationORGANIZATION

0.99+

San DiegoLOCATION

0.99+

twoQUANTITY

0.99+

San Diego, CaliforniaLOCATION

0.99+

Erin A. BoydPERSON

0.99+

three daysQUANTITY

0.99+

bothQUANTITY

0.99+

KubeConEVENT

0.99+

oneQUANTITY

0.99+

third dayQUANTITY

0.98+

CNS DaysEVENT

0.98+

each releaseQUANTITY

0.98+

one cloudQUANTITY

0.98+

ten years agoDATE

0.98+

KubeconEVENT

0.98+

OpenStackTITLE

0.98+

StuPERSON

0.98+

last yearDATE

0.97+

BrownfieldORGANIZATION

0.97+

todayDATE

0.97+

KubernetesTITLE

0.97+

one wayQUANTITY

0.96+

day threeQUANTITY

0.96+

CloudNativeConEVENT

0.96+

Container Native Storage DaysEVENT

0.96+

OpenShiftTITLE

0.94+

about 60 different peopleQUANTITY

0.92+

RHELTITLE

0.92+

CloudNativeCon NA 2019EVENT

0.91+

OneQUANTITY

0.9+

each vendorQUANTITY

0.89+

CephORGANIZATION

0.89+

multicloudORGANIZATION

0.89+

a decade agoDATE

0.88+

CloudNativeCon 2019EVENT

0.88+

day zeroQUANTITY

0.87+

this morningDATE

0.86+

OKRORGANIZATION

0.84+

Rook CephORGANIZATION

0.82+

KubeCon 2019EVENT

0.82+

last two yearsDATE

0.82+

CNCF Storage SIGORGANIZATION

0.8+

one criticismQUANTITY

0.76+

OpenShift CommonsORGANIZATION

0.76+

past five yearsDATE

0.73+

SIGORGANIZATION

0.73+

coupleQUANTITY

0.68+

Nutanix .Next | NOLA | Day 1 | AM Keynote


 

>> PA Announcer: Off the plastic tab, and we'll turn on the colors. Welcome to New Orleans. ♪ This is it ♪ ♪ The part when I say I don't want ya ♪ ♪ I'm stronger than I've been before ♪ ♪ This is the part when I set your free ♪ (New Orleans jazz music) ("When the Saints Go Marching In") (rock music) >> PA Announcer: Ladies and gentleman, would you please welcome state of Louisiana chief design officer Matthew Vince and Choice Hotels director of infrastructure services Stacy Nigh. (rock music) >> Well good morning New Orleans, and welcome to my home state. My name is Matt Vince. I'm the chief design office for state of Louisiana. And it's my pleasure to welcome you all to .Next 2018. State of Louisiana is currently re-architecting our cloud infrastructure and Nutanix is the first domino to fall in our strategy to deliver better services to our citizens. >> And I'd like to second that warm welcome. I'm Stacy Nigh director of infrastructure services for Choice Hotels International. Now you may think you know Choice, but we don't own hotels. We're a technology company. And Nutanix is helping us innovate the way we operate to support our franchisees. This is my first visit to New Orleans and my first .Next. >> Well Stacy, you're in for a treat. New Orleans is known for its fabulous food and its marvelous music, but most importantly the free spirit. >> Well I can't wait, and speaking of free, it's my pleasure to introduce the Nutanix Freedom video, enjoy. ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ Ah, ah, ♪ ♪ Ah, ah, ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I'm free, I'm free, I'm free, I'm free ♪ ♪ Gritting your teeth, you hold onto me ♪ ♪ It's never enough, I'm never complete ♪ ♪ Tell me to prove, expect me to lose ♪ ♪ I push it away, I'm trying to move ♪ ♪ I'm desperate to run, I'm desperate to leave ♪ ♪ If I lose it all, at least I'll be free ♪ ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> PA Announcer: Ladies and gentlemen, please welcome chief marketing officer Ben Gibson ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> Welcome, good morning. >> Audience: Good morning. >> And welcome to .Next 2018. There's no better way to open up a .Next conference than by hearing from two of our great customers. And Matthew, thank you for welcoming us to this beautiful, your beautiful state and city. And Stacy, this is your first .Next, and I know she's not alone because guess what It's my first .Next too. And I come properly attired. In the front row, you can see my Nutanix socks, and I think my Nutanix blue suit. And I know I'm not alone. I think over 5,000 people in attendance here today are also first timers at .Next. And if you are here for the first time, it's in the morning, let's get moving. I want you to stand up, so we can officially welcome you into the fold. Everyone stand up, first time. All right, welcome. (audience clapping) So you are all joining not just a conference here. This is truly a community. This is a community of the best and brightest in our industry I will humbly say that are coming together to share best ideas, to learn what's happening next, and in particular it's about forwarding not only your projects and your priorities but your careers. There's so much change happening in this industry. 
It's an opportunity to learn what's coming down the road and learn how you can best position yourself for this whole new world that's happening around cloud computing and modernizing data center environments. And this is not just a community, this is a movement. And it's a movement that started quite awhile ago, but the first .Next conference was in the quiet little town of Miami, and there was about 800 of you in attendance or so. So who in this hall here were at that first .Next conference in Miami? Let me hear from you. (audience members cheering) Yep, well to all of you grizzled veterans of the .Next experience, welcome back. You have started a movement that has grown and this year across many different .Next conferences all over the world, over 20,000 of your community members have come together. And we like to do it in distributed architecture fashion just like here in Nutanix. And so we've spread this movement all over the world with .Next conferences. And this is surging. We're also seeing just today the current count 61,000 certifications and climbing. Our Next community, close to 70,000 active members of our online community because .Next is about this big moment, and it's about every other day and every other week of the year, how we come together and explore. And my favorite stat of all. Here today in this hall amongst the record 5,500 registrations to .Next 2018 representing 71 countries in whole. So it's a global movement. Everyone, welcome. And you know when I got in Sunday night, I was looking at the tweets and the excitement was starting to build and started to see people like Adile coming from Casablanca. Adile wherever you are, welcome buddy. That's a long trip. Thank you so much for coming and being here with us today. I saw other folks coming from Geneva, from Denmark, from Japan, all over the world coming together for this moment. And we are accomplishing phenomenal things together. Because of your trust in us, and because of some early risk candidly that we have all taken together, we've created a movement in the market around modernizing data center environments, radically simplifying how we operate in the services we deliver to our businesses everyday. And this is a movement that we don't just know about this, but the industry is really taking notice. I love this chart. This is Gartner's inaugural hyperconvergence infrastructure magic quadrant chart. And I think if you see where Nutanix is positioned on there, I think you can agree that's a rout, that's a homerun, that's a mic drop so to speak. What do you guys think? (audience clapping) But here's the thing. It says Nutanix up there. We can honestly say this is a win for this hall here. Because, again, without your trust in us and what we've accomplished together and your partnership with us, we're not there. But we are there, and it is thanks to everyone in this hall. Together we have created, expanded, and truly made this market. Congratulations. And you know what, I think we're just getting started. The same innovation, the same catalyst that we drove into the market to converge storage network compute, the next horizon is around multi-cloud. The next horizon is around whether by accident or on purpose the strong move with different workloads moving into public cloud, some into private cloud moving back and forth, the promise of application mobility, the right workload on the right cloud platform with the right economics. Economics is key here. 
If any of you have a teenager out there, and they have a hold of your credit card, and they're doing something online or the like. You get some surprises at the end of the month. And that surprise comes in the form of spiraling public cloud costs. And this isn't to say we're not going to see a lot of workloads born and running in public cloud, but the opportunity is for us to take a path that regains control over infrastructure, regain control over workloads and where they're run. And the way I look at it for everyone in this hall, it's a journey we're on. It starts with modernizing those data center environments, continues with embracing the full cloud stack and the compelling opportunity to deliver that consumer experience to rapidly offer up enterprise compute services to your internal clients, lines of businesses and then out into the market. It's then about how you standardize across an enterprise cloud environment, that you're not just the infrastructure but the management, the automation, the control, and running any tier one application. I hear this everyday, and I've heard this a lot already this week about customers who are all in with this approach and running those tier one applications on Nutanix. And then it's the promise of not only hyperconverging infrastructure but hyperconverging multiple clouds. And if we do that, this journey the way we see it what we are doing is building your enterprise cloud. And your enterprise cloud is about the private cloud. It's about expanding and managing and taking back control of how you determine what workload to run where, and to make sure there's strong governance and control. And you're radically simplifying what could be an awfully complicated scenario if you don't reclaim and put your arms around that opportunity. Now how do we do this different than anyone else? And this is going to be a big theme that you're going to see from my good friend Sunil and his good friends on the product team. What are we doing together? We're taking all of that legacy complexity, that friction, that inability to be able to move fast because you're chained to old legacy environments. I'm talking to folks that have applications that are 40 years old, and they are concerned to touch them because they're not sure if they can react if their infrastructure can meet the demands of a new, modernized workload. We're making all that complexity invisible. And if all of that is invisible, it allows you to focus on what's next. And that indeed is the spirit of this conference. So if the what is enterprise cloud, and the how we do it different is by making infrastructure invisible, data centers, clouds, then why are we all here today? What is the binding principle that spiritually, that emotionally brings us all together? And we think it's a very simple, powerful word, and that word is freedom. And when we think about freedom, we think about as we work together the freedom to build the data center that you've always wanted to build. It's about freedom to run the applications where you choose based on the information and the context that wasn't available before. It's about the freedom of choice to choose the right cloud platform for the right application, and again to avoid a lot of these spiraling costs in unanticipated surprises whether it be around security, whether it be around economics or governance that come to the forefront. It's about the freedom to invent. It's why we got into this industry in the first place. We want to create. 
We want to build things not keep the lights on, not be chained to mundane tasks day by day. And it's about the freedom to play. And I hear this time and time again. My favorite tweet from a Nutanix customer to this day is just updated a lot of nodes at 38,000 feed on United Wifi, on my way to spend vacation with my family. Freedom to play. This to me is emotionally what brings us all together and what you saw with the Freedom video earlier, and what you see here is this new story because we want to go out and spread the word and not only talk about the enterprise cloud, not only talk about how we do it better, but talk about why it's so compelling to be a part of this hall here today. Now just one note of housekeeping for everyone out there in case I don't want anyone to take a wrong turn as they come to this beautiful convention center here today. A lot of freedom going on in this convention center. As luck may have it, there's another conference going on a little bit down that way based on another high growth, disruptive industry. Now MJBizCon Next, and by coincidence it's also called next. And I have to admire the creativity. I have to admire that we do share a, hey, high growth business model here. And in case you're not quite sure what this conference is about. I'm the head of marketing here. I have to show the tagline of this. And I read the tagline from license to launch and beyond, the future of the, now if I can replace that blank with our industry, I don't know, to me it sounds like a new, cool Sunil product launch. Maybe launching a new subscription service or the like. Stay tuned, you never know. I think they're going to have a good time over there. I know we're going to have a wonderful week here both to learn as well as have a lot of fun particularly in our customer appreciation event tonight. I want to spend a very few important moments on .Heart. .Heart is Nutanix's initiative to promote diversity in the technology arena. In particular, we have a focus on advancing the careers of women and young girls that we want to encourage to move into STEM and high tech careers. You have the opportunity to engage this week with this important initiative. Please role the video, and let's learn more about how you can do so. >> Video Plays (electronic music) >> So all of you have received these .Heart tokens. You have the freedom to go and choose which of the four deserving charities can receive donations to really advance our cause. So I thank you for your engagement there. And this community is behind .Heart. And it's a very important one. So thank you for that. .Next is not the community, the moment it is without our wonderful partners. These are our amazing sponsors. Yes, it's about sponsorship. It's also about how we integrate together, how we innovate together, and we're about an open community. And so I want to thank all of these names up here for your wonderful sponsorship of this event. I encourage everyone here in this room to spend time, get acquainted, get reacquainted, learn how we can make wonderful music happen together, wonderful music here in New Orleans happen together. .Next isn't .Next with a few cool surprises. Surprise number one, we have a contest. This is a still shot from the Freedom video you saw right before I came on. We have strategically placed a lucky seven Nutanix Easter eggs in this video. And if you go to Nutanix.com/freedom, watch the video. You may have to use the little scrubbing feature to slow down 'cause some of these happen quickly. 
You're going to find some fun, clever Easter eggs. List all seven, tweet that out, or as many as you can, tweet that out with hashtag nextconf, C, O, N, F, and we'll have a random drawing for an all expenses paid free trip to .Next 2019. And just to make sure everyone understands Easter egg concept. There's an eighth one here that's actually someone that's quite famous in our circles. If you see on this still shot, there's someone in the back there with a red jacket on. That's not just anyone. We're targeting in here. That is our very own Julie O'Brien, our senior vice president of corporate marketing. And you're going to hear from Julie later on here at .Next. But Julie and her team are the engine and the creativity behind not only our new Freedom campaign but more importantly everything that you experience here this week. Julie and her team are amazing, and we can't wait for you to experience what they've pulled together for you. Another surprise, if you go and visit our Freedom booths and share your stories. So they're like video booths, you share your success stories, your partnerships, your journey that I talked about, you will be entered to win a beautiful Nutanix brand compliant, look at those beautiful colors, bicycle. And it's not just any bicycle. It's a beautiful bicycle made by our beautiful customer Trek. I actually have a Trek bike. I love cycling. Unfortunately, I'm not eligible, but all of you are. So please share your stories in the Freedom Nutanix's booths and put yourself in the running, or in the cycling to get this prize. One more thing I wanted to share here. Yesterday we had a great time. We had our inaugural Nutanix hackathon. This hackathon brought together folks that were in devops practices, many of you that are in this room. We sold out. We thought maybe we'd get four or five teams. We had to shutdown at 14 teams that were paired together with a Nutanix mentor, and you coded. You used our REST APIs. You built new apps that integrated in with Prism and Clam. And it was wonderful to see this. Everyone I talked to had a great time on this. We had three winners. In third place, we had team Copper or team bronze, but team Copper. Silver, Not That Special, they're very humble kind of like one of our key mission statements. And the grand prize winner was We Did It All for the Cookies. And you saw them coming in on our Mardi Gras float here. We Did It All for Cookies, they did this very creative job. They leveraged an Apple Watch. They were lighting up VMs at a moments notice utilizing a lot of their coding skills. Congratulations to all three, first, second, and third all receive $2,500. And then each of them, then were able to choose a charity to deliver another $2,500 including Ronald McDonald House for the winner, we did it all for the McDonald Land cookies, I suppose, to move forward. So look for us to do more of these kinds of events because we want to bring together infrastructure and application development, and this is a great, I think, start for us in this community to be able to do so. With that, who's ready to hear form Dheeraj? You ready to hear from Dheeraj? (audience clapping) I'm ready to hear from Dheeraj, and not just 'cause I work for him. It is my distinct pleasure to welcome on the stage our CEO, cofounder and chairman Dheeraj Pandey. ("Free" by Broods) ♪ Hallelujah, I'm free ♪ >> Thank you Ben and good morning everyone. >> Audience: Good morning. >> Thank you so much for being here. 
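For readers curious what those hackathon teams were coding against, the sketch below shows the general shape of driving a VM power action through a Prism-style REST endpoint from Python. The host name, credentials, and VM UUID are placeholders, and the exact path and payload can differ by API version, so treat this as a minimal illustration of the pattern rather than a verified call.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder Prism address, credentials, and VM UUID -- not real values.
PRISM = "https://prism.example.local:9440"
AUTH = HTTPBasicAuth("admin", "secret")

def power_on_vm(vm_uuid: str) -> dict:
    """Ask Prism to transition a VM to the ON state (v2.0-style endpoint assumed)."""
    url = f"{PRISM}/api/nutanix/v2.0/vms/{vm_uuid}/set_power_state"
    # verify=False only because a demo cluster typically uses a self-signed certificate.
    resp = requests.post(url, json={"transition": "ON"}, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()  # typically returns a task reference you can poll

if __name__ == "__main__":
    print(power_on_vm("11111111-2222-3333-4444-555555555555"))
```

Wrap a call like that behind a small web handler and it becomes easy to see how a watch app, a chat bot, or any other front end could "light up VMs at a moment's notice."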
It's just such an elation when I'm thinking about the Mardi Gras crowd that came here, the partners, the customers, the NTCs. I mean there's some great NTCs up there I could relate to because they're on Slack as well. How many of you are in Slack Nutanix internal Slack channel? Probably 5%, would love to actually see this community grow from here 'cause this is not the only even we would love to meet you. We would love to actually do this in a real time bite size communication on our own internal Slack channel itself. Now today, we're going to talk about a lot of things, but a lot of hard things, a lot of things that take time to build and have evolved as the industry itself has evolved. And one of the hard things that I want to talk about is multi-cloud. Multi-cloud is a really hard problem 'cause it's full of paradoxes. It's really about doing things that you believe are opposites of each other. It's about frictionless, but it's also about governance. It's about being simple, and it's also about being secure at the same time. It's about delight, it's about reducing waste, it's about owning, and renting, and finally it's also about core and edge. How do you really make this big at a core data center whether it's public or private? Or how do you really shrink it down to one or two nodes at the edge because that's where your machines are, that's where your people are? So this is a really hard problem. And as you hear from Sunil and the gang there, you'll realize how we've actually evolved our solutions to really cater to some of these. One of the approaches that we have used to really solve some of these hard problems is to have machines do more, and I said a lot of things in those four words, have machines do more. Because if you double-click on that sentence, it really means we're letting design be at the core of this. And how do you really design data centers, how do you really design products for the data center that hush all the escalations, the details, the complexities, use machine-learning and AI and you know figure our anomaly detection and correlations and patter matching? There's a ton of things that you need to do to really have machines do more. But along the way, the important lesson is to make machines invisible because when machines become invisible, it actually makes something else visible. It makes you visible. It makes governance visible. It makes applications visible, and it makes services visible. A lot of things, it makes teams visible, careers visible. So while we're really talking about invisibility of machines, we're talking about visibility of people. And that's how we really brought all of you together in this conference as well because it makes all of us shine including our products, and your careers, and your teams as well. And I try to define the word customer success. You know it's one of the favorite words that I'm actually using. We've just hired a great leader in customer success recently who's really going to focus on this relatively hard problem, yet another hard problem of customer success. We think that customer success, true customer success is possible when we have machines tend towards invisibility. But along the way when we do that, make humans tend towards freedom. So that's the real connection, the yin-yang of machines and humans that Nutanix is really all about. And that's why design is at the core of this company. And when I say design, I mean reducing friction. And it's really about reducing friction. 
And everything we do, the most mundane of things which could be about migrating applications, spinning up VMs, self-service portals, automatic upgrades, and automatic scale out, and all the things we do is about reducing friction which really makes machines become invisible and humans gain freedom. Now one of the other convictions we have is how all of us are really tied at the hip. You know our success is tied to your success. If we make you successful, and when I say you, I really mean Main Street. Main Street being customers, and partners, and employees. If we make all of you successful, then we automatically become successful. And very coincidentally, Main Street and Wall Street are also tied in that very same relation as well. If we do a great job at Main Street, I think the Wall Street customer, i.e. the investor, will take care of itself. You'll have you know taken care of their success if we took care of Main Street success itself. And that's the narrative that our CFO Dustin Williams actually went and painted to our Wall Street investors two months ago at our investor day conference. We talked about a $3 billion number. We said look as a company, as a software company, we can go and achieve $3 billion in billings three years from now. And it was a telling moment for the company. It was really about talking about where we could be three years from now. But it was not based on a hunch. It was based on what we thought was customer success. Now realize that $3 billion in pure software. There's only 10 to 15 companies in the world that actually have that kind of software billings number itself. But at the core of this confidence was customer success, was the fact that we were doing a really good job of not over promising and under delivering but under promising starting with small systems and growing the trust of the customers over time. And this is one of the statistics we actually talk about is repeat business. The first dollar that a Global 2000 customer spends in Nutanix, and if we go and increase their trust 15 times by year six, and we hope to actually get 17 1/2 and 19 times more trust in the years seven and eight. It's very similar numbers for non Global 2000 as well. Again, we go and really hustle for customer success, start small, have you not worry about paying millions of dollars upfront. You know start with systems that pay as they grow, you pay as they grow, and that's the way we gain trust. We have the same non Global 2000 pay $6 1/2 for the first dollar they've actually spent on us. And with this, I think the most telling moment was when Dustin concluded. And this is key to this audience here as well. Is how the current cohorts which is this audience here and many of them were not here will actually carry the weight of $3 billion, more than 50% of it if we did a great job of customer success. If we were humble and honest and we really figured out what it meant to take care of you, and if we really understood what starting small was and having to gain the trust with you over time, we think that more than 50% of that billings will actually come from this audience here without even looking at new logos outside. So that's the trust of customer success for us, and it takes care of pretty much every customer not just the Main Street customer. It takes care of Wall Street customer. It takes care of employees. It takes care of partners as well. Now before I talk about technology and products, I want to take a step back 'cause many of you are new in this audience. 
And I think that it behooves us to really talk about the history of this company. Like we've done a lot of things that started out as science projects. In fact, I see some tweets out there and people actually laugh at Nutanix cloud. And this is where we were in 2012. So if you take a step back and think about where the company was almost seven, eight years ago, we were up against giants. There was a $30 billion industry around network attached storage, and storage area networks and blade servers, and hypervisors, and systems management software and so on. So what did we start out with? Very simple premise that we will collapse the architecture of the data center because three tier is wasteful and three tier is not delightful. It was a very simple hunch, we said we'll take rack mount servers, we'll put a layer of software on top of it, and that layer of software back then only did storage. It didn't do networks and security, and it ran on top of a well known hypervisor from VMware. And we said there's one non negotiable thing. The fact that the design must change. The control plane for this data center cannot be the old control plane. It has to be rethought through, and that's why Prism came about. Now we went and hustled hard to add more things to it. We said we need to make this diverse because it can't just be for one application. We need to make it CPU heavy, and memory heavy, and storage heavy, and flash heavy and so on. And we built a highly configurable HCI. Now all of them are actually configurable as you know of today. And this was not just innovation in technologies, it was innovation in business and sizing, capacity planning, quote to cash business processes. A lot of stuff that we had to do to make this highly configurable, so you can really scale capacity and performance independent of each other. Then in 2014, we did something that was very counterintuitive, but we've done this on, and on, and on again. People said why are you disrupting yourself? You know you've been doing a good job of shipping appliances, but we also had the conviction that HCI was not about hardware. It was about a form factor, but it was really about an operating system. And we started to compete with ourselves when we said you know what we'll do arm's length distribution, we'll do arm's length delivery of products when we give our software to our Dell partner, to Dell as a partner, a loyal partner. But at the same time, it was actually seen with a lot of skepticism. You know these guys are wondering how to really make themselves vanish because they're competing with themselves. But we also knew that if we didn't compete with ourselves someone else will. Now one of the most controversial decisions was really going and doing yet another hypervisor. In the year 2015, it was really preposterous to build yet another hypervisor. It was a very mature market. This was coming probably 15 years too late to the market, or at least 10 years too late to market. And most people said it shouldn't be done because hypervisor is a commodity. And that's the word we latched on to. That this commodity should not have to be paid for. It shouldn't have a team of people managing it. It should actually be part of your overall stack, but it should be invisible. Just like storage needs to be invisible, virtualization needs to be invisible. But it was a bold step, and I think you know at least when we look at our current numbers, 1/3rd of our customers are actually using AHV. 
At least every quarter that we look at it, our new deployments, at least 35% of it is actually being used on AHV itself. And again, a very preposterous thing to have said five years ago, four years ago to where we've actually come. Thank you so much for all of you who've believed in the fact that virtualization software must be invisible and therefore we should actually try out something that is called AHV today. Now we went and added Lenovo to our OEM mix, started to become even more of a software company in the year 2016. Went and added HP and Cisco in some of very large deals that we talk about in earnings call, our HP deals and Cisco deals. And some very large customers who have procured ELAs from us, enterprise license agreements from us where they want to mix and match hardware. They want to mix Dell hardware with HP hardware but have common standard Nutanix entitlements. And finally, I think this was another one of those moments where we say why should HCI be only limited to X86. You know this operating systems deserves to run on a non X86 architecture as well. And that gave birth to this idea of HCI and Power Systems from IBM. And we've done a great job of really innovating with them in the last three, four quarters. Some amazing innovation that has come out where you can now run AIX 7.x on Nutanix. And for the first time in the history of data center, you can actually have a single software not just a data plane but a control plane where you can manage an IBM farm, an Power farm, and open Power farm and an X86 farm from the same control plane and have you know the IBM farm feed storage to an Intel compute farm and vice versa. So really good things that we've actually done. Now along the way, something else was going on while we were really busy building the private cloud, we knew there was a new consumption model on computing itself. People were renting computing using credit cards. This is the era of the millennials. They were like really want to bypass people because at the end of the day, you know why can't computing be consumed the way like eCommerce is? And that devops movement made us realize that we need to add to our stack. That stack will now have other computing clouds that is AWS and Azure and GCP now. So similar to the way we did Prism. You know Prism was really about going and making hypervisors invisible. You know we went ahead and said we'll add Calm to our portfolio because Calm is now going to be what Prism was to us back when we were really dealing with multi hypervisor world. Now it's going to be multi-cloud world. You know it's one of those things we had a gut around, and we really come to expect a lot of feedback and real innovation. I mean yesterday when we had the hackathon. The center, the epicenter of the discussion was Calm, was how do you automate on multiple clouds without having to write a single line of code? So we've come a long way since the acquisition of Calm two years ago. I think it's going to be a strong pillar in our overall product portfolio itself. Now the word multi-cloud is going to be used and over used. In fact, it's going to be blurring its lines with the idea of hyperconvergence of clouds, you know what does it mean. We just hope that hyperconvergence, the way it's called today will morph to become hyperconverged clouds not just hyperconverged boxes which is a software defined infrastructure definition itself. But let's focus on the why of multi-cloud. Why do we think it can't all go into a public cloud itself? 
The one big reason is just laws of the land. There's data sovereignty and computing sovereignty, regulations and compliance because of which you need to be in where the government with the regulations where the compliance rules want you to be. And by the way, that's just one reason why the cloud will have to disperse itself. It can't just be 10, 20 large data centers around the world itself because you have 200 plus countries and half of computing actually gets done outside the US itself. So it's a really important, very relevant point about the why of multi-cloud. The second one is just simple laws of physics. You know if there're machines at the edge, and they're producing so much data, you can't bring all the data to the compute. You have to take the compute which is stateless, it's an app. You take the app to where the data is because the network is the enemy. The network has always been the enemy. And when we thought we've made fatter networks, you've just produced more data as well. So this just goes without saying that you take something that's stateless that's without gravity, that's lightweight which is compute and the application and push it close to where the data itself is. And the third one which is related is just latency reasons you know? And it's not just about machine latency and electrons transferring over the speed light, and you can't defy the speed of light. It's also about human latency. It's also about multiple teams saying we need to federate and delegate, and we need to push things down to where the teams are as opposed to having to expect everybody to come to a very large computing power itself. So all the ways, the way they are, there will be at least three different ways of looking at multi-cloud itself. There's a centralized core cloud. We all go and relate to this because we've seen large data centers and so on. And that's the back office workhorse. It will crunch numbers. It will do processing. It will do a ton of things that will go and produce results for you know how we run our businesses, but there's also the dispersal of the cloud, so ROBO cloud. And this is the front office server that's really serving. It's a cloud that's going to serve people. It's going to be closer to people, and that's what a ROBO cloud is. We have a ton of customers out here who actually use Nutanix and the ROBO environments themselves as one node, two node, three node, five node servers, and it just collapses the entire server closet room in these ROBOs into something really, really small and minuscule. And finally, there's going to be another dispersed edge cloud because that's where the machines are, that's where the data is. And there's going to be an IOT machine fog because we need to miniaturize computing to something even smaller, maybe something that can really land in the palm in a mini server which is a PC like server, but you need to run everything that's enterprise grade. You should be able to go and upgrade them and monitor them and analyze them. You know do enough computing up there, maybe event-based processing that can actually happen. In fact, there's some great innovation that we've done at the edge with IOTs that I'd love for all of you to actually attend some sessions around as well. So with that being said, we have a hole in the stack. And that hole is probably one of the hardest problems that we've been trying to solve for the last two years. And Sunil will talk a lot about that. This idea of hybrid. The hybrid of multi-cloud is one of the hardest problems. 
Why? Because we're talking about really blurring the lines with owning and renting where you have a single-tenant environment which is your data center, and a multi-tenant environment which is the service providers data center, and the two must look like the same. And the two must look like the same is that hard a problem not just for burst out capacity, not just for security, not just for identity but also for networks. Like how do you blur the lines between networks? How do you blur the lines for storage? How do you really blur the lines for a single pane of glass where you can think of availability zones that look highly symmetric even though they're not because one of 'em is owned by you, and it's single-tenant. The other one is not owned by you, that's multi-tenant itself. So there's some really hard problems in hybrid that you'll hear Sunil talk about and the team. And some great strides that we've actually made in the last 12 months of really working on Xi itself. And that completes the picture now in terms of how we believe the state of computing will be going forward. So what are the must haves of a multi-cloud operating system? We talked about marketplace which is catalogs and automation. There's a ton of orchestration that needs to be done for multi-cloud to come together because now you have a self-service portal which is providing an eCommerce view. It's really about you know getting to do a lot of requests and workflows without having people come in the way, without even having tickets. There's no need for tickets if you can really start to think like a self-service portal as if you're just transacting eCommerce with machines and portals themselves. Obviously the next one is networking security. You need to blur the lines between on-prem and off-prem itself. These two play a huge role. And there's going to be a ton of details that you'll see Sunil talk about. But finally, what I want to focus on the rest of the talk itself here is what governance and compliance. This is a hard problem, and it's a hard problem because things have evolved. So I'm going to take a step back. Last 30 years of computing, how have consumption models changed? So think about it. 30 years ago, we were making decisions for 10 plus years, you know? Mainframe, at least 10 years, probably 20 plus years worth of decisions. These were decisions that were extremely waterfall-ish. Make 10s of millions of dollars worth of investment for a device that we'd buy for at least 10 to 20 years. Now as we moved to client-server, that thing actually shrunk. Now you're talking about five years worth of decisions, and these things were smaller. So there's a little bit more velocity in our decisions. We were not making as waterfall-ish decision as we used to with mainframes. But still five years, talk about virtualized, three tier, maybe three to five year decisions. You know they're still relatively big decisions that we were making with computer and storage and SAN fabrics and virtualization software and systems management software and so on. And here comes Nutanix, and we said no, no. We need to make it smaller. It has to become smaller because you know we need to make more agile decisions. We need to add machines every week, every month as opposed to adding you know machines every three to five years. And we need to be able to upgrade them, you know any point in time. You can do the upgrades every month if you had to, every week if you had to and so on. So really about more agility. 
And yet, we were not complete because there's another evolution going on, off-prem in the public cloud where people are going and doing reserved instances. But more than that, they were doing on demand stuff which no the decision was days to weeks. Some of these things that unitive compute was being rented for days to weeks, not years. And if you needed something more, you'd shift a little to the left and use reserved instances. And then spot pricing, you could do spot pricing for hours and finally lambda functions. Now you could to function as a service where things could actually be running only for minutes not even hours. So as you can see, there's a wide spectrum where when you move to the right, you get more elasticity, and when you move to the left, you're talking about predictable decision making. And in fact, it goes from minutes on one side to 10s of years on the other itself. And we hope to actually go and blur the lines between where NTNX is today where you see Nutanix right now to where we really want to be with reserved instances and on demand. And that's the real ask of Nutanix. How do you take care of this discontinuity? Because when you're owning things, you actually end up here, and when you're renting things, you end up here. What does it mean to really blur the lines between these two because people do want to make decisions that are better than reserved instance in the public cloud. We'll talk about why reserved instances which looks like a proxy for Nutanix it's still very, very wasteful even though you might think it's delightful, it's very, very wasteful. So what does it mean for on-prem and off-prem? You know you talk about cost governance, there's security compliance. These high velocity decisions we're actually making you know where sometimes you could be right with cost but wrong on security, but sometimes you could be right in security but wrong on cost. We need to really figure out how machines make some of these decisions for us, how software helps us decide do we have the right balance between cost, governance, and security compliance itself? And to get it right, we have introduced our first SAS service called Beam. And to talk more about Beam, I want to introduce Vijay Rayapati who's the general manager of Beam engineering to come up on stage and talk about Beam itself. Thank you Vijay. (rock music) So you've been here a couple of months now? >> Yes. >> At the same time, you spent the last seven, eight years really handling AWS. Tell us more about it. >> Yeah so we spent a lot of time trying to understand the last five years at Minjar you know how customers are really consuming in this new world for their workloads. So essentially what we tried to do is understand the consumption models, workload patterns, and also build algorithms and apply intelligence to say how can we lower this cost and you know improve compliance of their workloads.? And now with Nutanix what we're trying to do is how can we converge this consumption, right? Because what happens here is most customers start with on demand kind of consumption thinking it's really easy, but the total cost of ownership is so high as the workload elasticity increases, people go towards spot or a scaling, but then you need a lot more automation that something like Calm can help them. But predictability of the workload increases, then you need to move towards reserved instances, right to lower costs. 
>> And those are some of the things that you go and advise with some of the software that you folks have actually written. >> But there's a lot of waste even in the reserved instances because what happens is, while customers make these commitments for a year or three years, what we see across, like we track a billion dollars in public cloud consumption you know as Beam, and customers use 20%, 25% of their commitments, right? So how can you really take the data of consumption and, you know, apply intelligence to essentially reduce their overall cost of ownership? >> You said something that's very telling. You said reserved instances, even though they're supposed to save, are still only 20%, 25% utilized. >> Yes, because the workloads are very dynamic. And the next thing is you can't do hot add CPU or hot add memory because you're buying them for peak capacity. There is no convergence of scaling there, apart from scaling out as another node. >> So you actually sized it for peak, but then using 20%, 30%, you're still paying for the peak. >> That's right. >> Dheeraj: That can actually add up. >> That's what we're trying to say. How can we deliver visibility across clouds? You know, how can we deliver optimization across clouds and consumption models and bring the control while retaining that agility and demand elasticity? >> That's great. So you want to show us something? >> Yeah absolutely. So this is Beam, as Dheeraj just outlined, our first SAS service. And this is my first .Next. And you know, glad to be here. So what you see here is the global consumption, you know, for a business across different clouds, whether that's in a public cloud like Amazon, or Azure, or Nutanix. We kind of bring the consumption together for the month, the recent month, across your accounts and services and apply intelligence to say, you know, what is your spend efficiency across these clouds? Essentially there's a lot of intelligence that goes in to detect your workloads and consumption model to say, if you're spending $100, how efficiently are you spending? How can you increase that? >> So you have a centralized view where you're looking at multiple clouds, and you know, you talk about... maybe you can take an example of an account and start looking at it? >> Yes, let's go into a cloud provider, like you know, for this business, let's go and take a look at what's happening inside an Amazon cloud. Here we get into the deeper details of what's happening with the consumption of specific services as well as the utilization of both on demand and RI. You know, what can you do to lower your cost and detect your spend efficiency of a dollar, to see, you know, are there resources that are provisioned by teams for applications that are not being used, or are there resources that we should go and rightsize, because you know we have all this monitoring data, configuration data that we crunch through to basically detect this? >> You think there's billions of events that you look at every day. You're already looking at a billion dollars worth of AWS spend. >> Right, right. >> So billions of events, billing, metering events every year to really figure out and optimize for them. >> So what we have here is a very popular international government organization. >> Dheeraj: Wow, so it looks like Russians are everywhere, the cloud is everywhere actually. >> Yes, it's quite popular. So when you bring your master account into Beam, we kind of detect all the linked accounts you know under that.
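To make that reserved-instance waste point concrete, here is a back-of-the-envelope sketch in Python. The dollar figures are invented for illustration and are not Beam's pricing data or algorithm; the only point is that a commitment utilized at 20% to 25% can easily cost more per useful hour than on-demand capacity.

```python
# Rough sketch of the reserved-instance waste math described above.
# All numbers are illustrative, not Beam's actual pricing or algorithm.

def effective_hourly_rate(ri_hourly_cost: float, utilization: float) -> float:
    """Cost per *useful* hour when only part of the commitment is consumed."""
    return ri_hourly_cost / utilization

on_demand = 0.10    # $/hr for an on-demand instance (illustrative)
reserved = 0.06     # $/hr effective rate of a one-year reservation (illustrative)
utilization = 0.25  # 25% of the committed hours actually used

print(f"On demand:            ${on_demand:.3f} per useful hour")
print(f"Reserved @ 100% used: ${reserved:.3f} per useful hour")
print(f"Reserved @ 25% used:  ${effective_hourly_rate(reserved, utilization):.3f} per useful hour")
# At 25% utilization the 'cheaper' reservation works out to $0.24 per useful hour,
# more than double the on-demand rate -- exactly the waste Beam tries to surface.
```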
Then you can go and take a look not just at the organization level but, within it, at an account level. >> So these are child objects, you know. >> That's right. >> You can think of them as ephemeral accounts that you create because you don't want to be on the record when you're doing spam on Facebook, for example. >> Right, let's go and take a look at what's happening inside a Facebook ad spend account. So we have, you know, consumption of the services. Let's go deeper into compute consumption, and you kind of see a trendline. You can do a lot of computing. As you see, looks like one campaign has ended. They started another campaign. >> Dheeraj: It looks like they're not stopping yet, man. There's a lot of money being made in Facebook right now. (Vijay laughing) >> So not only do you get visibility at, you know, compute as a service inside a cloud provider, you can go deeper inside compute and say, you know, what is the service that I'm really consuming inside compute along with the CPUs and stuff, right? What is my data transfer? You know, what is my network? What are my load balancers? So essentially you get much deeper visibility, you know, as a service, right? Because we have three goals for Beam. How can we deliver visibility across clouds? How can we deliver visibility across services? And how can we then deliver optimization? >> Well I think one thing that I just want to point out is how this SAS application was an extremely teachable moment for me to learn about the different resources that people could use in the public cloud. So for all of you who actually have not gone deep enough into the idea of public cloud, this could be a great app for you to learn about things, the resources, you know, things that you could do to save, and security, and things of that nature. >> Yeah. And we really believe in creating the single pane view, you know, to manage your optimization of a public cloud. You know, as Ben spoke about, as a business you need to have freedom to use any cloud. And that's what Beam delivers. How can you make the right decision for the right workload to use any of the cloud of your choice? >> Dheeraj: How 'about databases? You talked about compute as well, but are there other things we could look at? >> Vijay: Yes, let's go and take a look at database consumption. What you see here is, inside the Facebook ad spend account, they're using all databases except Oracle. >> Dheeraj: Wow, looks like Oracle sales folks have been active in Russia as well. (Vijay laughing) >> So what we're seeing here is a global view of, you know, what is your spend efficiency, which is kind of a scorecard for your business for the dollars that you're spending. And the great thing is Beam kind of brings it together, you know, through its intelligence and algorithms, to detect, you know, how can you rightsize resources and how can you eliminate things that you're not using? And we deliver a one-click fix, right? Let's go and take a look at resources that are maybe provisioned for storage and not being used. We deliver the seamless one-click philosophy that Nutanix has to eliminate it. >> So one click, you can actually just pick some of these wasteful things that might be looking delightful because using public cloud, using credit cards, you can go in and just say click fix, and it takes care of things. >> Yeah, and not only remove the resources that are unused, but it can go and rightsize resources across your compute, databases, load balancers, even PaaS services, right?
And this is where the power of it kind of comes for a business whether you're using on-prem and off-prem. You know how can you really converge that consumption across both? >> Dheeraj: So do you have something for Nutanix too? >> Vijay: Yes, so we have basically been working on Nutanix with something that we're going to deliver you know later this year. As you can see here, we're bringing together the consumption for the Nutanix, you know the services that you're using, the licensing and capacity that is available. And how can you also go and optimize within Nutanix environments >> That's great. >> for the next workload. Now let me quickly show you what we have on the compliance side. This is an extremely powerful thing that we've been working on for many years. What we deliver here just like in cost governance, a global view of your compliance across cloud providers. And the most powerful thing is you can go into a cloud provider, get the next level of visibility across cloud regimes for hundreds of policies. Not just policies but those policies across different regulatory compliances like HIPA, PCI, CAS. And that's very powerful because-- >> So you're saying a lot of what you folks have done is codified these compliance checks in software to make sure that people can sleep better at night knowing that it's PCI, and HIPA, and all that compliance actually comes together? >> And you can build this not just by cloud accounts, you can build them across cloud accounts which is what we call security centers. Essentially you can go and take a deeper look at you know the things. We do a whole full body scan for your cloud infrastructure whether it's AWS Amazon or Azure, and you can go and now, again, click to fix things. You know that had been probably provisioned that are violating the security compliance rules that should be there. Again, we have the same one-click philosophy to say how can you really remove things. >> So again, similar to save, you're saying you can go and fix some of these security issues by just doing one click. >> Absolutely. So the idea is how can we give our people the freedom to get visibility and use the right cloud and take the decisions instantly through one click. That's what Beam delivers you know today. And you know get really excited, and it's available at beam.nutanix.com. >> Our first SAS service, ladies and gentleman. Thank you so much for doing this, Vijay. It looks like there's going to be a talk here at 10:30. You'll talk more about the midterm elections there probably? >> Yes, so you can go and write your own security compliances as well. You know within Beam, and a lot of powerful things you can do. >> Awesome, thank you so much, Vijay. I really appreciate it. (audience clapping) So as you see, there's a lot of work that we're doing to really make multi-cloud which is a hard problem. You know think about working the whole body of it and what about cost governance? What about security compliance? Obviously what about hybrid networks, and security, and storage, you know compute, many of the things that you've actually heard from us, but we're taking it to a level where the business users can now understand the implications. A CFO's office can understand the implications of waste and delight. So what does customer success mean to us? You know again, my favorite word in a long, long time is really go and figure out how do you make you, the customer, become operationally efficient. 
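As a rough illustration of what "codifying compliance checks in software" can look like, here is a minimal, hypothetical rule engine in Python. The resource model, the two rules, and the HIPAA/PCI labels are invented for this sketch and are not Beam's actual policy definitions or its full-body scan.

```python
# A minimal sketch of a codified compliance rule, assuming an invented
# resource model; a real policy engine evaluates hundreds of such rules.

from dataclasses import dataclass

@dataclass
class StorageBucket:
    name: str
    public_read: bool
    encrypted_at_rest: bool

def check_bucket(bucket: StorageBucket) -> list:
    """Return human-readable violations for one resource."""
    violations = []
    if bucket.public_read:
        violations.append(f"{bucket.name}: publicly readable (PCI-style rule)")
    if not bucket.encrypted_at_rest:
        violations.append(f"{bucket.name}: not encrypted at rest (HIPAA-style rule)")
    return violations

inventory = [
    StorageBucket("ad-assets", public_read=True, encrypted_at_rest=True),
    StorageBucket("patient-exports", public_read=False, encrypted_at_rest=False),
]
for b in inventory:
    for v in check_bucket(b):
        print("FIX:", v)  # a real tool would offer the one-click remediation here
```

The "one-click fix" in the demo is essentially the remediation step attached to each rule, applied to whatever the scan flags.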
You know there's a lot of stuff that we deliver through software that's completely uncovered. It's so latent, you don't even know you have it, but you've paid for it. So you've got to figure out what does it mean for you to really become operationally efficient, organizationally proficient. And it's really important for training, education, stuff that you know you're people might think it's so awkward to do in Nutanix, but it could've been way simpler if you just told you a place where you can go and read about it. Of course, I can just use one click here as opposed to doing things the old way. But most importantly to make it financially accountable. So the end in all this is, again, one of the things that I think about all the time in building this company because obviously there's a lot of stuff that we want to do to create orphans, you know things above the line and top line and everything else. There's also a bottom line. Delight and waste are two sides of the same coin. You know when we're talking about developers who seek delight with public cloud at the same time you're looking at IT folks who're trying to figure out governance. They're like look you know the CFOs office, the CIOs office, they're trying to figure out how to curb waste. These two things have to go hand in hand in this era of multi-cloud where we're talking about frictionless consumption but also governance that looks invisible. So I think, at the end of the day, this company will do a lot of stuff around one-click delight but also go and figure out how do you reduce waste because there's so much waste including folks there who actually own Nutanix. There's so much software entitlement. There's so much waste in the public cloud itself that if we don't go and put our arms around, it will not lead to customer success. So to talk more about this, the idea of delight and the idea of waste, I'd like to bring on board a person who I think you know many of you actually have talked about it have delightful hair but probably wasted jokes. But I think has wasted hair and delightful jokes. So ladies and gentlemen, you make the call. You're the jury. Sunil R.M.J. Potti. ("Free" by Broods) >> So that was the first time I came out from the bottom of a screen on a stage. I actually now know what it feels to be like a gopher. Who's that laughing loudly at the back? Okay, do we have the... Let's see. Okay, great. We're about 15 minutes late, so that means we're running right on time. That's normally how we roll at this conference. And we have about three customers and four demos. Like I think there's about three plus six, about nine folks coming onstage. So we'll have our own version of the parade as well on the main stage for the next 70 minutes. So let's just jump right into it. I think we've been pretty consistent in terms of our longterm plans since we started the company. And it's become a lot more clearer over the last few years about our plans to essentially make computing invisible as Dheeraj mentioned. We're doing this across multiple acts. We started with HCI. We call it making infrastructure invisible. We extended that to making data centers invisible. And then now we're in this mode of essentially extending it to converging clouds so that you can actually converge your consumption models. 
And so today's conference, and essentially the theme that you're going to be seeing throughout the breakout sessions, is about a journey towards invisible clouds, but make sure that you internalize the fact that we're investing heavily in each of the three phases. It's not just about the hybrid cloud with Nutanix, it's about actually finishing the job of making infrastructure invisible, expanding that to kind of go after the full data center, and then of course embarking on some real meaningful things around invisible clouds, okay? And to start the session, I think, you know, the part that I wanted to make sure that we are all on the same page on, because most of us in the room are still probably in this phase of the journey, is invisible infrastructure. And there, the three key products, and especially the two of them that most of you guys know, are Acropolis and Prism. And they're sort of like the bedrock of our company. You know, especially Acropolis, which is about the web scale architecture. Prism is about consumer grade design. And Acropolis is now really mature. It's in its seventh year of innovation. We still have more than half of our company, in terms of R and D spend, on Acropolis and Prism. So our core product is still sort of where we think we have significant differentiation. We're not going to let our foot off the pedal there. You know, every time somebody comes to me and says look, there's a new HCI vendor popping up or an existing HCI vendor out there, I ask a simple question to our customers, saying show me 100 customers with 100 node deployments, and it will be very hard to find any other vendor out there that does the same thing. And that's the power of Acropolis, the core platform. And then it's, you know, the fact that the velocity associated with Acropolis continues at a fast pace. We came out with various new capabilities in 5.5 and 5.6, and one of the most complicated things to get right was shrinking our three node cluster to a one node, two node deployment. Most of you actually had requirements around remote office, branch office, or the edge, and that gave us, you know, the impetus to go design some new capabilities into our core OS to get this out. And expanding from Acropolis into Prism, as you will see, the first couple of years of Prism were all about refactoring the user interface and doing a good job with automation. But more and more of the investments around Prism are going to be based on machine learning. And you've seen some variants of that over the last 12 months, and I can tell you that in the next 12 to 24 months, most of our investments around infrastructure operations are going to be driven by AI techniques, starting with most of our R and D spend also going into machine-learning algorithms. So when you talk about all the enhancements that have come with Prism, whether it be, you know, the management console becoming much more automated, whether now we give you automatic rightsizing, anomaly detection, or the series of functionality that has gone into it, the real core sort of capabilities that we're putting into Prism and Acropolis are probably best served by looking at the quality of the product. You probably have seen this slide before. We started showing the number of nodes shipped by Nutanix two years ago at this conference. It was about 35,000 plus nodes at that time. And since then, obviously we've, you know, continued to grow.
And we would draw this line which was about enterprise class quality. For the number of bugs found as a percentage of nodes shipped, there's a certain line that's drawn. World class companies do probably about 2% to 3% in the number of CFDs per node shipped. And we had just broken that number two years ago. And to give you guys an idea of how that curve has shown up, it's currently at .95%. And so along with velocity, you know, this focus on being true to our roots of reliability and stability continues to be, you know, an internal challenge, but it's also one of the things that we keep a real focus on. And so between Acropolis and Prism, those are sort of our core focus areas, to give us the confidence that, look, we have this really high bar that we're keeping ourselves accountable to, which is about being the most advanced enterprise cloud OS on the planet. And we will keep it this way for the next 10 years. And to complement that, over a period of time of course, we've added a series of services. So these are services not just for VMs but also for files, blocks, containers, but all being delivered in that single one-click operations fashion. And to really talk more about it, and actually probably to show you the real deal there, it's my great pleasure to call our own version of Moses inside the company, most of you guys know him as Steve Poitras. Come on up, Steve. (audience clapping) (rock music) >> Thanks Sunil. >> You barely fit in that door, man. Okay, so what are we going to talk about today, Steve? >> Absolutely. So when we think about when Nutanix first got started, it was really focused around VDI deployments, smaller workloads. However over time, as we've evolved the product, added additional capabilities and features, that's grown from VDI to business critical applications as well as cloud native apps. So let's go ahead and take a look. >> Sunil: And we'll start with, like, Oracle? >> Yeah, that's one of the key ones. So here we can see our Prism Central user interface, and we can see our Thor cluster, obviously speaking to the Avengers theme here. We can see this is doing right around 400,000 IOPS at around 360 microseconds latency. Now obviously Prism Central allows you to manage all of your Nutanix deployments, but this is just running on one single Nutanix cluster. So if we hop over here to our explore tab, we can see we have a few categories. We have some Kubernetes, some AFS, some Xen desktop as well as Oracle RAC. Now if we hop over to Oracle RAC, we're running a SLOB workload here. So obviously with Oracle enterprise applications, performance, consistency, and extremely low latency are very critical. So with this SLOB workload, we're running right around 300 microseconds of latency. >> Sunil: So this is what, how many node Oracle RAC cluster is this? >> Steve: This is a six node Oracle RAC deployment. >> Sunil: Got it. And so what has gone into the product in recent releases to kind of make this happen? >> Yeah, so obviously on the hardware front, there's been a lot of evolution in storage mediums. So with the introduction of NVMe and persistent memory technologies like 3D XPoint, storage media has become a lot faster. Now to allow you to fully take advantage of that, that's where we've had to do a lot of optimizations within the storage stack. So with AHV, we have what we call AHV turbo mode which allows you to fully take advantage of those faster storage mediums at that much lower latency.
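As a side note, for anyone who would rather pull a number like that cluster-wide latency figure programmatically instead of reading it off the Prism UI, a request along these lines is the usual pattern. The address and credentials are placeholders, and the stat key name is an assumption, so check the API reference for your release before relying on it.

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder address and credentials; the stat key below is an assumption.
PRISM = "https://prism.example.local:9440"
AUTH = HTTPBasicAuth("admin", "secret")

def cluster_io_latency_usecs() -> int:
    """Fetch cluster details and pull an average I/O latency stat, if present."""
    resp = requests.get(f"{PRISM}/api/nutanix/v2.0/cluster/", auth=AUTH, verify=False)
    resp.raise_for_status()
    stats = resp.json().get("stats", {})
    return int(stats.get("controller_avg_io_latency_usecs", -1))

print(f"Cluster-wide controller latency: {cluster_io_latency_usecs()} usec")
```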
And then obviously on the networking front, technologies such as RDMA can be leveraged to optimize that network stack. >> Got it. So that was Oracle RAC running on a, you know, Nutanix cluster. It used to be a big deal a couple of years ago. Now we've got many customers doing that. On the same environment though, what we're going to show you is the advent of actually putting file services in the same scale out environment. And you know many of you in the audience probably know about AFS. We released it about 12 to 14 months ago. It's been one of our most popular new products of all time within Nutanix's history. And we had SMB support for user file shares and VDI deployments, and it took a while to bake, to get to scale and reliability. And then in the recent release that we just shipped, we've now added NFS support so that we can now go after full scale file server consolidation. So let's take a look at some of that stuff. >> Yep, let's do it. So hopping back over to Prism, we can see our Thor cluster here. Overall cluster-wide latency is right around 360 microseconds. Now we'll hop down to our file server section. So here we can see we have our AFS file server hosting right about 16.2 million files. Now if you look at our shares and exports, we can see we have a mix of different shares. So one of the shares that you see there is home directories. This is an SMB share which is actually mapped and being leveraged by our VDI desktops for home folders, user profiles, things of that nature. We can also see this Oracle backup share here, which is exposed to our RAC hosts via NFS. So RMAN is actually leveraging this to provide native database backups. >> Got it. So Oracle VMs, backup using files, or for any other file share requirements with AFS. Do we have the cluster also showing, I know, so I saw some Kubernetes as well on it. Let's talk about what we're thinking of doing there. >> Yep, let's do it. So if we think about cloud, cloud's obviously a big buzz word, and so are containers and Kubernetes. So with ACS 1.0 what we did is we introduced native support for Docker integration. >> And pause there. And we screwed up. (laughing) So just like the market took a left turn on Kubernetes, obviously we realized that, and now we're working on ACS 2.0 which is what we're going to talk about, right? >> Exactly. So with ACS 2.0, we've introduced native Kubernetes support. Now when I think about Kubernetes, there are really two core areas that come to mind. The first one is around native integration. So with that, we have our Kubernetes volume integration, we're obviously doing a lot of work on the networking front, and we'll continue to push there from an integration point of view. Now the other piece is around the actual deployment of Kubernetes. When we think about a lot of Nutanix administrators or IT admins, they may have never deployed Kubernetes before, so this could be a very daunting task. And true to the Nutanix nature, we not only want to make our platform simple and intuitive, we also want to do this for any ecosystem products. So with ACS 2.0, we've simplified the full Kubernetes deployment, and switching over to our ACS 2.0 interface, we can see this create cluster button. Now this actually pops up a full wizard.
This wizard will actually walk you through the full deployment process, gather the necessary inputs for you, and in a matter of a few clicks and a few minutes, we have a full Kubernetes deployment fully provisioned: the masters, the workers, all the networking fully done for you, very simple and intuitive. Now if we hop back over to Prism, we can see we have this ACS 2.0 Kubernetes category. Clicking on that, we can see we have eight instances of virtual machines. And these are Kubernetes virtual machines which have actually been deployed as part of this ACS 2.0 installer. Now one of the nice things is it makes the IT administrator's job very simple and easy to do. The deployment is straightforward, and monitoring and management are very straightforward and simple. Now for the developer, the application architect, or engineers, they interface and interact with Kubernetes just like they would traditionally on any platform. >> Got it. So the goal of ACS is to ensure that the developer ecosystem still uses whatever tools they prefer, while at the same time allowing this consolidation of containers along with VMs all on that same, single runtime, right? So that's ACS. And then if you think about where the OS is going, there's still some open space at the end. And that open space has always been, look, if you just look at a public cloud, you look at blocks, files, containers, the most obvious storage function that's left is objects. And that's the last horizon for us in completing the storage stack. And we're going to show you for the first time a preview of an upcoming product called the Acropolis Object Storage Services stack. So let's talk a little bit about it and then maybe show the demo. >> Yeah, so just like we provided file services with AFS and block services with ABS, with OSS, or Object Storage Services, we provide native object storage compatibility and capability within the Nutanix platform. Now this provides a very simple, common S3 API. So any integrations you've done with S3, especially Kubernetes, you can actually leverage out of the box when you've deployed this. Now if we hop back over to Prism, I'll go here to my object stores menu. And here we can see we have two existing object storage instances which are running. So you can deploy as many of these as you want. Now just like the Kubernetes deployment, deploying a new object instance is very simple and easy to do. So here I'll actually name this instance Thor's Hammer. >> You do know he loses it, right? He hasn't seen the movies yet. >> Yeah, I don't want any spoilers yet. So once we've specified the name, we can choose our capacity. So here we'll just specify a large instance type. Obviously this could be any amount of storage. So if you have a 200 node Nutanix cluster with petabytes worth of data, you could do that as well. Once we've selected that, we'll select our expected performance. And this is going to be the number of concurrent gets and puts, so essentially how many operations per second we want this instance to be able to facilitate. Once we've done that, the platform will actually automatically determine how many virtual machines it needs to deploy as well as the resources and specs for those. And once we've done that, we'll go ahead and click save.
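Because the instance exposes a standard S3 API, existing S3 tooling should work against it once it's up. Here is a minimal sketch using boto3; the endpoint URL, access keys, and bucket name are hypothetical placeholders, not values from the demo.

```python
# Minimal sketch: standard S3 tooling (boto3) pointed at an S3-compatible
# object store. Endpoint, credentials, and bucket name are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9440",  # hypothetical OSS endpoint
    aws_access_key_id="DEMO_ACCESS_KEY",
    aws_secret_access_key="DEMO_SECRET_KEY",
)

# Create a bucket and upload an object, exactly as you would against AWS S3.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object store")

# List the bucket contents to confirm the round trip worked.
for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The point of the sketch is simply that no new client library is needed if the store speaks S3; whatever already works against AWS should, in principle, work here too.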
Now here we can see it's actually going through doing the deployment of the virtual machines, applying any necessary configuration, and in a matter of a few clicks and a few seconds, we actually have this Thor's Hammer object storage instance up and running. Now if we hop over to one of our existing object storage instances, we can see this has three buckets. So one is for Kafka-queue; I'm actually using this for my Kafka cluster where I have right around 62 million objects, all storing protobufs. The second one there is Spark. So I actually have a Spark cluster running on our Kubernetes deployed instance via ACS 2.0. Now this is doing analytics on top of this data using S3 as a storage backend. Now for these objects, we support native versioning and native object encryption as well as WORM compliance. So if you want to have expiry periods, retention intervals, that sort of thing, we can do all that. >> Got it. So essentially what we've just shown you, with objects upcoming as well, is that the same OS can now support VMs, files, objects, containers, all on the same one-click operational fabric. And that in some ways is the real power of Nutanix: to still keep that consistency and scalability in place as we're covering each and every workload inside the enterprise. So before Steve gets off stage though, I wanted to talk to you guys a little bit about something. You know, how many of you have been to our Nutanix headquarters in San Jose, California? A few. I know there's like, I don't know, 4,000 or 5,000 people here. If you do come to the office, you know when you land at San Jose Airport, on the way to long-term parking, you'll pass our office. It's that close. And if you come to the fourth floor, one of the cubes is where I sit, and Steve sits in the cube beside me. And when I first joined the company, three or four years ago, if you went to Steve's cube, it used to have a lot of this stuff, like big containers of this. It no longer looks like that. I remember the first time. Since I started joking about it, he started reducing it. And then Steve eventually got married, much to our surprise. (audience laughing) Much to his wife's surprise. And then he also had a baby, as a bigger surprise. And if you come over to our office, and we welcome you, and you come to the fourth floor, find my cube or Steve's cube, it now looks like this. Okay, so thanks a lot, my man. >> Cool, thank you. >> Thanks so much. (audience clapping) >> So single OS, any workload. And like Steve, who's been with us for a while, it's my great pleasure to invite one of our favorite customers, Karen from CSC, who's also been with us for three to four years. And I'll share some fond memories about how she's been with the company for a while and how as partners we've really done a lot together. So without any further ado, let me bring up Karen. Come on up, Karen. (rock music) >> Thank you for having me. >> Yeah, thank you. So I remember, so how many of you guys were at Nutanix's first .NEXT in Miami? I know there was a question like that asked last time. Not too many. You missed it. We wish we could go back to that. We wouldn't fit three quarters of this crowd. But Karen was our first customer in the keynote in 2015. And we had just talked about that story at the time, when you had just become a customer. Do you want to give us a recap of that? >> Sure.
So when we made the decision to move to hyperconverged infrastructure and chose Nutanix as our partner, we rapidly started to deploy. And what I mean by that is Sunil and some of the Nutanix executives had come out to visit with us and talk about their product on a Tuesday. And on the Wednesday after making the decision, I picked up the phone and said, you know what, I've got to deploy my VDI cluster. So four nodes showed up on Thursday. And the time from when it was plugged in to moving over 300 VDIs and 50 terabytes of storage and turning it over to the business for use was less than three days. So it was a really excellent testament to how simple it is to start, deploy, and utilize the Nutanix infrastructure. Now part of that was the delight that we experienced from our customers after that deployment. So we got phone calls where people were saying this report used to take so long that I'd go out and get a cup of coffee and come back, read an article, do some email, and then finally it would finish. Those reports are running in milliseconds now. It's one click. It's very, very simple, and we've delighted our customers. Now across that journey, we have gone from the simple workloads like VDI to the much more complex workloads around Splunk and Hadoop. And what's really interesting about our Splunk deployment is we're handling over a billion events being logged every day. And the deployment is smaller than what we had with a three tiered infrastructure. So when you hear people talk about waste, and getting that out, and getting to an invisible environment where you're just able to run it, that's what we were able to achieve with everything that we're running, from our public facing websites to the back office operations that we're using, which include Splunk and even most recently our Cloudera and Hadoop infrastructure. What it does is it's got 30 crawlers that go out on the internet and start bringing data back. So it comes back with over two terabytes of data every day. And then that environment ingests that data, does work against it, and responds to the business. And that again is something that's smaller than what we had on traditional infrastructure, and it's faster and more stable. >> Got it. And it covers a lot of use cases as well. You want to speak a few words on that? >> So the use cases: we're 90%, 95% deployed on Nutanix, and we're covering all of our use cases. So whether that's a customer facing app or a back office application. And what our business is doing is handling large portfolios of data for Fortune 500 companies and law firms. And these applications are all running with improved stability, reliability, and performance on the Nutanix infrastructure. >> And the plan going forward? >> So the plan going forward, you actually asked me that in Miami, and it's go global. So when we started in Miami with that first deployment, we had four nodes. We now have 283 nodes around the world, and we started with about 50 terabytes of data. We've now got 3.8 petabytes of data. And we're deployed across four data centers and six remote offices. And people ask me often, what is the value that we achieved? So simplification. It's all just easier, and it's all less expensive. Being able to scale with the business. So our Cloudera environment ended up with one day where it spiked to 1,000 times more load, 1,000 times, and it just responded. We had rally cries around improving productivity by six times.
So 600% improved productivity, and we were able to actually achieve that. The numbers you just saw on the slide, which went by very, very fast: we calculated a 40% reduction in total cost of ownership, and we've exceeded that. And when we talk about waste, that other number on the board there is this: when I save the company one hour of maintenance activity or unplanned downtime in a month, and we're now able to do the majority of our maintenance activities without disrupting any of our business solutions, I'm saving $750,000 each time I save that one hour. >> Wow. All right, Karen from CSC. Thank you so much. That was great. Thank you. I mean, you know, some of these data points, frankly, as I started talking to Karen as well as some other customers, are pretty amazing in terms of the genuine value beyond financial value. Kind of like the emotional sort of benefits that good products deliver to some of our customers. And I think that's one of the core things that we take back into engineering, to keep ourselves honest on either velocity or quality, even hiring people and so forth. The more we touch customers' lives, the more we touch our partners' lives, the more it allows us to ensure that we can put ourselves in their shoes to make sure that we're doing the right thing in terms of the product. So that was the first part, invisible infrastructure. And our goal, as we've always talked about, our true north, is to make sure that this single OS can be an exact replica, a truly modern, thoughtful but original design that brings the power of public cloud, these AWS- or GCP-like architectures, into your mainstream enterprises. And so when we take that to the next level, which is about expanding the scope to go beyond invisible infrastructure to invisible data centers, it starts with a few things. Obviously, it starts with virtualization and a level of intelligent management, extends to automation, and then, as we'll talk about, we have to embark on encompassing the network. And that's what we'll talk about with Flow. But to start this, let me again go back to one of our core products, which is the bedrock of our opinionated design inside this company, which is Prism and Acropolis. And Prism, as I mentioned, comes with a ton of machine-learning based intelligence built into the product; in 5.6 we've done a ton of work. In fact, a lot of features are coming out now because PC, Prism Central, has been decoupled from our mainstream release train and will continue to release on its own cadence. And the same thing when you actually flip over to AHV on its own train. Now AHV, two years ago it was all about can I use AHV for VDI? Can I use AHV for ROBO? Now I'm pretty clear about where you cannot use AHV. If you need memory overcommit, stay with VMware or something. If you need, you know, Metro, stay with another technology; else it's game on, right? And if you really look at the adoption of AHV in the mainstream enterprise, the customers now speak for themselves. These are all examples of large global enterprises with multimillion dollar ELAs in play that have now been switched over. I'll give you a simple example here, and there are lots of these; I'm sure many of you in the audience are in this camp, and when you look at the breakout sessions in the pods, you'll get a sense of this. But I'll give you one simple example. Look at the online payment company. I'm pretty sure everybody's used this at one time or another.
They had the world's largest private cloud on OpenStack, 21,000 nodes. And they were actually public about it three or four years ago. And in the last year and a half, they put us through rigorous POC testing, scale, hardening, and it's a full blown AHV-only stack. And they've started cutting over. Obviously they're not there yet completely, but they're now literally in hundreds of nodes of deployment of Nutanix with AHV as their primary operating system. So it is primetime from a deployment perspective. And with that as the base, no cloud is complete without actually having self-service provisioning that truly drives one-click automation, and can you do that in this consumer grade design? And Calm was acquired, as you guys know, in 2016. We had a choice of taking Calm. It was reasonably feature complete. It supported multiple clouds. It supported ESX, it supported brownfield, it supported AHV. I mean, they'd already done the integration with Nutanix even before the acquisition. And we had a choice. The choice was to go down the path of Dynamic Ops or some other products, where you take it for revenue or for acceleration, plop it into the ecosystem, and sell it as this power-sucking alien on top of our stack, right? Or we take a step back, re-engineer the product, keep some of the core essence like the workflow engine, which was good, the automation, the object model and all, but refactor it to make it look like a natural extension of our operating system. And that's what we did with Calm. And we just launched it in December, and it's been one of our most popular new products, flying off the shelves. If you look at the number of registrants, I got a notification of this, for the breakout sessions, the number one session that has been preregistered, with over 500 people, and the first two sessions, are around Calm. And justifiably so, because it lives up to its promise, and it'll take its time to get to all the bells and whistles, all the capabilities that have come through with AHV or Acropolis in the past. But the feature functionality, the product-market fit associated with Calm is dead on, judging from the feedback we've received. And so Calm itself is on its own rapid cadence. We had AWS and AHV in the first release. Three or four months later, we added ESX support. We added GCP support and a whole bunch of other capabilities, and I think the essence of Calm is that if you can combine private cloud automation with multi-cloud automation, it really sets Nutanix on its first genuine path towards multi-cloud. But then, as I said, if you really fixate on a software defined data center message, we're not complete as a full blown AWS- or GCP-like IaaS stack until we do the last horizon of networking. And you probably heard me say this before, you heard Dheeraj and others talk about it before: our problem in networking isn't the same as in storage. Because the data plane in networking works. There are good L2 switches from Cisco, Arista, and so forth, but the real problem in networking is in the control plane. When something goes wrong at a VM level in Nutanix, you're able to identify whether it's a storage problem or a compute problem, but we don't know whether it's a VLAN that's misconfigured, or whether some packets have been dropped at the top of the rack. Well, that all ends now with Flow.
And with Flow, essentially what we've now done is take the work that we've been doing to create built-in visibility, add some network automation so that you can actually provision VLANs when you provision VMs, and then augment it with micro segmentation policies, all built in this easy to use and consume fashion. But we didn't stop there, because we've been talking about Flow, at least the capabilities, over the last year. We spent significant resources building it. But we realized that we needed an additional thing to augment its value, because the world of applications, and especially discovering application topologies, is a hard problem. And if we didn't address that, we wouldn't be fulfilling this ambition of providing one-click network segmentation. And so that's where Netsil comes in. Netsil might seem on the surface like yet another next generation application performance management tool. But the innovations that came from Netsil started off as a research project at the University of Pennsylvania. And in fact, most of the team that's at Nutanix right now is from the U Penn research group. And they took a really original, fresh look at how you sit in a network in a scale-out fashion but still reverse engineer the packets and flows through you, and then recreate the application topology. And recreate this not just on Nutanix, but do it seamlessly across multiple clouds. And to talk about the power of Flow augmented with Netsil, let's bring Rajiv back on stage. Rajiv. >> How you doing? >> Okay, so we're going to start with some Netsil stuff, right? >> Yeah, let's talk about Netsil and some of the amazing capabilities this acquisition's bringing to Nutanix. First of all, as you mentioned, Netsil's completely non-invasive. So it installs on the network, and it does all its magic from there. There are no host agents, none of the complexity and compatibility issues that entails. It's also monitoring the network at layer seven. So it's actually doing deep packet inspection on all your application data, and can give you insights into services and APIs, which is very important for modern applications and the way they behave. To do all this, of course, performance is key. So Netsil's built around a completely distributed architecture that scales to really large workloads. Very exciting technology. We're going to use it in many different ways at Nutanix. And to give you a flavor of that, let me show you how we're thinking of integrating Flow and Netsil together, so micro segmentation and Netsil. So to do that, we installed Netsil in one of our Google accounts. And that's what's up here now. It went out there. It discovered all the VMs we're running on that account. It created a map essentially of all their interactions, and you can see it's like a Google Maps view. I can zoom into it. I can look at various things running. I can see lots of HTTP servers over here, some databases. >> Sunil: And it also has stats, right? You can go, it actually-- >> It does. We can take a look at that for a second. There are some stats you can look at right away here, things like transactions per second and latencies and so on. But if I wanted to micro segment this application, it's not really clear how to do so. There's no real pattern over here. Taking the Google Maps analogy a little further, this kind of looks like the backstreets of Cairo or something. So let's do this step by step. Let me first filter down to one application. Right now I'm looking at about three or four different applications.
And Netsil integrates with the metadata that the clouds provide. So I can search all the tags that I have. So by doing that, I can zoom in on just the financial application. And when I do this, the view gets a little bit simpler, but there's still no real pattern. It's not clear how to micro segment this, right? And this is where the power of Netsil comes in. This is a fairly naive view. This is what a tool operating at layer four, just looking at ports and TCP traffic, would give you. But by doing deep packet inspection, Netsil can get into the services layer. So instead of grouping these interactions by hostname, let's group them by service. So you group by service tier. And now you can see this is a much simpler picture. Now I have some patterns. I have a couple of load balancers, an HAProxy and an Nginx. I have a web application front end. I have some application servers running authentication services, search services, et cetera, a database, and a database replica. I could go ahead and micro segment at this point. It's quite possible to do it at this point. But this is almost too granular a view. We actually don't usually want to micro segment at the individual service level. You think more in terms of application tiers, the tiers that different services belong to. So let me go ahead and group this differently. Let me group this by app tier. And when I do that, a really simple picture emerges. I have a load balancing tier talking to a web application front end tier, an API tier, and a database tier. Four tiers in my application. And this is something I can work with. This is something that I can micro segment fairly easily. So let's switch over to-- >> Before we do that though, do you guys see how he gave himself the pseudonym Dom Toretto? >> Focus Sunil, focus. >> Yeah, for those guys, you know that's not the Avengers theme, man, that's the Fast and Furious theme. >> Rajiv: I think we're a year ahead. This is next year's theme. >> Got it, okay. So before we cut over from Netsil to Flow, do we want to say a few words about the power of Flow, and what's available in 5.6? >> Sure, so Flow's been around since the 5.6 release. Actually, some of the functionality came in before that. So it's got visibility into the network. It helps you debug problems with VLANs and so on. We had a lot of orchestration with other third party vendors, with load balancers, with switches, to make provisioning much simpler. And then of course with our most recent release, we GA'ed our micro segmentation capabilities. And that of course is the most important feature we have in Flow right now. And if you look at how Flow policy is set up, it looks very similar to what we just saw with Netsil. So we have a load balancer talking to a web app, API, database. It's almost identical to what we saw just a moment ago. So while this policy was created manually, it is something that we can automate, and it is something that we will do in future releases. Right now, it's of course not been integrated at that level yet. So this was created manually. So one thing you'll notice over here is that the database tier doesn't get any direct traffic from the internet. All internet traffic goes to the load balancer, and only specific services then talk to the database. So this policy right now is in monitoring mode. It's not actually being enforced. So let's see what happens if I try to attack the database. I start a hack against the database, and I have my trusty brute force password script over here.
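A "trusty brute force password script" of the kind being described here can be very small. Below is a minimal sketch of the idea, assuming a MySQL-compatible database and the pymysql client; the host, username, and password list are hypothetical placeholders, not the ones used in the demo.

```python
# Minimal sketch of a common-password test script like the one in the demo.
# Host, user, and password list are hypothetical placeholders.
import pymysql

HOST = "10.0.0.100"          # hypothetical database IP
USER = "admin"               # hypothetical account to test
COMMON_PASSWORDS = ["password", "123456", "admin", "letmein", "qwerty"]

def try_login(password: str) -> bool:
    """Attempt one login; return True if the password is accepted."""
    try:
        conn = pymysql.connect(host=HOST, user=USER, password=password,
                               connect_timeout=3)
        conn.close()
        return True
    except pymysql.err.OperationalError:
        # Wrong password, connection refused, or (once the policy is
        # enforced) the connection simply times out.
        return False

for pw in COMMON_PASSWORDS:
    if try_login(pw):
        print(f"Logged in with weak password: {pw!r}")
        break
else:
    print("No common password worked.")
```

Once enforcement mode is switched on, as shown next, the same script simply times out instead of ever reaching the database.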
It's trying the most common passwords against the database. And if I happened to choose a dictionary word or left the default passwords on, eventually it will log into the database. And when I go back over here in Flow, what happens is it actually detects that there's now an ongoing flow, a flow that's outside of policy, that has shown up. And it shows this in yellow. So right alongside the policy, I can visualize all the noncompliant flows. This makes it really easy for me to make decisions: should this flow be part of the policy or not? In this particular case, obviously it should not be part of the policy. So let me just switch from monitoring mode to enforcement mode. I'll apply the policy, give it a second to propagate. The flow goes away. And if I go back to my script, you can see now the socket's timing out. I can no longer connect to the database. >> Sunil: Got it. So that's like one-click segmentation in play right now? >> Absolutely. It's really, really simple. You can compare it to other products in the space. You can't get simpler than this. >> Got it. Why don't we go back and talk a little bit more about... so that's Flow. It's shipping now in 5.6, obviously. It'll come integrated with Netsil functionality as well as a variety of other enhancements in the next few releases. But Netsil does more than just simple topology discovery, right? >> Absolutely. So Netsil's actually gathering a lot of metrics from your network, from your hosts; all this goes through a data pipeline. It gets processed over there and then gets captured in a time series database. And then we can slice and dice that in various different ways. It can be used for all kinds of insights. So let's see how our application's behaving. So let me say I want to go into the API layer over here. And I instantly get a variety of metrics on how the application's behaving. I get the most requested endpoints. I get the average latency. It looks reasonably good. I get the average latency of the slowest endpoints. If I was having a performance problem, I would know exactly where to go and focus. Right now, things look very good, so we won't focus on that. But scrolling back up, I notice that we have a fairly high error rate happening. We have like 11.35% of our HTTP requests generating errors, and that deserves some attention. And if I scroll down again, I see the top five status codes I'm getting: almost 10% of my requests are generating 500 errors, HTTP 500 errors, which are internal server errors. So there's something wrong going on with this application. So let's dig a little bit deeper into that. Let me go into my analytics workbench over here. And what I've plotted over here is how my HTTP requests are behaving over time. Let me filter down to just the 500 ones. That will make it easier. And I want the 500s. And I'll also group this by the service tier so that I can see which services are causing the problem. And the better view for this would be a bar graph. Yes, so once I do this, you can see that all the errors, all the 500 errors that we're seeing, have been caused by the authentication service. So something's obviously wrong with that part of my application. I can go look at whether Active Directory is misbehaving and so on. So very quickly, from a broad problem, a high HTTP error rate, and in fact usually the way you discover this is a customer complaining about a lot of errors happening in your application, you can quickly narrow down to exactly what the cause was. >> Got it.
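The narrowing-down Rajiv just walked through (all requests, filter to 500s, group by service tier) maps onto a few lines of ordinary data analysis. A minimal sketch with pandas, using hypothetical placeholder data rather than the demo's metrics:

```python
# Minimal sketch of the analytics-workbench slicing: filter HTTP requests to
# 500s and group by service tier to see which service is producing the errors.
# The DataFrame columns and values are hypothetical placeholders.
import pandas as pd

requests = pd.DataFrame({
    "timestamp":    pd.to_datetime(["2018-05-09 10:00", "2018-05-09 10:01",
                                    "2018-05-09 10:01", "2018-05-09 10:02"]),
    "service_tier": ["web_frontend", "auth_service", "auth_service", "api"],
    "status_code":  [200, 500, 500, 200],
})

errors = requests[requests["status_code"] == 500]          # just the 500s
by_tier = errors.groupby("service_tier").size().sort_values(ascending=False)
print(by_tier)   # the auth_service tier surfaces as the source of the 500s
```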
This is what we mean by hyperconvergence of the network, which is, if you can truly isolate network related problems and associate them with the rest of the hyperconverged infrastructure, then we've essentially started making real progress towards the next level of hyperconvergence. Anyway, thanks a lot, man. Great job. >> Thanks, man. (audience clapping) >> So to talk about this evolution from invisible infrastructure to invisible data centers, here's another customer of ours that has embarked on this journey. And you know it's not just using Nutanix but a variety of other tools to actually fulfill the ambition of a full blown cloud stack within a financial organization. And to talk more about that, let me call Vijay onstage. Come on up, Vijay. (rock music) >> Hey. >> Thank you, sir. So Vijay looks way better in real life than in a picture, by the way. >> Except a little bit of gray. >> Unlike me. So tell me a little bit about this cloud initiative. >> Yeah. So we've won the best cloud initiative twice now, hosted by Incisive Media, a large magazine. They basically host a bunch of, you know, various buy side and sell side firms, and you can submit projects in various categories. So we've won the best cloud twice now, in 2015 and 2017. The 2015 award is when, as part of our private cloud journey, we were laying the foundation for our private cloud, which is 100% based on hyperconverged infrastructure. So that was that award. And then in 2017, we built on that foundation and built more developer-centric next gen app services like PaaS, CaaS, SDN, SDS, CI/CD, et cetera. So we've built a lot of those services on top, and the second award was really related to that. >> Got it. And a lot of this was obviously based on an infrastructure strategy with some guiding principles that you guys had about three or four years ago, if I remember. >> Yeah, this is a great slide. I use it very often. At the core of our infrastructure strategy is how do we run IT as a business? I talk about this with my teams; they're very familiar with this. That's the mindset that I instill within the teams. The mission, the challenge, is the same, which is how do we scale infrastructure while reducing total cost of ownership, improving time to market, improving client experience, and while we're doing that, not lose sight of reliability, stability, and security? That's the mission. Those are some of our guiding principles. Whenever we take on some large technology investments, we take them through those lenses. Obviously Nutanix went through those lenses when we invested in you guys many, many years ago. And you guys checked all the boxes. And, you know, initiatives change year on year; the mission remains the same. And more recently, the last few years, we've been focused on converged platforms, converged teams. We've actually reorganized our teams and aligned them closer to the platforms, moving closer to an SRE-like concept. >> And then you've built out a full stack now across compute, storage, networking, all the way up, with various use cases in play? >> Yeah, and we're aggressively moving towards PaaS and CaaS as our method of either developing brand new cloud native applications or even containerizing existing applications. So the stack is, you know, obviously built on Nutanix: SDS for software-defined storage, and on the compute and networking side we've got SDN turned on. We've got, again, PaaS and CaaS built on this platform. And then finally, we've hooked our CI/CD tooling onto this.
And again, the big picture was always frictionless infrastructure, which we're very close to now. You know, 100% of our code deployments into this environment are automated. >> Got it. And so what's the net-net in terms of, obviously, the business takeaway here? >> Yeah, so at Northern we don't do tech for tech. There have to be some business benefits, client benefits. There have to be some outcomes that we measure ourselves against, and these are some great metrics, great ways to look at whether we're getting the outcomes from the investments we're making. So for example, infrastructure scale while reducing total cost of ownership. We're very focused on total cost of ownership. For example, there was a build team that was very focused on building servers and deploying applications. That team's gone down from, I think, 40, 45 people to about 15 people, as one example, one metric. Another metric for reducing TCO is we've been able to absorb additional capacity without increasing operating expenses. So you're actually building capacity and scale within your operating model. So that's another example. Another example, right here you see on the screen: faster time to market. We've got various types of applications at any given point that we're deploying. There's next gen cloud native, which goes directly on PaaS. But then a majority of the applications still need the traditional IaaS components. The time to market to deploy a complex multi environment, multi data center application, we've taken that down by 60%. So we can deliver a server same day, but we can also deliver entire environments, you know, added to backup, added to DNS, and fully compliant, within a couple of weeks, which is something we measure very closely. >> Great job, man. I mean, those are compelling results, I think. And in the journey, obviously, you got promoted a few times. >> Yep. >> All right, congratulations again. >> Thank you. >> Thanks Vijay. >> Hey Vijay, come back here. Actually we forgot our joke. I was so razzled by his data points there. So you're supposed to wear some shoes, right? >> I know, my inner glitch. I was going to wear those sneakers, but I forgot them at the office, maybe for the right reasons. But the story behind those fluorescent sneakers, I see they're focused on my shoes. I picked those up two years ago at a .NEXT event, and they're not my style. I took 'em to my office. They've been sitting in my office for the last couple of years. >> Who's received shoes like these, by the way? I'm sure you guys have received shoes like these. There are some real fans there. >> So again, I'm sure many of you liked them. I had 'em in my office. I've offered them to so many of my engineers. Are you size 11? Do you want these? And they're still unclaimed. >> So that's the only feature of Nutanix that you-- >> That's the only thing that hasn't worked; other than that, things are going extremely well. >> Good job, man. Thanks a lot. >> Thanks. >> Thanks Vijay. So now we get to the final phase, which is obviously as we embark on this multi-cloud journey and the complexity that comes with it, which Dheeraj hinted at in his session. You know, we have to take a cautious, thoughtful approach here, because we don't want to overset expectations, because this will take us five, 10 years to really do a good job like we've done in the first act. And the good news is that the market is also really, really early here. It's just a fact.
And so we've taken a tiered approach to it. We'll start the discussion with multi-cloud operations, and we've talked about the stack in the prior session, which is about looking across new clouds. So it's no longer Nutanix, Dell, Lenovo, HP, Cisco as the new, quote unquote, platforms. It's Nutanix, Xi, GCP, AWS, Azure as the new platforms. That's how we're designing the fabric going forward. On top of that, you obviously have the hybrid OS, both on the data plane side and the control plane side. Then, with the advent of Calm doing a marketplace and automation, as well as Beam doing governance and compliance, you'll see more and more such multi-cloud operations capabilities burned into the platform. An example of that is Calm with its new 5.7 release, which supports multiple clouds both inside and outside, but the fundamental premise of Calm in the multi-cloud use case is to enable you to choose the right cloud for the right workload. That's the automation part. On the governance part, and this we kind of went through in the last half hour with Dheeraj and Vijay on stage, it's something that's even more, if I can call it, first order, because you get to provisioning and operations second. The first order is to say, look, whatever my developers have consumed off the public cloud, I first need to get our arms around it to make sure I know what am I spending, am I secure, and then when I get comfortable, I'm able to actually expand on it. And that's the power of Beam. And both Beam and Calm will be the yin and yang for us in our multi-cloud portfolio. And we'll have new products to complement that down the road, right? But along the way, that's the whole private cloud, public cloud. They're the two ends of the barbell, and over time, and we've been working on Xi for a while, there's this conviction that we've built talking to many customers that there needs to be another type of cloud. And this type of cloud has to feel like a public cloud. It has to be architected like a public cloud, be consumed like a public cloud, but it needs to be an extension of my data center. It should not require any changes to my tooling. It should not require any changes to my operational infrastructure, and it should not require lift and shift, and that's a super hard problem. And this problem is something that a chunk of our R and D team has been burning the midnight oil on for the last year and a half. Because look, this is not about taking our current OS, which does a good job of scaling, and plopping it into an Equinix or a third party data center and calling it a hybrid cloud. This is about rebuilding things in the OS so that we can deliver a true hybrid cloud, but at the same time give that functionality back on premises, so that even if you don't have a hybrid cloud, if you just have your own data centers, you'll still get new services like DR. And if you think about it, what are we doing? We're building a full blown multi-tenant virtual network designed in a modern way. Think of this as SDN 2.0, because we have 10 years' worth of looking back at how GCP has done it, or how Amazon has done it, and we're now embodying some of that so that we can actually give it as part of this cloud, but do it in a way that's a seamless extension of the data center, and at the same time provide new services that have never been delivered before. Everyone obviously does failover and failback in DR; it just takes months to do it.
Our goal is to do it in hours or minutes. But even things such as test. Imagine doing a DR test on demand for your business needs in the middle of the day. And that's the real bar that we've set for Xi, which we are working towards, in early access later this summer with GA later in the year. And to talk more about this, let me invite some of our core architects working on it, Melina and Rajiv. (rock music) Good to see you guys. >> You're messing up the names again. >> Oh Rajiv, Vinny, same thing, man. >> You need to back up your memory from Xi. >> Yeah, we should. Okay, so what are we going to talk about, Vinny? >> Yeah, exactly. So today we're going to talk about how Xi is pushing the envelope beyond the state of the art, as you were saying, in the industry. As part of that, there's a whole bunch of things that we have done, starting with taking a private cloud, seamlessly extending it to the public cloud, and then creating a hybrid cloud experience with one-click delight. We're going to show that. We've done a whole bunch of engineering work on making sure the operations and the tooling are identical on both sides. When you graduate from a private cloud to a hybrid cloud environment, you don't want the environments to be different. So we've copied the environment for you with zero manual intervention. And finally, building on top of that, we are delivering DR as a service with unprecedented simplicity, with one-click failover and one-click failback. We're going to show you a one-click test today. So Melina, why don't we start with showing how you go from a private cloud and seamlessly extend it to consume Xi. >> Sounds good, thanks Vinny. Right now, you're looking at my Prism interface for my on premises cluster. In one click, I'm going to be able to extend that to my Xi Cloud Services account. I'm doing this using my My Nutanix credentials and password. >> Vinny: So here, as you notice, for all the Nutanix customers we have today, we have created an account for them in Xi by default. So you don't have to log in somewhere and create an account. It's there by default. >> Melina: And just like that, we've gone ahead and extended my data center. But let's go take a look at the Xi side and log in again with my My Nutanix credentials. We'll see what we have over here. We're going to be able to see two availability zones, one for on premises and one for Xi, right here. >> Vinny: Yeah, as you see, using a login account that you already know, mynutanix.com, and 30 seconds in, you can see that you have a hybrid cloud view already. You have a private cloud availability zone, that's your own Prism Central data center view, and then a Xi availability zone. >> Sunil: Got it. >> Melina: Exactly. But of course we want to extend my network connection from on premises to my Xi networks as well. So let's take a look at our options there. We have two ways of doing this. Both are one-click experiences. With Direct Connect, you can create a dedicated network connection between both environments, or with VPN you can use the public internet and a VPN service. Let's go ahead and enable VPN in this environment. Here we have two options for how we want to enable our VPN. We can bring our own VPN and connect it, or we can deploy a VPN for you on premises. We'll do the option where we deploy the VPN in one click.
>> And this is another small example of a feature that we're building net new as part of Xi but will be burned into our core Acropolis OS, so that we can also deliver this as a standalone product for on premises deployment as well, right? So that's one of the other things to note as you guys look at the Xi functionality. The goal is to keep the OS capabilities the same on both sides. So even if I'm building a, quote unquote, multi data center cloud, but it's just a private cloud, you'll still get all the benefits of Xi, but in house. >> Exactly. And on this second step of the wizard, there are a few inputs around how you want the gateway configured, your VLAN information, and routing and protocol configuration details. Let's go ahead and save it. >> Vinny: So right now, what's happening is we're taking the private network that our customers have on premises and extending it to a multi-tenant public cloud, such that our customers can use their IP addresses, their subnets, and bring their own IP. And that is another step towards making sure the operations and tooling are kept consistent on both sides. >> Melina: Exactly. And just while you guys were talking, the VPN was successfully created on premises. And we can see the details right here. You can track details like the status of the connection, the gateway, as well as bandwidth information, right in the same UI. >> Vinny: And networking is just the tip of the iceberg of what we've had to work on to make sure that you get a consistent experience on both sides. So Melina, why don't we show some of the other things we've done? >> Melina: Sure. To talk about how we preserve entities from my on-premises environment to Xi, it's better to use my production environment. And the first thing you might notice is the login screen's a little bit different. But that's because I'm logging in using my ADFS credentials. The first thing we preserved was our users. In production, I'm running AD, obviously, on-prem. And now we can log in here with the same set of credentials. Let me just refresh this. >> And these are the Active Directory credentials that our customers would have. They use them on-premises, and we allow the same setting to be set on the Xi cloud services as well, so it's the same set of users that can access both sides. >> Got it. There's always going to be some networking problem onstage. It's meant to happen. >> There you go. >> Just launching it again here. I think it may have timed out. This is a good sign that we're running on time with this presentation. >> Yeah, yeah, we're running ahead of time. >> Move the demos quicker, then we'll time out. So essentially, when you log into Xi, you'll be able to see what environment capabilities we have copied to the Xi environment. So for example, you just saw that the same user is being used to log in. But after the user logs in, you'll be able to see their images, for example, copied to the Xi side. You'll be able to see their policies and categories. You know, when you define these policies on premises, you spend a lot of effort creating them. And now, when you're extending to the public cloud, you don't want to do it again, right? So we've built a whole lot of syncing mechanisms making sure that the two sides are consistent. >> Got it. And on top of these policies, the next step is to also show capabilities to actually do failover and failback, but also do integrated testing as part of this capability.
>> So one part is, you know, just the basic job of making the environments consistent on the two sides, but then it's also about the data part, and that's what DR is about. So if you have a workload running on premises, we can take the data and replicate it using the policies that we've already synced. Once the data is available on the Xi side, at that point, you have to define a runbook. And the runbook is essentially a recovery plan. And that says, okay, I already have the backups of my VMs in case of disaster; I can take my recovery plan and hit, you know, either failover or maybe a test, and then my application comes up. First of all, it talks about the boot order for your VMs to come up. It talks about network mapping. Like when you're running on-prem, you're using a particular subnet; you have the option of using the same subnet on the Xi side. >> Melina: There you go. >> What happened? >> Sunil: It's finally working? >> Melina: Yeah. >> Vinny, you can stop talking. (audience clapping) By the way, this is logging into a live Xi data center. We have two regions: West Coast with two data centers, and East Coast with two data centers. So everything that you're seeing is essentially coming off the mainstream Xi profile. >> Vinny: Melina, why don't we show the recovery plan. That's the most interesting piece here. >> Sure. The recovery plan is set up to help you specify how you want to recover your applications in the event of a failover or a test failover. And it specifies all sorts of details, like the boot sequence for the VMs as well as network mappings. Some of the network mappings are things like the production network I have running on premises and how it maps to my production network on Xi, or the test network to the test network. What's really cool here, though, is we're actually automatically creating your subnets on Xi from your on premises subnets. All that's part of the recovery plan. While we're on the screen, take note of the .100 IP address. That's a floating IP address that I have set up to ensure that I'm going to be able to access my three tier web app, which I have protected with this plan, after a failover. So I'll be able to access it from the public internet really easily, from my phone, to check that it's all running. >> Right, so given how we make the environment consistent on both sides, now we're able to create a very simple DR experience, including one-click failover and failback. But we're going to show you test now. So Melina, let's talk about test, because that's one of the most common operations you would do. Like some of our customers do it every month. But usually it's very hard. So let's see what the experience looks like in what we built. >> Sure. Test and failover are both one-click experiences, as you know and have come to expect from Nutanix. You can see it's failing over from my primary location to my recovery location. Now what we're doing right now is running a series of validation checks, because we want to make sure that you have your network configured properly and that the other configuration details are in place for the test to be successful. Looks like the failover was initiated successfully. Now while that failover's happening, though, let's make sure that I'm going to be able to access my three tier web app once it fails over. We'll do that by looking at the network policies that I've configured on my test network, because I want to access the application from the public internet, but only on port 80.
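Checking that kind of exposure from the outside is itself a one-minute exercise. A minimal sketch that probes a handful of ports against the failed-over app; the floating IP below is a hypothetical placeholder, not the demo's address:

```python
# Minimal sketch: verify from outside that only port 80 of the failed-over
# app is reachable. The floating IP is a hypothetical placeholder.
import socket

FLOATING_IP = "203.0.113.100"        # hypothetical public floating IP
PORTS_TO_CHECK = [80, 443, 22, 3306]

for port in PORTS_TO_CHECK:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        reachable = sock.connect_ex((FLOATING_IP, port)) == 0
    finally:
        sock.close()
    print(f"port {port}: {'open' if reachable else 'closed/filtered'}")
```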
And if we look here under our policies, you can see I have port 80 set to permit. So that's good. And if I needed to create a new one, I could, in one click. But it looks like we're good to go. Let's go back and check the status of my recovery plan. We click in, and what's really cool here is you can actually see the individual tasks as they're being completed, from that initial validation test to individual VMs being powered on as part of the recovery plan. >> And to give you guys an idea behind the scenes, the entire recovery plan is actually a set of workflows that are built on Calm's automation engine. So this is an example of where we're taking some of the power of workflow and automation that Calm has come to be really strong at and burning that into how we actually operationalize many of these workflows for Xi. >> And so, great, while you were explaining that, my three tier web app has restarted here on Xi right in front of you. And you can see here there's the floating IP that I mentioned earlier, that .100 IP address. But let's go ahead and launch the console and make sure the application started up correctly. >> Vinny: Yeah, so that .100 IP address is a floating IP that's a publicly visible IP. So it's listed here, 206.80.146.100. So essentially anybody in the audience here can use your laptop or your cell phone, hit that, and see it work. >> Yeah, so by the way, just to give you guys an idea, while you guys maybe use the IP to hit it, this is a real set of VMs that we've just failed over from Nutanix's corporate data center into our West region. >> And this is running live on the Xi cloud. >> Yeah, you guys should all go and vote. I'm a little biased towards Xi, so vote for Xi. But all of them are really good features. >> Scroll up a little bit. Let's see where Xi is. >> Oh Xi's here. I'll scroll down a little bit, but keep the... >> Vinny: Yes. >> Sunil: You guys written a block or something? >> Melina: Oh good, it looks like Xi's winning. >> Sunil: Okay, great job, Melina. Thank you so much. >> Thank you, Melina. >> Melina: Thanks. >> Thank you, great job. Cool and calm under pressure. That's good. So that was Xi. Here's something else we've been doing, in addition to taking, say, our own extended enterprise public cloud with Xi. You know, we do recognize that there are a ton of workloads that are going to be residing on AWS, GCP, Azure. And to really assist in the, call it, transformation of enterprises to choose the right cloud for the right workload, if you guys remember, we actually invested in a tool over the last year which became one of those products that took off based on, you know, a groundswell movement. Most of you guys started using it. It's essentially Xtract for VMs. And it was this product that's obviously free. It's a tool. But it enables customers to save tons of time to actually migrate from legacy environments to Nutanix. So we took that same framework and obviously re-platformed it for the multi-cloud world, to solve the problem of migrating from AWS or GCP to Nutanix or vice versa. >> Right, so you know, Sunil, as you said, moving from a private cloud to the public cloud is a lift and shift, and it's a hard operation, you know. But moving back is not only expensive, it's a very hard problem. None of the cloud vendors provide change block tracking capability.
And what that means is, when you have to move back from the cloud, you have an extended period of downtime, because there's no way of figuring out what's changing while you're moving. So you have to keep it down. So what we've done with our app mobility product is we have made sure that, one, it's extremely simple to move back, and two, that the downtime you'll have is as small as possible. So let me show you what we've done. >> Got it. >> So here is our app mobility capability. As you can see, on the left hand side we have a source environment and a target environment. So I'm calling my AWS environment Asgard. And I can add more environments. It's very simple. I can select AWS and then put in my credentials for AWS. It essentially goes and discovers all the VMs that are running, in all the regions they're running in. Target environment, this is my Nutanix environment. I call it Earth. And I can add a target environment similarly: IP address and credentials, and we do the rest. Right, okay. Now, migration plans. I have Bifrost 1 as my migration plan, and this is how migration works. First you create a plan and then say start seeding. And what it does is take a snapshot of what's running in the cloud and start migrating it to on-prem. Once it is on-prem and the difference between the two sides is minimal, it says I'm ready to cut over. At that time, you move it. But let me show you how you'd create a new migration plan. So let me name it, Bifrost 2. Okay, so what I have to do is select a region, so US West 1, and target Earth as my cluster. This is my storage container there. And very quickly you can see these are the VMs that are running in US West 1 in AWS. I can select SQL Server one and two, and go to next. Right now it's looking at the target Nutanix environment and seeing whether it has enough space or not. Once that's good, it gives me an option. And this is the step where it enables the Nutanix service of change block tracking, overlaid on top of the cloud. There are two options: one is automatic, where you'll give us the credentials for your VMs and we'll inject our capability there. Or you can do it manually: you copy the command into either a Windows VM or a Linux VM and run it once on the VM, and change block tracking is enabled from then on. Everything is seamless after that. Hit next. >> And while Vinny's setting it up, he said a few things there. I don't know if you guys caught it. One of the hardest problems in enabling seamless migration from public cloud to on-prem, which makes it harder than the other way around, is the fact that public cloud doesn't have things like change block tracking. You can't get delta copies. So one of the core innovations being built into this app mobility product is to provide that overlay capability across multiple clouds. >> Yeah, and the last step here was to select the target network where the VMs will come up on the Nutanix environment, and this is a summary of the migration plan. You can start it or just save it. I'm saving it because it takes time to do the seeding. I have the other plan which I'll actually show the cutover with. Okay, so now this is Bifrost 1. It's ready to cut over. We started it four hours ago. And here you can see there's a SQL Server 003. Okay, now I would like to show the AWS environment. As you can see, SQL Server 003. This VM is actually running in AWS right now. And if you go to the Prism environment, and if my login works, right? So we can go into the virtual machine view, tables, and you see the VM is not there.
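The change block tracking overlay Vinny and Sunil describe is what keeps the final cutover window small: after the initial seed, only blocks that changed since the last pass have to be shipped. Here is a minimal sketch of that idea only, not the product's implementation, hashing fixed-size blocks of a disk image and reporting which ones changed between passes:

```python
# Minimal sketch of the change-block-tracking idea: hash fixed-size blocks and
# ship only the ones whose hashes changed since the previous pass.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks

def block_hashes(path: str) -> dict[int, str]:
    """Return {block_index: sha256} for every block in the image file."""
    hashes = {}
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(BLOCK_SIZE):
            hashes[index] = hashlib.sha256(chunk).hexdigest()
            index += 1
    return hashes

def changed_blocks(previous: dict[int, str], current: dict[int, str]) -> list[int]:
    """Blocks that are new or whose contents changed since the last sync."""
    return [i for i, h in current.items() if previous.get(i) != h]

# Usage sketch (file name hypothetical): seed once, then each incremental pass
# only ships the changed blocks, so the final cutover delta stays tiny.
# prev = block_hashes("disk.img")      # initial seeding pass
# curr = block_hashes("disk.img")      # later pass
# print(changed_blocks(prev, curr))    # e.g. [] if nothing changed
```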
Okay, so we go back to this, and we can hit cutover. So this is essentially telling our system, okay, now it's time. Quiesce the VM running in AWS, take the last bit of changes that you have to the database, ship them to on-prem, and on-prem configure the target VM and start bringing it up. So let's go and look at AWS and refresh that screen. And you should see, okay, the SQL server is now stopping. So that means it has quiesced and is stopping the VM there. If you go back and look at the migration plan that we had, it says it's completed. So it has actually migrated all the data to the on-prem side. Go here on-prem, and you see the production SQL server is running already. I can click launch console, and let's see. The Windows VM is already booting up. >> So essentially what Vinny just showed was a live cutover of an AWS VM to Nutanix on-premises. >> Yeah, and that's what we've done. (audience clapping) So essentially, this is about making two things possible: making it simple to migrate from cloud to on-prem, and making it painless so that the downtime you have is very minimal. >> Got it, great job, Vinny. I won't forget your name again. So, last step. To really talk about this, one of our favorite partners and customers has been in the cloud environment for a long time. And you know Jason, who's the CTO of Cyxtera, and he'll introduce who Cyxtera is. Most of you guys are probably using their assets, maybe without knowing the new name. But Jason is someone who was in the cloud before it was called cloud, as one of the original founders and technologists behind Terremark, and then later as one of the chief architects of VMware's cloud. And then they started this new company about a year or so ago, which I'll let Jason talk about. The journey he's going to talk about is how a partner slash customer is working with us to deliver net new transformations around the traditional industry of colo. Okay, to talk more about it, Jason, why don't you come up on stage, man? (rock music) Thank you, sir. All right, so Cyxtera, obviously a lot of people don't know the name. Maybe just give a 10 second summary of why you're so big already. >> Sure, so Cyxtera was formed, as you said, about a year ago through the acquisition of the CenturyLink data centers. >> Sunil: Which includes Savvis and a whole bunch of other assets. >> Yeah, there's a long history of those data centers, but we have all of them now, as well as the software companies owned by Medina Capital. So we're like the world's biggest startup now. We have over 50 data centers around the world, about 3,500 customers, and a portfolio of security and analytics software. >> Sunil: Got it, and so you have this strategy of what we're calling revolutionizing colo to deliver a cloud-based-- >> Yeah, so colo hasn't really changed a lot in the last 20 years. And to be fair, a lot of what happens in data centers has to have a person physically go and do it. But there are some things that we can simplify and automate. So we want to make things more software driven, and that's what we're doing with the Cyxtera extensible data center, or CXD. And to do that, we're deploying software-defined networks in our facilities and developing automations so customers can go and provision data center services and network connectivity through a portal or through REST APIs. >> Got it, and what's different now?
I know there's a whole bunch of benefits with the integrated platform that one would not get in the traditional kind of on-demand data center environment. >> Sure. So one of the first services we're launching on CXD is compute on demand, and it's powered by Nutanix. And we had to pick an HCI partner to launch with. We looked at players in the space, and as you mentioned, there's actually a lot of them, more than I thought. We had a lot of conversations, did a lot of testing in the lab, and Nutanix really stood out as the best choice. You know, Nutanix has a lot of focus on things like ease of deployment, so it's very simple for us to automate deploying compute for customers. We can use Foundation APIs to go configure the servers, and then we turn those over to the customer, which they can then manage through Prism. And something important to keep in mind here is that this isn't a managed service. This isn't infrastructure as a service. The customer has complete control over the Nutanix platform. So we're turning that over to them. It's connected to their network. They're using their IP addresses, their tools and processes to operate this. So it was really important for the platform we picked to have a really good self-service story for things like lifecycle management. So with one-click upgrade, customers have total control over patches and upgrades. They don't have to call us to do it; they can drive that themselves. >> Got it. Any final words around where you see the partnership going forward? >> Well, you know, I think this would be a great platform for Xi, so I think we should probably talk about that. >> Yeah, yeah, we should talk about that separately. Thanks a lot, Jason. >> Thanks. >> All right, man. (audience clapping) So as we look at the full journey now, from invisible infrastructure to invisible clouds, there is one thing to take away beyond the many updates that we've had so far. And the fact is that everything I've talked about so far is about completing a full-blown, true IaaS stack, all the way from compute to storage, to virtualization, containers, and network services, and so forth. But every public cloud, a true cloud in that sense, has a full-blown layer of services that sits on top, either for traditional workloads or for new workloads, whether it be machine learning, whether it be big data, you name it, right? And in the enterprise, if you think about it, many of these services are being provisioned or provided through a bunch of our partners. Like we have partnerships with Cloudera for big data and so forth. But based on some customer feedback and a lot of what we've seen play out in the industry, just like AWS, and GCP, and Azure, it's time for Nutanix to have an opinionated view of the PaaS stack. It's time for us to move up the stack with our own offering that obviously adds value but brings some of our core competencies in data and takes it to the next level. And it's in that sense that we're actually launching Nutanix Era to simplify one of the hardest problems in enterprise IT. Short of saving you from Oracle licensing itself, it solves various other Oracle problems by truly simplifying databases, much like what RDS did on AWS. Imagine enterprise RDS on demand, where you can provision and lifecycle manage your database with one click.
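To make that "enterprise RDS" analogy a little more concrete, here is a hedged sketch of what an intent-based, one-click provisioning request could look like over REST. The host, endpoint, and payload fields are hypothetical illustrations, not the actual Nutanix Era API; the idea is simply that the caller states what it wants (engine, availability, profiles, protection SLA) rather than how to build it.

    # Hypothetical illustration of an intent-based database provisioning call;
    # the host, endpoint, and field names are assumptions, not the real Era API.
    import requests

    ERA_URL = "https://era.example.internal/api"   # assumed address of an Era-style service

    request_body = {
        "engine": "oracle",
        "topology": "2-node-rac",            # "highly available" expressed as intent
        "compute_profile": "gold",           # profiles pre-defined by DBAs and admins
        "network_profile": "prod-db-vlan",
        "software_profile": "oracle-12c",
        "protection_sla": "continuous-30d",  # Time Machine style protection policy
    }

    resp = requests.post(f"{ERA_URL}/databases", json=request_body, timeout=30)
    resp.raise_for_status()
    print("provisioning task started:", resp.json().get("task_id"))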
And to talk about this powerful new functionality, let me invite Bala and John on stage to give you one final demo. (rock music) Good to see you guys. >> Yep, thank you. >> All right, so we've got lots of folks here. They're all anxious to get to the next level. So this demo, really rock it. So what are we going to talk about? We're going to start with, say, maybe some database provisioning? Do you want to set it up? >> We have one dream, Sunil, one single dream: what Nutanix is today for IT, we want to recreate that magic for devops, and give DBAs back those weekends and their freedom. >> Got it. Let's start with, what, provisioning? >> Bala: Yep, John. >> Yeah, we're going to get into provisioning. So provisioning databases inside the enterprise is a significant undertaking that usually involves a myriad of resources and can take days. It doesn't get any easier after that, with long-term maintenance things like upgrades and environment refreshes and so on. Bala and team have been working on this challenge for quite a while now. So we've architected Nutanix Era to cater to these enterprise use cases and make it one-click, like you said. And Bala and I are so excited to finally show this to the world. We think it's actually one of Nutanix's best kept secrets. >> Got it, all right man, let's take a look at it. >> So we're going to be provisioning a sales database today. It's a four-step workflow. The first part is choosing our database engine. And since it's our sales database, we want it to be highly available, so we'll do a two-node RAC configuration. From there, it asks us where we want to land this service. We can either land it on an existing service that's already been provisioned, or, if we're starting net new or for whatever reason, we can create a new service for it. The key thing here is we're not asking anybody how to do the work, we're asking what work you want done. And the other key thing here is we've architected this concept called profiles. So you tell us how many resources you need, as well as what network type you want and what software revision you want. This is actually controlled by the DBAs. So DBAs, compute administrators, and network administrators can each set their standards up front. >> Sunil: Got it, okay, let's take a look. >> John: So if we go to the next piece here, it's going to personalize their database. The key thing here, again, is that we're not asking you how many data files you want or anything in that regard. We're going to be provisioning this to Nutanix best practices. And the key thing there is, just like these PaaS services, you don't have to read dozens of pages of best practice guides; it just does what's best for the platform. >> Sunil: Got it. And so these are a multitude of provisioning steps that would normally take, I guess, hours if not days to provision an Oracle RAC database. >> John: Yeah, across multiple teams too. So if you think about the lifecycle, especially if you have onshore and offshore resources, I mean, this might even be longer than days. >> Sunil: Got it. And then there are a few steps here, and we'll lead into potentially the Time Machine construct too? >> John: Yeah, so since this is a critical database, we want data protection. So we're going to be delivering that through a feature called Time Machine.
We'll leave this at the defaults for now, but the key thing to note here is we've got SLAs that deliver both continuous data protection as well as telescoping checkpoints for historical recovery. >> Sunil: Got it. So that's provisioning. We've kicked off Oracle, what, a two-node database and so forth? >> John: Yep, a two-node database. So we've got a handful of tasks that this is going to automate. We'll check back in in a few minutes. >> Got it. Why don't we talk about the other aspects then, Bala. One of the things, and I know many of you guys have seen this, is that if you look at databases, especially Oracle but in general even SQL and so forth, if you really simplified it for a developer, it should be as simple as: I copy my production database, and I paste it to create my own dev instance. And whenever I need to, I obviously do it the opposite way, right? So that was the goal we set ahead of us, to actually deliver this new PaaS service around Era for our customers. So do you want to talk a little bit more about it? >> Sure, Sunil. If you look at most data management functionality, it's pretty much flavors of copy-paste operations on database entities. But the trouble is that these seemingly simple, innocuous operations of our daily lives become the most dreaded, complex, long-running, error-prone operations in the data center. So we actually planned to tame this complexity and bring consumer-grade simplicity to these operations, and also make these clones extremely efficient without compromising the quality of service. And the best part is, customers can enjoy these services not only for databases running on Nutanix, but also for databases running on third-party systems. >> Got it. So let's take a look at this functionality of, I guess, snapshotting, cloning, and recovery that you've now built into the product. >> Right. So the core feature of this whole product is something we call Time Machine. Time Machine lets database administrators capture the database state down to the granularity of seconds, and also lets them create clones, refresh them to any point in time, and recover the databases if the databases are running on the same Nutanix platform. Let's take a look at the demo with the Time Machine. So here is our customer relationship management database, which is about 2.3 terabytes. The Time Machine has been active for about four months, and the SLA has been set for continuous data protection of 30 days, which then slowly tapers off into 30 days of daily backups, and weekly backups, and so on and so forth. On the right-hand side, you will see different colors. The green is pretty much your continuous data protection, as we call it. That lets you go back to any point in time, to the granularity of seconds, within those 30 days. And then the discrete checkpoints let you go back to any snapshot of the backups that are maintained there. In a way, this Time Machine is pretty much like your modern-day car with self-driving ability. All you need to do is set the goals, and the Time Machine will do whatever is needed to reach the goal. >> Sunil: So why don't we quickly do a snapshot? >> Bala: Yeah, sometimes you need to create a snapshot for backup purposes, and Time Machine has manual controls for that. All you need to do is give it a snapshot name.
And then you have the ability to actually persist this snapshot data into a third-party or object store, so that your durability and global data access requirements are met. So we kick off a snapshot operation. Let's look at what it is doing. If you look at the steps the snapshot operation is going through, there is a step called quiescing the database. Basically, we're using application-centric APIs, and here it's actually Oracle's RMAN. We are using Oracle's RMAN to quiesce the database and perform application-consistent storage snapshots with Nutanix technology. Basically, we are fusing the application-centric piece with the Nutanix platform by quiescing it. Just as a data point, if you had to use traditional technology to create a backup of this size, it would take over four to six hours, whereas on Nutanix it's going to be a matter of seconds. So it almost looks like the snapshot is done. This is a fully consistent backup; you can pretty much use it for database restores. Maybe we'll do a clone demo and see how it goes. >> John: Yeah, let's go check it out. >> Bala: So for a clone, again with that copy-paste simplicity, all you need to do is pick the time of your choice, maybe around three o'clock in the morning today. >> John: Yeah, let's go with 3:02. >> Bala: 3:02, okay. >> John: Yeah, why not? >> Bala: You select the time, and all you need to do is click on clone. Most of the inputs that are needed for the clone process will be defaulted intelligently by us, right? And you have a couple of choices to make, like where you want this clone to be created: on a brand new database server VM, or placed on your existing server? We'll go with a brand new server, and then all you need to do is just give the password for your new clone database, and then clone it. >> Sunil: And this is an example of personalizing the database, so a developer can do that. >> Bala: Right. So here is the clone kicking in. What this is doing is actually creating a database VM, then registering the database, restoring the snapshot, and then recovering the logs up to three o'clock in the morning, like we just selected, and then giving the database back to the requester. >> Maybe one final thing, John. Do you want to show us the provisioned database that we kicked off? >> Yeah, it looks like it just finished a few seconds ago. So you can see all the tasks that we were talking about before, from creating the virtual infrastructure, to provisioning the database infrastructure, to configuring data protection. So I can go access this database now. >> Again, just to highlight this, guys. What we just showed you is an Oracle two-node instance provisioned live in a few minutes on Nutanix. And this is something that even in a public cloud, when you go to RDS on AWS or anything like that, you still can't provision Oracle RAC, by the way, right? But that's what you've seen now, and that's the power of Nutanix Era. Okay, all right? >> Thank you. >> Thanks. (audience clapping) >> And one final thing: obviously, when we're building this, it's built as a PaaS service. It's not meant just for operational benefits. And so one of the core design principles has been around being API first. Do you want to show that a little bit? >> Absolutely, Sunil, this whole product is built on an API-first architecture.
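The ordering Bala walks through a moment earlier, quiesce through the database's own tooling, take a near-instant storage-level snapshot, then release, is the essence of an application-consistent snapshot, and since the product is API-first it's the kind of step you could drive programmatically. Below is a minimal conceptual sketch; the print calls are purely hypothetical stand-ins for the RMAN and storage operations, not real APIs.

    # Conceptual sketch of an application-consistent snapshot -- the print calls are
    # hypothetical stand-ins, not real RMAN or Nutanix storage APIs.
    from contextlib import contextmanager

    @contextmanager
    def quiesced(database: str):
        """Hold the database in a consistent, write-suspended state for the duration."""
        print(f"RMAN-style quiesce of {database}")    # flush buffers, suspend writes
        try:
            yield
        finally:
            print(f"resuming writes on {database}")   # keep the window as short as possible

    def app_consistent_snapshot(database: str, name: str) -> None:
        with quiesced(database):
            # the storage snapshot itself is near-instant, so the quiesce window stays tiny
            print(f"storage-level snapshot '{name}' of {database}")

    app_consistent_snapshot("sales_crm", "pre-refresh-0302")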
Pretty much everything we have seen today, all the functionality that we've been able to show, is built on REST APIs, and you can pretty much integrate with a ServiceNow architecture and give your customers that devops experience. We do have a plan for a full-fledged self-service portal eventually, to make it a proper service. >> Got it, great job, Bala. >> Thank you. >> Thanks, John. Good stuff, man. >> Thanks. >> All right. (audience clapping) So with Nutanix Era being this one-click provisioning and lifecycle management powered by APIs, I think what you're going to see is that a lot of the products we've talked about so far, things like Calm, Flow, and the AHV functionality, have all been released in 5.5 and 5.6, and a bunch of the other stuff is also coming shortly. So I would strongly encourage you guys to check them out; most of these products that we've talked about, in fact all of the products that we've talked about, are going to be in the breakout sessions. We're going to go deep into them in the demos as well as in the pods. So spend some quality time not just on the stuff that's been shipping but also on the stuff that's coming out. And one thing to keep in mind as a takeaway is that we're doing all of this, obviously, with freedom as the goal. But from the product side, it has to be driven by choice, whether that choice is based on platforms, on hypervisors, or on consumption models. And even though we're starting with the management plane, eventually we'll go to the data plane of how we actually provide multi-cloud choice as well. And so as we wrap things up and look at the five freedoms that Ben talked about, don't forget the sixth freedom, especially after six to seven p.m., where the whole goal as a Nutanix family and extended family is to make sure we mix it up. Okay, thank you so much, and we'll see you around. (audience clapping) >> PA Announcer: Ladies and gentlemen, this concludes our morning keynote session. Breakouts will begin in 15 minutes. ♪ To do what I want ♪

Published Date: May 9, 2018
