James Bryan, Dell Technologies & Heather Rahill, Dell Technologies | MWC Barcelona 2023


 

>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (bright music) >> Hey everyone! Welcome back. Good evening from Barcelona, Spain. It's theCUBE, the leader in live tech coverage. As you well know, Lisa Martin and Dave Nicholson. Day two of our coverage of MWC 23. Dave, we've been talking about sexy stuff all day. It's about to get, we're bringing sexy back. >> It's about to get hot. >> It's about to get hot. We've had two guests with us, two senior consultants from the product planning, networking and emerging server solutions group at Dell, Heather Rahill and James Bryan. Welcome, guys. >> Thanks for having us. >> Thanks for having us. >> Really appreciate it. >> Lisa: Dude, you're bringing sexy back. >> I know. We are. We are. We wanted to bring it, yes. >> This is, like, the XR8000. >> We've been talking about this all day. It's here... >> Yes. Yes. Talk to us about why this is so innovative. >> So, actually we wanted to bring this; it's getting a lot of attention here on site. Matter of fact, we even have a lot of our competition taking pictures of it. And why is it so innovative? So one of the things that we've done here is we've taken a lot of insights and feedback from our customers that are looking at 5G deployments and looking at how do they, basically, bring commercial off-the-shelf to a very proprietary industry. So what we've done is we've built a very flexible and scalable form factor in the XR8000. And so this is actually a product that we've purposely built for the telecommunications space. Specifically, it can be deployed serving a virtual DU or CU at a cell site for distributed RAN. Or it can be put in a local data center, but outside a main data center, to support centralized RAN. We'll get into it, which is where the real excitement is: it's sled-based in its design. And so because of that, it enables us to provide both functionality for telecommunications, could be network, could be enterprise edge, as well as being designed to be configured to whatever that workload is, and be cost-optimized for whatever that workload is. >> Ah, you're killing us! Let's see. Show, show it to us. >> Actually this is where I have to hand it off to my colleague Heather. But what I really want to show you here is the flexibility that we have and the scalability. So, right here what I'm going to show you first is a 1U sled. So I'll set that out here, and I'll let Heather tell us all about it. >> Yeah. So XR8000. Let's talk about flexibility first. So the chassis is a 2U chassis with a hot-swap shared power supply on the right. Within it there are two form factors for the sleds. What James brought out here, this is the 1U form factor. Each sled features one node, or one CPU, per sled. So we're calling the 1U the highest, highest-density sled, right? 'Cause you can have up to four one-node 1U sleds in the chassis. The other form factor is a 2U sled, on the right here. And that's just really building on top of the 1U sled, adding two PCIe slots on top. So this is really our general purpose sled. You could have up to two of these sleds within the chassis. So what's really cool about the flexibility is you can plug and play with these. So you could have two 1Us, two 2Us, or a mix and match of each of those. >> Talk about the catalyst to build this for telco and some of the emerging trends that you guys have seen and said this needs to be purpose-built for the telco.
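For readers who want to play with the mix-and-match sled math Heather describes above, here is a minimal illustrative sketch in Python. It is not Dell tooling; the slot-accounting model (the 2U chassis treated as four 1U-equivalent bays, with a 1U sled using one bay and a 2U sled using two) is an assumption drawn from the description in the conversation.

```python
# Illustrative sketch only, not Dell tooling. Assumption: the 2U chassis offers
# four 1U-equivalent bays; a 1U sled uses one bay, a 2U sled uses two.

CHASSIS_BAYS = 4  # assumed 1U-equivalent bays in the 2U XR8000 chassis

def fits(one_u_sleds: int, two_u_sleds: int) -> bool:
    """Return True if the proposed sled mix fits in the chassis."""
    bays_used = one_u_sleds + 2 * two_u_sleds
    return (
        0 <= one_u_sleds <= 4       # up to four single-node 1U sleds
        and 0 <= two_u_sleds <= 2   # up to two 2U (PCIe expansion) sleds
        and bays_used <= CHASSIS_BAYS
    )

# Enumerate every non-empty mix-and-match combination that fits
for one_u in range(5):
    for two_u in range(3):
        if (one_u or two_u) and fits(one_u, two_u):
            print(f"{one_u} x 1U sled(s) + {two_u} x 2U sled(s)")
```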
There's so much challenge and complexity there, they need this. >> Want me to take this? So actually that, that's a great question by the way. It turns out that the market's growing. It's nascent right now. Different telecommunication providers have different needs. Their workloads are different. So they're looking for a form factor like this that, when we say flexible, they need to be able to configure it for theirs. They don't all configure the same way. And so they're looking for something that they can configure to their needs, but they also don't want to pay for things that they don't need. And so that's what led to the creation of this device the way we've created it. >> How is it specific for edge use cases, though? We think of the edge: it's emerging, it's burgeoning. What makes this so specific to edge use cases? >> Yeah, let's talk about some of the ruggedized features of the product. So first of all, it is short depth. So only 430 millimeters. And this is designed for extreme temperatures, really for any environment. So the normal operating temperatures are negative 5 to 55, but we've also developed an enhanced heat sink to get us even beyond that. >> Dave: That's Celsius? >> Celsius. Thank you. >> Lisa: Right. So this will get us all the way down to negative 20 C boot and operating, all the way up to 65 C. So this is one of the most extreme temperature edge offerings we've seen on the market so far. >> And so this is all outside the data center, so not your typical data center server. So not only are we getting those capabilities, but half the size when you look at a typical data center server. >> So these can go into a place where there's a rack, maybe, but it definitely doesn't have to be a raised floor for... >> Could be a cell site cabinet. >> Yeah. Okay. >> Heather: Yeah. And we also have AC and DC power options that can be changed over time as well. >> So what can you pack into that one 1U sled in terms of CPU cores and memory, just as an example? >> Yeah, great. So, each of the sleds will support fourth-generation Intel Sapphire Rapids, up to 32 cores. They'll also be supporting their new vRAN Boost SKUs. And the benefit of those is it has an integrated FEC accelerator within the CPU. Traditionally, to get FEC acceleration, you would need a PCIe card that would take up one of the slots here. Now with it integrated, you're freeing up a PCIe slot, and there's also a power savings involved with that as well. >> So talk about the involvement of the telco customer here in the design. I know Dell is very tight with its customers. I imagine there was a lot of communications and collaboration with customers to deliver this. >> Interesting question. So it turns out that early on, we had had some initial insight, but it was actually through deep engagement with our customers that we actually redesigned the form factor to what you see here today. So we actually spent a significant amount of time with various telecommunication customers from around the world, and they had a very strong influence in this form factor. Even to the point, like Lisa mentioned, we ended up redesigning it. >> Do, do you have a sense for how many of these, or in what kinds of configurations would you deploy in like the typical BBU? So if we're thinking about radio access network, literally tran- tower transmitter receiver... somewhere down there in a cabinet, you have one of these, you have multiple units. I know, I know the answer is "it depends". >> You are right.
>> But if, but if someone tells you, well you know, we have 20, 20 cellular sites, and we need... we're moving to an open model, and we need the horsepower to do what we want to do. I'm trying to, I'm trying to gauge like what, one of these, what does that, what does that mean? Or is it more like four of these? >> So that, so we'll go >> It depends? >> Yeah it depends, you're absolutely right. However, we can go right there. So if you look in the 2U >> Yeah. >> we have three PCIe slots, you know, as Heather mentioned. And so let's say you have a typical cell site, right? We'd be able to support a cell site that could have three radios in the configuration here, and you could multiply by three, right? It could have up to 18 radios, and we could actually support that. We could support multiple form factors or multiple deployments at a particular cell site. It really then, to your point, it does depend, and that's one of the reasons that we've designed it the way we have. For example, if a customer says their initial deployment, they only need one compute node because maybe they're only going to have, you know, two or three carriers. So then, there, you've got maybe six or eight or nine radios. Well then, you put in a single node, but then they may want to scale over time. Well then, you actually have a chassis. They just come in, and they put in a new sled. The other beauty of that is that maybe they wait, but then they want to do new technology. They don't even have to buy a whole new server. They can update to the newest technology >> Heather: Yeah. >> same chassis, put that in, connect to the radios, and keep going. >> But in this chassis, is it fair to say that most people will be shocked by how much traffic can go through something like this? In the sense that, if a tower is servicing 'n' number of conversations and data streams, going through something like this? I mean, somehow it blows my mind to think of thousands of people accessing something and having them all routed through something like this. >> It, it'll depend on what they're doing with that data. So you've probably talked a lot about the type of radios, right? Are we going to be massive MIMO or what type of radio? Is it going to be a mix of 4G or 5G? So it'll really depend on that type of radio, and then where this is located. Is it in a dense urban environment, or is it in a rural type of environment at that cell site shelter, but out in a suburban area? So it will depend, but then, that's the beauty of this: I get the right CPU, I get the right number of add-in cards to connect to the right radios. I purchase what I need. I may scale to that. I may be in a growing part of the city, like where we're from, or where I'm from, or in San Diego where Heather's from, in a new suburb, and they put out a new tower and the community grows rapidly. Well then, they may put out one, and then you may add another one, and I can connect to more radios, more carriers. So it really just comes down to the type and what you're trying to put through that. It could be at a stadium where I may have a lot of people. I may have, like, video streaming, and other things. Not only could I provide network connectivity, but I could do other functions, like the multi-access edge compute that you've heard talked about here. So I could have a GPU processing information on one side. I could do network on the other side. >> I do, I do.
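To make the radio-count arithmetic in that answer concrete, here is a back-of-the-envelope sketch. The three PCIe slots per 2U sled come from the conversation; the figure of six radios per fronthaul NIC is an assumption chosen only so the numbers reproduce the quoted "up to 18 radios," since actual counts depend on the NIC and radio type.

```python
# Back-of-the-envelope sizing for the "up to 18 radios" figure above.
# Assumptions (not Dell specs): each 2U sled exposes 3 PCIe slots, and each
# fronthaul NIC in a slot can terminate up to 6 radios.

PCIE_SLOTS_PER_2U_SLED = 3    # stated in the conversation
RADIOS_PER_FRONTHAUL_NIC = 6  # assumed, chosen to reproduce the quoted 18

def max_radios(two_u_sleds: int) -> int:
    """Upper bound on radios the chassis could terminate with this many 2U sleds."""
    return two_u_sleds * PCIE_SLOTS_PER_2U_SLED * RADIOS_PER_FRONTHAUL_NIC

print(max_radios(1))  # 18: one 2U sled, matching the figure quoted above
print(max_radios(2))  # 36: both 2U bays populated
```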
>> Go for it. >> Yeah, no, no, I'm sorry. I'm sorry. I don't want to, don't want to hog all of the time. What about expansion beyond the chassis? Is there a scenario where you might load this chassis up with four of those nodes, but then because you need some type of external connectivity, you go to another chassis that has maybe some of these sleds? Or are these self-contained and independent of one another? >> They are all independent. >> Okay. >> So, and then we've done that for a reason. So one of the things that was clear from the customers, again and again and again, was cost, right? Total cost of ownership. So not only how much does this cost when I buy it from you, but what is it going to take to power and run it. And so basically we've designed that with that in mind. So we've separated the compute and isolated the compute from the chassis, from the power. So I only have to deal with this. And the other thing is, it's a sophisticated piece of equipment that people that would go out and service it are not used to. So they can just come out, pull it out without even bringing the system down. If they've got multiple nodes, pull it. They don't have to pull out a whole chassis or whole server. Put one in, connect it back up while the system is still running. If a power supply goes out, they can come and pull it out. We've got it designed with a power infrastructure such that if I lose one power supply, I'm not losing the whole system. So it's really that serviceability, total cost of ownership at the edge, which led us to do this as a configurable chassis. >> I was just going to ask you about TCO reduction, but another thing that I'm curious about is: there seems to be like a sustainability angle here. Is that something that you guys talk with customers about in terms of reducing footprint and being able to pack more in with less, reducing TCO, reducing storage and power consumption, that sort of thing? >> Go ahead. >> You want me to take that one as well? So yes, it varies by the customer, but it does come up. And matter of fact, in that vein, from a chassis perspective: especially now with the technology changing so fast, and customers still trying to figure out, well, is this how we're really going to deploy it? You basically can configure it, and maybe that doesn't work, so they reconfigure it. Or, as I mentioned earlier, I purchased a single sled today, and I purchased a chassis. Well then the next generation comes. I don't have to purchase a new chassis. I don't have to purchase a new power supply. So we're trying to address those sustainability issues as we go, you know, again, back to the whole TCO. So they're kind of related to some extent. >> Right. Right, right. Definitely. We hear a lot from customers in every industry about ESG, and it's an important initiative. So Dell being able to help facilitate that for customers, I'm sure, is part of what gives you that competitive advantage. But you talked about, James, that, and we talked about it in an earlier segment, that competitors are coming by, sniffing around your booth. What's going on? Talk about, from both of your lenses, the competitive advantage that you think this gives Dell in telco. Heather, we'll start with you. >> Heather: Yeah, I think the first one, which we've really been hitting home with, is the flexibility for scalability, right?
This is really designed for any workload, from AI and inferencing on, like, a factory floor all the way to the cell site. I don't know another server that could say that. All in one box, right? And the second thing is, really, all of the TCO savings that will happen, you know, immediately at the point of sale and also throughout the life cycle of this product, which is designed to have an extremely long lifetime compared to a traditional server. >> Yeah, I'll get a little geeky with you on that one. Heather mentioned that we'll be able to take this, eventually, to 65 C operating conditions. So we've even designed some of the thermal solutions enabling us to go there. That'll also help us become more power efficient. So, again, back to the flexibility, even on how we cool it, so it enables us to do that. >> So do, do you expect, you just mentioned, maybe if I heard you correctly, the idea that this might have a longer usable life than the average kind of refresh cycle we see in general IT. I mean, how often are they replacing equipment now in, kind of, legacy network environments? >> I believe the traditional life cycle of a server is, what? Three? Three to five years? Three to five years traditionally. And with the sled-based design, like James said, we'll be designing new sleds, you know, every year or two, that can just be plugged in and swapped out. So the chassis is really designed to live much longer than just three to five years. >> James: We're having customers ask for anywhere from seven years to when it dies. So a substantial increase in the life cycle as we move out, because, as you probably know, the further I get out on the edge, the more costly it is. >> Lisa: Yep. >> And I don't want to change it if I don't have to. And so something has to justify me changing it. And so we're trying to build to support both that longevity, but then with that longevity, things change. I mean, seven years is a long time in technology. >> Lisa: Yes it is. >> So we need to be there for those customers that are ready for that change, or something changed, and they want to still be able to adopt that without having to change a lot of their infrastructure. >> So customers are going to want to get their hands on this, obviously. We know, we can tell by your excitement. Is this GA now? Where is it GA, and where can folks go to learn more? >> Yeah, so we are here at Mobile World Congress in our booth. We've got a few featured here, and other booths throughout the venue. But if you're not here at Mobile World Congress, this will be launched live on the market at the end of May for Dell. >> Awesome. And what geographies? >> Worldwide. >> Worldwide. Get your hands on the XR8000. Worldwide in just a couple months. Guys, thank you >> James: Thank you very much. >> for the show and tell, talking to us about really why you're designing this for the telco edge, the importance there, what it's going to enable operators to achieve. We appreciate your time and your insights and your show and tell. >> Thanks! >> Thank you. >> For our guests and for Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from Barcelona, Spain, at MWC 23. Be back with our day two wrap with Dave Vellante and some guests in just a minute. (bright music)

Published Date : Feb 28 2023


Phil Brotherton, NetApp | Broadcom’s Acquisition of VMware


 

(upbeat music) >> Hello, this is Dave Vellante, and we're here to talk about the massive $61 billion planned acquisition of VMware by Broadcom. And I'm here with Phil Brotherton of NetApp to discuss the implications for customers, for the industry, and NetApp's particular point of view. Phil, welcome. Good to see you again. >> It's great to see you, Dave. >> So this topic has garnered a lot of conversation. What's your take on this epic event? What does it mean for the industry generally, and customers specifically? >> You know, I think time will tell a little bit, Dave. We're in the early days. We've, you know, so we heard the original announcements and then it's evolved a little bit, as we're going now. I think overall it'll be good for the ecosystem in the end. There's a lot you can do when you start combining what VMware can do with compute and some of the hardware assets of Broadcom. There's a lot of security things that can be brought, for example, to the infrastructure, that are very high-end and cool, and then integrated, so it's easy to do. So I think there's a lot of upside for it. There's obviously a lot of concern about what it means for vendor consolidation and pricing and things like that. So time will tell. >> You know, when this announcement first came out, I wrote a piece, you know, how "Broadcom will tame the VMware beast," I called it. And, you know, looked at Broadcom's history and said they're going to cut, they're going to raise prices, et cetera, et cetera. But I've seen a different tone, certainly, as Broadcom has got into the details. And I'm sure I and others maybe scared a lot of customers, but I think everybody's kind of calming down now. What are you hearing from customers about this acquisition? How are they thinking about it? >> You know, I think it varies. There's, I'd say generally we have like half our installed base, Dave, runs ESX Server, so the bulk of our customers use VMware, and generally they love VMware. And I'm talking mainly on-prem. We're just extending to the cloud now, really, at scale. And there's a lot of interest in continuing to do that, and that's really strong. The piece that's careful is this vendor, the cost issues that have come up. The things that were in your piece, actually. And what does that mean to me, and how do I balance that out? Those are the questions people are dealing with right now. >> Yeah, so there's obviously a lot of talk about the macro, the macro headwinds. Everybody's being a little cautious. The CIOs are tapping the brakes. We all sort of know that story. But we have some data from our partner ETR that ask, they go out every quarter and they survey, you know, 1500 or so IT practitioners, and they ask the ones that are planning to spend less, that are cutting, "How are you going to approach that? What's your primary methodology in terms of achieving, you know, cost optimization?" The number one, by far, answer was to consolidate redundant vendors. It was like, it's now up to about 40%. The second, distant second, was, "We're going to, you know, optimize cloud costs." You know, still significant, but it was really that consolidating the redundant vendors. Do you see that? How does NetApp fit into that? >> Yeah, that is an interesting, that's a very interesting bit of research, Dave. I think it's very right. One thing I would say is, because I've been in the infrastructure business in Silicon Valley now for 30 years. 
So these ups and downs are, that's a consistent thing in our industry, and I always think people should think of their infrastructure and cost management. That's always an issue, with infrastructure as cost management. What I've told customers forever is that when you look at cost management, our best customers at cost management are typically service providers. There's another aspect to cost management, is you want to automate as much as possible. And automation goes along with vendor consolidation, because how you automate different products, you don't want to have too many vendors in your layers. And what I mean by the layers of ecosystem, there's a storage layer, the network layer, the compute layer, like, the security layer, database layer, et cetera. When you think like that, everybody should pick their partners very carefully, per layer. And one last thought on this is, it's not like people are dumb, and not trying to do this. It's, when you look at what happens in the real world, acquisitions happen, things change as you go. And in these big customers, that's just normal, that things change. But you always have to have this push towards consolidating and picking your vendors very carefully. >> Also, just to follow up on that, I mean, you know, when you think about multi-cloud, and you mentioned, you know, you've got some big customers, they do a lot of M & A, it's kind of been multi-cloud by accident. "Oh, we got all these other tools and storage platforms and whatever it is." So where does NetApp fit in that whole consolidation equation? I'm thinking about, you know, cross-cloud services, which is a big VMware theme, thinking about a consistent experience, on-prem, hybrid, across the three big clouds, out to the edge. Where do you fit? >> So our view has been, and it was this view, and we extend it to the cloud, is that the data layer, so in our software, is called ONTAP, the data layer is a really important layer that provides a lot of efficiency. It only gets bigger, how you do compliance, how you do backup, DR, blah blah blah. All that data layer services needs to operate on-prem and on the clouds. So when you look at what we've done over the years, we've extended to all the clouds, our data layer. We've put controls, management tools, over the top, so that you can manage the entire data layer, on-prem and cloud, as one layer. And we're continuing to head down that path, 'cause we think that data layer is obviously the path to maximum ability to do compliance, maximum cost advantages, et cetera. So we've really been the company that set our sights on managing the data layer. Now, if you look at VMware, go up into the network layer, the compute layer, VMware is a great partner, and that's why we work with them so closely, is they're so perfect a fit for us, and they've been a great partner for 20 years for us, connecting those infrastructural data layers: compute, network, and storage. >> Well, just to stay on that for a second. I've seen recently, you kind of doubled down on your VMware alliance. You've got stuff at re:Invent I saw, with AWS, you're close to Azure, and I'm really talking about ONTAP, which is sort of an extension of what you were just talking about, Phil, which is, you know, it's kind of NetApp's storage operating system, if you will. It's a world class. But so, maybe talk about that relationship a little bit, and how you see it evolving. >> Well, so what we've been seeing consistently is, customers want to use the advantages of the cloud. So, point one. 
And when you have to completely refactor apps and all this stuff, it limits, it's friction. It limits what you can do, it raises costs. And what we did with VMware, VMware is this great platform for being able to run basically client-server apps on-prem and cloud, the exact same way. The problem is, when you have large data sets in the VMs, there's some cost issues and things, especially on the cloud. That drove us to work together, and do what we did. We GA-ed, we're the, so NetApp is the only independent storage, independent storage, say this right, independent storage platform certified to run with VMware cloud on Amazon. We GA-ed that last summer. We GA-ed with Azure, the Azure VMware service, a couple months ago. And you'll see news coming with GCP soon. And so the idea was, make it easy for customers to basically run in a hybrid model. And then if you back out and go, "What does that mean for you as a customer?", it's not saying you should go to the cloud, necessarily, or stay on-prem, or whatever. But it's giving you the flexibility to cost-optimize where you want to be. And from a data management point of view, ONTAP gives you the consistent data management, whichever way you decide to go. >> Yeah, so I've been following NetApp for decades, when you were Network Appliance, and I saw you go from kind of the workstation space into the enterprise. I saw you lean into virtualization really early on, and you've been a great VMware partner ever since. And you were early in cloud, so, sort of talking about, you know, that cross-cloud, what we call supercloud. I'm interested in what you're seeing in terms of specific actions that customers are taking. Like, I think about ELAs, and I think it's a two-edged sword. You know, should customers, you know, lean into ELAs right now? You know, what are you seeing there? You talked about, you know, sort of modernizing apps with things like Kubernetes, you know, cloud migration. What are some of the techniques that you're advising customers to take in the context of this acquisition? >> You know, so the basics of this are pretty easy. One is, and I think even Raghu, the CEO of VMware, has talked about this. Extending your ELA is probably a good idea. Like I said, customers love VMware, so having a commitment for a time, consistent cost management for a time is a good strategy. And I think that's why you're hearing ELA extensions being discussed. It's a good idea. The second part, and I think it goes to your surveys, that cost optimization point on the cloud is, moving to the cloud has huge advantages, but if you just kind of lift and shift, oftentimes the costs aren't realized the way you'd want. And the term "modernization," changing your app to use more Kubernetes, more cloud-native services, is often a consideration that goes into that. But that requires time. And you know, most companies have hundreds of apps, or thousands of apps, they have to consider modernizing. So you want to then think through the journey, what apps are going to move, what gets modernized, what gets lifted-shifted, how many data centers are you compressing? There's a lot of data center, the term I've been hearing is "data center evacuations," but data center consolidation. So that there's some even energy savings advantages sometimes with that. But the whole point, I mean, back up to my whole point, the whole point is having the infrastructure that gives you the flexibility to make the journey on your cost advantages and your business requirements. Not being forced to it. 
Like, it's not really a philosophy, it's more of a business optimization strategy. >> When you think about application modernization and Kubernetes, how does NetApp, you know, fit into that, as a data layer? >> Well, so if you kind of think, you said, like our journey, Dave, was, when we started our life, we were doing basically virtualization of volumes and things for technical customers. And the servers were always bare metal servers that we got involved with back then. This is, like, going back 20 years. Then everyone moved to VMs, and, like, it's probably, today, I mean, getting to your question in a second, but today, loosely, 20% bare metal servers, 80% virtual machines today. And containers is growing, now a big growing piece. So, if you will, sort of another level of virtual machines in containers. And containers were historically stateless, meaning the storage didn't have anything to do. Storage is always the stateful area in the architectures. But as containers are getting used more, stateful containers have become a big deal. So we've put a lot of emphasis into a product line we call Astra that is the world's best data management for containers. And that's both a cloud service and used on-prem in a lot of my customers. It's a big growth area. So that's what, when I say, like, one partner that can do data management, just, that's what we have to do. We have to keep moving with our customers to the type of data they want to store, and how do you store it most efficiently? Hey, one last thought on this is, where I really see this happening, there's a booming business right now in artificial intelligence, and we call it modern data analytics, but people combining big data lakes with AI, and that's where some of this, a lot of the container work comes in. We've extended objects, we have a thing we call file-object duality, to make it easy to bridge the old world of files to the new world of objects. Those all go hand in hand with app modernization. >> Yeah, it's a great thing about this industry. It never sits still. And you're right, it's- >> It's why I'm in it. >> Me too. Yeah, it's so much fun. There's always something. >> It is an abstraction layer. There's always going to be another abstraction layer. Serverless is another example. It's, you know, primarily stateless, that's probably going to, you know, change over time. All right, last question. In thinking about this Broadcom acquisition of VMware, in the macro climate, put a sort of bow on where NetApp fits into this equation. What's the value you bring in this context? >> Oh yeah, well it's like I said earlier, I think it's the data layer of, it's being the data layer that gives you what you guys call the supercloud, that gives you the ability to choose which cloud. Another thing, all customers are running at least two clouds, and you want to be able to pick and choose, and do it your way. So being the data layer, VMware is going to be in our infrastructures for at least as long as I'm in the computer business, Dave. I'm getting a little old. So maybe, you know, but "decades" I think is an easy prediction, and we plan to work with VMware very closely, along with our customers, as they extend from on-prem to hybrid cloud operations. That's where I think this will go. >> Yeah, and I think you're absolutely right. Look at the business case for migrating off of VMware. It just doesn't make sense. It works, it's world class, it recover... 
They've done so much amazing, you know, they used to be called, Moritz called it the software mainframe, right? And that's kind of what it is. I mean, it means it doesn't go down, right? And it supports virtually any application, you know, around the world, so. >> And I think getting back to your original point about your article, from the very beginning, is, I think Broadcom's really getting a sense of what they've bought, and it's going to be, hopefully, I think it'll be really a fun, another fun era in our business. >> Well, and you can drive EBIT a couple of ways. You can cut, okay, fine. And I'm sure there's some redundancies that they'll find. But there's also, you can drive top-line revenue. And you know, we've seen how, you know, EMC and then Dell used that growth from VMware to throw off free cash flow, and it was just, you know, funded so much, you know, innovation. So innovation is the key. Hock Tan has talked about that a lot. I think there's a perception that Broadcom, you know, doesn't invest in R & D. That's not true. I think they just get very focused with that investment. So, Phil, I really appreciate your time. Thanks so much for joining us. >> Thanks a lot, Dave. It's fun being here. >> Yeah, our pleasure. And thank you for watching theCUBE, your leader in enterprise and emerging tech coverage. (upbeat music)
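As an aside for readers, the "file-object duality" idea Phil mentions in the conversation can be pictured with a small, hypothetical Python sketch: the same dataset read once as a file over an assumed NFS mount and once as an object through an assumed S3-compatible endpoint. The paths, bucket, key, and endpoint are invented for illustration; this is not NetApp's API.

```python
# Hypothetical sketch of "file-object duality": the same dataset read once as a
# file (over an assumed NFS mount) and once as an object through an assumed
# S3-compatible endpoint. Names below are invented; this is not NetApp's API.
import boto3

NFS_PATH = "/mnt/datalake/sales/2023/orders.csv"    # assumed NFS mount path
S3_ENDPOINT = "https://objects.example.internal"    # assumed S3-compatible endpoint
BUCKET, KEY = "datalake", "sales/2023/orders.csv"   # assumed bucket and key

# File-side view: an ordinary POSIX read
with open(NFS_PATH, "rb") as f:
    file_bytes = f.read()

# Object-side view: the same data through the S3 API
s3 = boto3.client("s3", endpoint_url=S3_ENDPOINT)
object_bytes = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()

# If the duality holds, both views return identical content
assert file_bytes == object_bytes
```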

Published Date : Jan 31 2023


Analyst Predictions 2023: The Future of Data Management


 

(upbeat music) >> Hello, this is Dave Vellante with theCUBE, and one of the most gratifying aspects of my role as a host of "theCUBE TV" is I get to cover a wide range of topics. And quite often, we're able to bring to our program a level of expertise that allows us to more deeply explore and unpack some of the topics that we cover throughout the year. And one of our favorite topics, of course, is data. Now, in 2021, after being in isolation for the better part of two years, a group of industry analysts met up at AWS re:Invent and started a collaboration to look at the trends in data and predict what some likely outcomes will be for the coming year. And it resulted in a very popular session that we had last year focused on the future of data management. And I'm very excited and pleased to tell you that the 2023 edition of that predictions episode is back, and with me are five outstanding market analysts: Sanjeev Mohan of SanjMo, Tony Baer of dbInsight, Carl Olofson from IDC, Dave Menninger from Ventana Research, and Doug Henschen, VP and Principal Analyst at Constellation Research. Now, what is it that we're calling you guys? A data pack, like the Rat Pack? No, no, no, no, that's not it. It's the data crowd, the data crowd, and the crowd includes some of the best minds in the data analyst community. They'll discuss how data management is evolving and what listeners should prepare for in 2023. Guys, welcome back. Great to see you. >> Good to be here. >> Thank you. >> Thanks, Dave. (Tony and Dave speak faintly) >> All right, before we get into 2023 predictions, we thought it'd be good to do a look back at how we did in 2022 and give a transparent assessment of those predictions. So, let's get right into it. We're going to bring these up here, the predictions from 2022; they're color-coded red, yellow, and green to signify the degree of accuracy. And I'm pleased to report there's no red. Well, maybe some of you will want to debate that grading system. But as always, we want to be open, so you can decide for yourselves. So, we're going to ask each analyst to review their 2022 prediction and explain their rating and what evidence they have that led them to their conclusion. So, Sanjeev, please kick it off. Your prediction was data governance becomes key. I know that's going to knock you guys over, but elaborate, because you had more detail when you double-click on that. >> Yeah, absolutely. Thank you so much, Dave, for having us on the show today. And we self-graded ourselves. I could have very easily made my prediction from last year green, but I mentioned why I left it as yellow. I totally, fully believe that data governance was in a renaissance in 2022. And why do I say that? You have to look no further than AWS launching its own data catalog called DataZone. Before that, mid-year, we saw Unity Catalog from Databricks go GA. So, overall, I saw there was tremendous movement. When you see these big players launching a new data catalog, you know that they want to be in this space. And this space is highly critical to everything that I feel we will talk about in today's call. Also, if you look at established players, I spoke at Collibra's conference, data.world, and work closely with Alation, Informatica, a bunch of other companies; they all added tremendous new capabilities. So, it did become key. The reason I left it as yellow is because I had made a prediction that Collibra would go IPO, and it did not. And I don't think anyone is going IPO right now.
The market is really, really down, the funding and VC/IPO market. But other than that, data governance had a banner year in 2022. >> Yeah. Well, thank you for that. And of course, you saw data clean rooms being announced at AWS re:Invent, so more evidence. And I like the fact that you included in your predictions some things that were binary, so you dinged yourself there. So, good job. Okay, Tony Baer, you're up next. Data mesh hits reality check. As you see here, you've given yourself a bright green thumbs up. (Tony laughing) Okay. Let's hear why you feel that was the case. What do you mean by reality check? >> Okay. Thanks, Dave, for having us back again. This is something I just wrote and just tried to get away from, and this is just a topic that won't go away. I did speak with a number of folks, early adopters and non-adopters, during the year. And I did find that basically it pretty much validated what I was expecting, which was that there was a lot more, this has now become a front-burner issue. And if I had any doubt in my mind, the evidence I would point to is what was originally intended to be a throwaway post on LinkedIn, which I just quickly scribbled down the night before leaving for re:Invent. I was packing at the time, and for some reason, I was doing a Google search on data mesh. And I happened to have tripped across this ridiculous article, I will not say where, because it doesn't deserve any publicity, about the eight (Dave laughing) best data mesh software companies of 2022. (Tony laughing) One of my predictions was that you'd see data mesh washing. And I just quickly hopped on that, maybe three sentences, and wrote it in about a couple minutes, saying this is hogwash, essentially. (laughs) And that just... And then, I left for re:Invent. And the next night, when I got into my Vegas hotel room, I clicked on my computer. I saw 15,000 hits on that post, which was the most hits of any single post I put up all year. And the responses were wildly pro and con. So, it pretty much validates my expectation in that data mesh really did hit a lot more scrutiny over this past year. >> Yeah, thank you for that. I remember that article. I remember rolling my eyes when I saw it, and then recently, (Tony laughing) I talked to Walmart and they actually invoked Martin Fowler, and they said that they're working through their data mesh. So, it takes a lot of thought, and it really, as we've talked about, is really as much an organizational construct. You're not buying data mesh >> Bingo. >> to your point. Okay. Thank you, Tony. Carl Olofson, here we go. You've graded yourself a yellow on the prediction of "graph databases take off." Please elaborate. >> Yeah, sure. So, I realized in looking at the prediction that it seemed to imply that graph databases could be a major factor in the data world in 2022, which obviously didn't become the case. It was an error on my part in that I should have said it in the right context. It's really a three-to-five-year time period that graph databases will really become significant, because they still need accepted methodologies that can be applied in a business context as well as proper tools in order for people to be able to use them seriously. But I stand by the idea that it is taking off, because for one thing, Neo4j, which is the leading independent graph database provider, had a very good year.
And also, we're seeing interesting developments in terms of things like AWS with Neptune and Oracle providing graph support in Oracle Database this past year. Those things are, as I said, growing gradually. There are other companies, like TigerGraph and so forth, that deserve watching as well. But as far as becoming mainstream, it's going to be a few years before we get all the elements together to make that happen. Like any new technology, you have to create an environment in which ordinary people without a whole ton of technical training can actually apply the technology to solve business problems. >> Yeah, thank you for that. These specialized databases, graph databases, time-series databases, you see them embedded into mainstream data platforms, but there's a place for these specialized databases, and I would suspect we're going to see new types of databases emerge with all this cloud sprawl that we have, and maybe to the edge. >> Well, part of it is that it's not as specialized as you might think. You can apply graphs to a great many workloads and use cases. It's just that people have yet to fully explore and discover what those are. >> Yeah. >> And so, it's going to be a process. (laughs) >> All right, Dave Menninger, streaming data permeates the landscape. You gave yourself a yellow. Why? >> Well, I couldn't think of an appropriate combination of yellow and green. Maybe I should have used chartreuse, (Dave laughing) but I was probably a little hard on myself making it yellow. This is another type of specialized data processing, like the graph databases Carl was talking about: stream processing. And nearly every data platform offers streaming capabilities now. Often, it's based on Kafka. If you look at Confluent, their revenues have grown at more than 50% and continue to grow at more than 50% a year. They're expected to do more than half a billion dollars in revenue this year. But the thing that hasn't happened yet, and to be honest, they didn't necessarily expect it to happen in one year, is that streaming hasn't become the default way in which we deal with data. It's still a sidecar to data at rest. And I do expect that we'll continue to see streaming become more and more mainstream. I do expect, perhaps in the five-year timeframe, that we will first deal with data as streaming and then at rest, but the worlds are starting to merge. And we even see some vendors bringing products to market, such as K2View, Hazelcast, and RisingWave Labs. So, in addition to all those core data platform vendors adding these capabilities, there are new vendors approaching this market as well. >> I like the tough grading system, and it's not trivial. And when you talk to practitioners doing this stuff, there's still some complications in the data pipeline. But I think you're right, it probably was a yellow plus. Doug Henschen, data lakehouses will emerge as dominant. When you talk to people about lakehouses, practitioners, they all use that term. They certainly use the term data lake, but now they're using lakehouse more and more. What are your thoughts here? Why the green? What's your evidence there? >> Well, I think I was accurate. I spoke about it specifically as something that vendors would be pursuing. And we saw yet more lakehouse advocacy in 2022. Google introduced its BigLake service alongside BigQuery. Salesforce introduced Genie, which is really a lakehouse architecture.
And it was a safe prediction to say vendors are going to be pursuing this, in that AWS, Cloudera, Databricks, Microsoft, Oracle, SAP, Salesforce now, IBM, all advocate this idea of a single platform for all of your data. Now, the trend was also supported going into 2023, in that we saw a big embrace of Apache Iceberg in 2022. That's a structured table format. It's used with these lakehouse platforms. It's open, so it ensures portability, and it also ensures performance. And it's a structured table format that helps with the warehouse-side performance. But among those announcements, Snowflake, Google, Cloudera, SAP, Salesforce, IBM, all embraced Iceberg. But keep in mind, again, I'm talking about this as something that vendors are pursuing as their approach. So, they're advocating it to end users. It's very cutting edge. I'd say the top, leading-edge 5% of companies have really embraced the lakehouse. I think we're now seeing the fast followers, the next 20 to 25% of firms, embracing this idea and embracing a lakehouse architecture. I recall Christian Kleinerman at the big Snowflake event last summer making the announcement about Iceberg, and he asked for a show of hands: for any of you in the audience at the keynote, have you heard of Iceberg? And just a smattering of hands went up. So, the vendors are ahead of the curve. They're pushing this trend, and we're now seeing a little bit more mainstream uptake. >> Good. Doug, I was there. It was you, me, and I think two other hands were up. That was just humorous. (Doug laughing) All right, well, so I liked the fact that we had some yellow and some green. When you think about these things, there's the prediction itself. Did it come true or not? There are the sub-predictions that you guys make, and of course, the degree of difficulty. So, thank you for that open assessment. All right, let's get into the 2023 predictions. Let's bring up the predictions. Sanjeev, you're going first. You've got a prediction around unified metadata. What's the prediction, please? >> So, my prediction is that the metadata space is currently a mess. It needs to get unified. There are too many use cases of metadata, which are being addressed by disparate systems. For example, data quality has become really big in the last couple of years, data observability, the whole catalog space is actually, people don't like to use the word data catalog anymore, because data catalog sounds like it's a catalog, a museum, if you will, of metadata that you go and admire. So, what I'm saying is that in 2023, we will see that metadata will become the driving force behind things like DataOps, things like orchestration of tasks using metadata, not rules. Not saying that if this fails, then do this; if this succeeds, go do that. But it's like getting to the metadata level, and then making a decision as to what to orchestrate, what to automate, how to do data quality checks, data observability. So, this space is starting to gel, and I see there'll be more maturation in the metadata space. Even security and privacy, some of these topics which are handled separately. And I'm just talking about data security and data privacy. I'm not talking about infrastructure security. These also need to merge into a unified metadata management piece with some knowledge graph, semantic layer on top, so you can do analytics on it. So, it's no longer something that sits on the side, limited in its scope. It is actually the very engine, the very glue that is going to connect data producers and consumers. >> Great.
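To ground Sanjeev's idea of orchestrating from metadata rather than hard-wired rules, here is a small, hypothetical Python sketch. The field names, thresholds, and task names are invented for illustration; the point is only that downstream actions are derived from a dataset's metadata.

```python
# Hypothetical sketch of metadata-driven orchestration: downstream tasks are
# derived from a dataset's metadata instead of hard-wired "if job X, then job Y"
# rules. Field names, thresholds, and task names are invented.
from datetime import datetime, timedelta

dataset_metadata = {
    "name": "orders_curated",
    "last_refreshed": datetime(2023, 1, 30, 6, 0),
    "quality_score": 0.92,   # e.g., fed in by a data quality/observability tool
    "sensitivity": "pii",    # drives masking and access policy
}

def plan_tasks(meta: dict, now: datetime) -> list:
    """Decide what to orchestrate by inspecting metadata."""
    tasks = []
    if now - meta["last_refreshed"] > timedelta(hours=24):
        tasks.append("trigger_refresh")          # stale data: re-run the pipeline
    if meta["quality_score"] < 0.95:
        tasks.append("run_data_quality_checks")  # low score: deeper validation
    if meta["sensitivity"] == "pii":
        tasks.append("apply_masking_policy")     # governance driven by metadata
    return tasks

print(plan_tasks(dataset_metadata, now=datetime(2023, 1, 31, 12, 0)))
# ['trigger_refresh', 'run_data_quality_checks', 'apply_masking_policy']
```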
Thank you for that. Doug, Doug Henschen, any thoughts on what Sanjeev just said? Do you agree? Do you disagree? >> Well, I agree with many aspects of what he says. I think there's a huge opportunity for consolidation and streamlining of these aspects of governance. Last year, Sanjeev, you said something like, we'll see more people using catalogs than BI. And I have to disagree. I don't think this is a category that's headed for mainstream adoption. It's a behind-the-scenes activity for the wonky few, or better yet, companies want machine learning and automation to take care of these messy details. We've seen these waves of management technologies, some of the latest being data observability and customer data platforms, but they failed to sweep away all the earlier investments in data quality and master data management. So, yes, I hope the latest tech offers glimmers that there's going to be a better, cleaner way of addressing these things. But to my mind, the business leaders, including the CIO, only want to spend as much time and effort and money and resources on these sorts of things as needed to avoid getting breached, ending up in headlines, getting fired, or going to jail. So, vendors, bring on the ML and AI smarts and the automation of these sorts of activities. >> So, if I may say something, the reason why we have this dichotomy between data catalog and the BI vendors is because data catalogs are, very soon, not going to be standalone products, in my opinion. They're going to get embedded. So, when you use a BI tool, you'll actually use the catalog to find out what it is that you want to do, whether you are looking for data or you're looking for an existing dashboard. So, the catalog becomes embedded into the BI tool. >> Hey, Dave Menninger, sometimes you have some data in your back pocket. Do you have any stats (chuckles) on this topic? >> No, I'm glad you asked, because I'm going to... Now, data catalogs are something that's interesting. Sanjeev made a statement that data catalogs are falling out of favor. I don't care what you call them. They're valuable to organizations. Our research shows that organizations that have adequate data catalog technologies are three times more likely to express satisfaction with their analytics, for just the reasons that Sanjeev was talking about. You can find what you want, you know you're getting the right information, you know whether or not it's trusted. So, those are good things. So, we expect to see the capabilities, whether they're embedded or separate. We expect to see those capabilities continue to permeate the market. >> And a lot of those catalogs are driven now by machine learning and things. So, they're learning from those patterns of usage by people when people use the data. (airy laughs) >> All right. Okay. Thank you, guys. All right. Let's move on to the next one. Tony Baer, let's bring up the predictions. You got something in here about the modern data stack. We need to rethink it. Is the modern data stack getting long in the tooth? Is it not so modern anymore? >> I think, in a way, it's gotten almost too modern. It's gotten too, I don't know if it's long in the tooth, but it is getting long. The modern data stack, it's traditionally been defined as basically you have the data platform, which would be the operational database and the data warehouse.
And in between, you have all the tools that are necessary to essentially get that data from the operational realm, or the streaming realm for that matter, into basically the data warehouse, or as we might be seeing more and more, the data lakehouse. And I think what's important here, where we have seen a lot of progress, and this would be in the cloud, is with the SaaS services. And especially you see that in the modern data stack, which is like all these players, not just the MongoDBs or the Oracles or the Amazons, have their database platforms. You see the Informaticas and all the other players there, the Fivetrans, have their own SaaS services. And within those SaaS services, you get a certain degree of simplicity, which is it takes all the housekeeping off the shoulders of the customers. That's a good thing. The problem is that what we're getting to, unfortunately, is what I would call lots of islands of simplicity, which means that it leaves it (Dave laughing) to the customer to have to integrate or put all that stuff together. It's a complex tool chain. And so, what we really need to think about here, we have too many pieces. And going back to the discussion of catalogs, it's like we have so many catalogs out there, which one do we use? 'Cause chances are most organizations do not rely on a single catalog at this point. What I'm calling on all the data providers, or all the SaaS service providers, to do is to literally get it together and essentially make this modern data stack less of a stack, make it more of a blending of an end-to-end solution. And that can come in a number of different ways. Part of it is that data platform providers have been adding services that are adjacent. And there are some very good examples of this. We've seen progress over the past year or so. For instance, MongoDB integrating search. It's a very common, I guess, sort of tool that basically the applications that are developed on MongoDB use, so MongoDB then built it into the database rather than requiring an extra Elasticsearch or OpenSearch stack. Amazon just... AWS just did the zero-ETL, which is a first step towards simplifying the process of going from Aurora to Redshift. You've seen the same thing with Google, BigQuery integrating basically streaming pipelines. And you're seeing also a lot of movement in database machine learning. So, there are some good moves in this direction. I expect to see more of this this year. Part of it is basically the SaaS platforms adding some functionality. But I also see, more importantly, because you're never going to get... This is like asking your data team and your developers, herding cats, to standardize on the same tool. In most organizations, that is not going to happen. So, take a look at the most popular combinations of tools and start to come up with some pre-built integrations and pre-built orchestrations, and offer some promotional pricing, maybe not quite a two-for-one, but in other words, get two products or two services for the price of one and a half. I see a lot of potential for this. And to me, if the goal is to simplify things, this is the next logical step, and I expect to see more of this here. >> Yeah, and you see it in Oracle, MySQL HeatWave, yet another example of eliminating that ETL. Carl Olofson, today, if you think about the data stack and the application stack, they're largely separate. Do you have any thoughts on how that's going to play out? Does that play into this prediction? What do you think?
>> Well, I think, that the... I really like Tony's phrase, islands of simplification. It really says (Tony chuckles) what's going on here, which is that all these different vendors you ask about, about how these stacks work. All these different vendors have their own stack vision. And you can... One application group is going to use one, and another application group is going to use another. And some people will say, let's go to, like you go to a Informatica conference and they say, we should be the center of your universe, but you can't connect everything in your universe to Informatica, so you need to use other things. So, the challenge is how do we make those things work together? As Tony has said, and I totally agree, we're never going to get to the point where people standardize on one organizing system. So, the alternative is to have metadata that can be shared amongst those systems and protocols that allow those systems to coordinate their operations. This is standard stuff. It's not easy. But the motive for the vendors is that they can become more active critical players in the enterprise. And of course, the motive for the customer is that things will run better and more completely. So, I've been looking at this in terms of two kinds of metadata. One is the meaning metadata, which says what data can be put together. The other is the operational metadata, which says basically where did it come from? Who created it? What's its current state? What's the security level? Et cetera, et cetera, et cetera. The good news is the operational stuff can actually be done automatically, whereas the meaning stuff requires some human intervention. And as we've already heard from, was it Doug, I think, people are disinclined to put a lot of definition into meaning metadata. So, that may be the harder one, but coordination is key. This problem has been with us forever, but with the addition of new data sources, with streaming data with data in different formats, the whole thing has, it's been like what a customer of mine used to say, "I understand your product can make my system run faster, but right now I just feel I'm putting my problems on roller skates. (chuckles) I don't need that to accelerate what's already not working." >> Excellent. Okay, Carl, let's stay with you. I remember in the early days of the big data movement, Hadoop movement, NoSQL was the big thing. And I remember Amr Awadallah said to us in theCUBE that SQL is the killer app for big data. So, your prediction here, if we bring that up is SQL is back. Please elaborate. >> Yeah. So, of course, some people would say, well, it never left. Actually, that's probably closer to true, but in the perception of the marketplace, there's been all this noise about alternative ways of storing, retrieving data, whether it's in key value stores or document databases and so forth. We're getting a lot of messaging that for a while had persuaded people that, oh, we're not going to do analytics in SQL anymore. We're going to use Spark for everything, except that only a handful of people know how to use Spark. Oh, well, that's a problem. Well, how about, and for ordinary conventional business analytics, Spark is like an over-engineered solution to the problem. SQL works just great. What's happened in the past couple years, and what's going to continue to happen is that SQL is insinuating itself into everything we're seeing. We're seeing all the major data lake providers offering SQL support, whether it's Databricks or... 
And of course, Snowflake is loving this, because that is what they do, and their success certainly points to the success of SQL, even MongoDB. And we were all, I think, at the MongoDB conference where on one day, we hear SQL is dead. They're not teaching SQL in schools anymore, and this kind of thing. And then, a couple days later at the same conference, they announced we're adding a new analytic capability based on SQL. But didn't you just say SQL is dead? So, the reality is that SQL is better understood than most other methods, certainly of retrieving and finding data in a data collection, no matter whether it happens to be relational or non-relational. And even in systems that are very non-relational, such as graph and document databases, their query languages are being built or extended to resemble SQL, because SQL is something people understand. >> Now, you remember when we were in high school and you had to take the... You're debating in class and you were forced to take one side and defend it. So, I was at a Vertica conference one time up on stage with Curt Monash, and I had to take the NoSQL, the world-is-changing, paradigm-shift side. And so just to be controversial, I said to him, Curt Monash, I said, who really needs ACID compliance anyway? Tony Baer. And so, (chuckles) of course, his head exploded, but what are your thoughts (guests laughing) on all this? >> Well, my first thought is congratulations, Dave, for surviving being up on stage with Curt Monash. >> Amen. (group laughing) >> I definitely would concur with Carl. We actually are definitely seeing a SQL renaissance and if there's any proof of the pudding here, I see lakehouse as being the icing on the cake. As Doug had predicted last year, now, (clears throat) for the record, I think, Doug was about a year ahead of time in his predictions that this year is really the year that I see (clears throat) the lakehouse ecosystems really firming up. You saw the first shots last year. But anyway, on this, data lakes will not go away. I'm actually on the home stretch of doing a market landscape on the lakehouse. And lakehouse will not replace data lakes in terms of that. There is the need for those data scientists who do know Python, who know Spark, to go in there and basically do their thing without all the restrictions or the constraints of a pre-built, pre-designed table structure. I get that. Same thing for developing models. But on the other hand, there is a huge need. Basically, (clears throat) maybe MongoDB was saying that we're not teaching SQL anymore. Well, maybe we have an oversupply of SQL developers. Well, I'm being facetious there, but there is a huge skills base in SQL. Analytics have been built on SQL. Then came the lakehouse, and why this really helps to fuel a SQL revival is that the core need in the data lake, what brought on the lakehouse, was not so much SQL, it was a need for ACID. And what was the best way to do it? It was through a relational table structure. So, the whole idea of ACID in the lakehouse was not to turn it into a transaction database, but to make the data trusted, secure, and more granularly governed, where you could govern down to column and row level, which you really could not do in a data lake or a file system. So, while lakehouse can be queried in a manner, you can go in there with Python or whatever, it's built on a relational table structure. And so, for that end, for those types of data lakes, it becomes the end state.
You cannot bypass that table structure as I learned the hard way during my research. So, the bottom line I'd say here is that lakehouse is proof that we're starting to see the revenge of the SQL nerds. (Dave chuckles) >> Excellent. Okay, let's bring back up the predictions. Dave Menninger, this one's really thought-provoking and interesting. We're hearing things like data as code, new data applications, machines actually generating plans with no human involvement. And your prediction is the definition of data is expanding. What do you mean by that? >> So, I think, for too long, we've thought about data as the, I would say facts that we collect, the readings off of devices and things like that, but data on its own is really insufficient. Organizations need to manipulate that data and examine derivatives of the data to really understand what's happening in their organization, why has it happened, and to project what might happen in the future. And my comment is that these data derivatives need to be supported and managed just like the data needs to be managed. We can't treat this as entirely separate. Think about all the governance discussions we've had. Think about the metadata discussions we've had. If you separate these things, now you've got more moving parts. We're talking about simplicity and simplifying the stack. So, if these things are treated separately, it creates much more complexity. I also think it creates a little bit of a myopic view on the part of the IT organizations that are acquiring these technologies. They need to think more broadly. So, for instance, metrics. Metric stores are becoming a much more common part of the tooling that's part of a data platform. Similarly, feature stores are gaining traction. So, those are designed to promote the reuse and consistency across the AI and ML initiatives, the elements that are used in developing an AI or ML model. And let me go back to metrics and just clarify what I mean by that. So, any type of formula involving the data points. I'm distinguishing metrics from features that are used in AI and ML models. And the data platforms themselves are increasingly managing the models as an element of data. So, just like figuring out how to calculate a metric. Well, if you're going to have the features associated with an AI and ML model, you probably need to be managing the model that's associated with those features. The other element where I see expansion is around external data. Organizations for decades have been focused on the data that they generate within their own organization. We see more and more of these platforms acquiring and publishing data to external third-party sources, whether they're within some sort of a partner ecosystem or whether it's a commercial distribution of that information. And our research shows that when organizations use external data, they derive even more benefits from the various analyses that they're conducting. And the last great frontier in my opinion on this expanding world of data is the world of driver-based planning. Very few of the major data platform providers provide these capabilities today. These are the types of things you would do in a spreadsheet. And we all know the issues associated with spreadsheets. They're hard to govern, they're error-prone.
And so, if we can take that type of analysis, collecting the occupancy of a rental property, the projected rise in rental rates, the fluctuations perhaps in occupancy, the interest rates associated with financing that property, we can project forward. And that's a very common thing to do. What the income might look like from that property, the expenses, we can plan and purchase things appropriately. So, I think, we need this broader purview and I'm beginning to see some of those things happen. And the evidence today I would say, is more focused around the metric stores and the feature stores starting to see vendors offer those capabilities. And we're starting to see the ML ops elements of managing the AI and ML models find their way closer to the data platforms as well. >> Very interesting. When I hear metrics, I think of KPIs, I think of data apps, orchestrate people and places and things to optimize around a set of KPIs. It sounds like a metadata challenge more... Somebody once predicted we'll have more metadata than data. Carl, what are your thoughts on this prediction? >> Yeah, I think that what Dave is describing as data derivatives is in a way, another word for what I was calling operational metadata, which is not about the data itself, but how it's used, where it came from, what the rules are governing it, and that kind of thing. If you have a rich enough set of those things, then not only can you do a model of how well your vacation property rental may do in terms of income, but also how well your application that's measuring that is doing for you. In other words, how many times have I used it, how much data have I used and what is the relationship between the data that I've used and the benefits that I've derived from using it? Well, we don't have ways of doing that. What's interesting to me is that folks in the content world are way ahead of us here, because they have always tracked their content using these kinds of attributes. Where did it come from? When was it created, when was it modified? Who modified it? And so on and so forth. We need to do more of that with the structured data that we have, so that we can track how it's used. And also, it tells us how well we're doing with it. Is it really benefiting us? Are we being efficient? Are there improvements in processes that we need to consider? Because maybe data gets created and then it isn't used or it gets used, but it gets altered in some way that actually misleads people. (laughs) So, we need the mechanisms to be able to do that. So, I would say that that's... And I'd say that it's true that we need that stuff. I think, that starting to expand is probably the right way to put it. It's going to be expanding for some time. I think, we're still a distance from having all that stuff really working together. >> Maybe we should say it's gestating. (Dave and Carl laughing) >> Sorry, if I may- >> Sanjeev, yeah, I was going to say this... Sanjeev, please comment. This sounds to me like it supports Zhamak Dehghani's principles, but please. >> Absolutely. So, whether we call it data mesh or not, I'm not getting into that conversation, (Dave chuckles) but data products... (audio breaking) (Tony laughing) Everything that I'm hearing, what Dave is saying, Carl, this is the year when data products will start to take off. I'm not saying they'll become mainstream. They may take a couple of years to become so, but this is data products, all this thing about vacation rentals and how is it doing, that data is coming from different sources.
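As a rough, non-authoritative illustration of the driver-based planning Dave Menninger walks through with the rental-property example above, here is a minimal Python sketch. Every figure and field name below is invented for illustration; a real platform would pull these drivers from governed metric definitions rather than hard-coded constants.

    # Minimal driver-based planning sketch: project income for a rental property
    # from a handful of "drivers" (occupancy, rate growth, financing cost).
    # All figures are hypothetical.

    MONTHS = 12
    monthly_rate = 2400.0        # starting rent per month (driver)
    rate_growth = 0.003          # projected monthly rent increase (driver)
    occupancy = 0.92             # expected occupancy (driver)
    loan_balance = 250_000.0     # financed amount (driver)
    annual_interest = 0.065      # interest rate on the financing (driver)
    fixed_costs = 600.0          # taxes, insurance, upkeep per month (driver)

    for month in range(1, MONTHS + 1):
        income = monthly_rate * occupancy
        interest = loan_balance * annual_interest / 12
        net = income - interest - fixed_costs
        print(f"month {month:2d}: projected income {income:8.2f}, net {net:8.2f}")
        monthly_rate *= 1 + rate_growth   # roll the rate driver forward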
I'm packaging it into our data product. And to Carl's point, there's a whole operational metadata associated with it. The idea is for organizations to see things like developer productivity, how many releases am I doing of this? What data products are most popular? I'm actually right now in the process of formulating this concept that just like we had data catalogs, we are very soon going to be requiring data product catalogs. So, I can discover these data products. I'm not just creating data products left, right, and center. I need to know, do they already exist? What is the usage? If no one is using a data product, maybe I want to retire it and save cost. But this is a data product. Now, there's an associated thing that is also getting debated quite a bit called data contracts. And a data contract to me is literally just a formalization of all these aspects of a product. How do you use it? What is the SLA on it, what is the quality that I am prescribing? So, data product, in my opinion, shifts the conversation to the consumers or to the business people. Up to this point, Dave, when you're talking about data, all of data discovery and curation is very data producer-centric. So, I think, we'll see a shift more into the consumer space. >> Yeah. Dave, can I just jump in there just very quickly there, which is that what Sanjeev has been saying there, this is really central to what Zhamak has been talking about. It's basically about, one, data products are about the lifecycle management of data. Metadata is just elemental to that. And essentially, one of the things that she calls for is making data products discoverable. That's exactly what Sanjeev was talking about. >> By the way, did everyone just notice how Sanjeev just snuck in another prediction there? So, we've got- >> Yeah. (group laughing) >> But you- >> Can we also say that he snuck in, I think, the term that we'll remember today, which is metadata museums. >> Yeah, but- >> Yeah. >> And also a comment to, Tony, on your last year's prediction, you're really talking about it's not something that you're going to buy from a vendor. >> No. >> It's very specific >> Mm-hmm. >> to an organization, their own data product. So, touche on that one. Okay, last prediction. Let's bring them up. Doug Henschen, BI analytics is headed to embedding. What does that mean? >> Well, we all know that conventional BI dashboarding reporting is really commoditized from a vendor perspective. It never enjoyed truly mainstream adoption. Always that 25% of employees are really using these things. I'm seeing rising interest in embedding concise analytics at the point of decision or better still, using analytics as triggers for automation and workflows, and not even necessitating human interaction with visualizations, for example, if we have confidence in the analytics. So, leading companies are pushing for next generation applications, part of this low-code, no-code movement we've seen. And they want to build that decision support right into the app. So, the analytic is right there. Leading enterprise apps vendors, Salesforce, SAP, Microsoft, Oracle, they're all building smart apps with the analytics, predictions, even recommendations built into these applications. And I think, the progressive BI analytics vendors are supporting this idea of driving insight to action, not necessarily necessitating humans interacting with it if there's confidence. So, we want prediction, we want embedding, we want automation.
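To make Sanjeev's data contract idea from a moment ago concrete, here is a hedged sketch of what such a contract might look like if expressed in Python; the field names and thresholds are hypothetical, not a standard schema, and real contracts are usually YAML or JSON documents validated before a data product is published to a catalog.

    # Hypothetical data contract for a data product; names and fields are illustrative.
    rental_insights_contract = {
        "product": "vacation-rental-insights",
        "owner": "analytics-platform-team",
        "schema": {
            "property_id": "string",
            "month": "date",
            "occupancy_rate": "float",
            "net_income": "decimal(12,2)",
        },
        "sla": {"freshness_hours": 24, "availability": "99.5%"},
        "quality": {"max_null_rate": 0.01, "min_row_count": 1000},
        "usage": "aggregate reporting only",
    }

    def missing_sections(contract):
        """Return any required top-level sections the contract is missing."""
        required = {"product", "owner", "schema", "sla", "quality", "usage"}
        return sorted(required - contract.keys())

    print(missing_sections(rental_insights_contract))  # [] when the contract is complete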
This low-code, no-code development movement is very important to bringing the analytics to where people are doing their work. We got to move beyond the, what I call swivel chair integration, between where people do their work and going off to separate reports and dashboards, and having to interpret and analyze before you can go back and take action. >> And Dave Menninger, today, if you want analytics or you want to absorb what's happening in the business, you typically got to go ask an expert, and then wait. So, what are your thoughts on Doug's prediction? >> I'm in total agreement with Doug. I'm going to say that collectively... So, how did we get here? I'm going to say collectively as an industry, we made a mistake. We made BI and analytics separate from the operational systems. Now, okay, it wasn't really a mistake. We were limited by the technology available at the time. Decades ago, we had to separate these two systems, so that the analytics didn't impact the operations. You don't want the operations preventing you from being able to do a transaction. But we've gone beyond that now. We can bring these two systems and worlds together and organizations recognize the need to change. As Doug said, the majority of the workforce and the majority of organizations don't have access to analytics. That's wrong. (chuckles) We've got to change that. And one of the ways that's going to change is with embedded analytics. Two-thirds of organizations recognize that embedded analytics are important and it even ranks higher in importance than AI and ML in those organizations. So, it's interesting. This is a really important topic to the organizations that are consuming these technologies. The good news is it works. Organizations that have embraced embedded analytics are more comfortable with self-service than those that have not, as opposed to turning somebody loose in the wild with the data. They're given a guided path to the data. And the research shows that 65% of organizations that have adopted embedded analytics are comfortable with self-service compared with just 40% of organizations that are turning people loose in an ad hoc way with the data. So, totally behind Doug's predictions. >> Can I just break in with something here, a comment on what Dave said about what Doug said, which (laughs) is that I totally agree with what you said about embedded analytics. And at IDC, we made a prediction in our future intelligence, future of intelligence service three years ago that this was going to happen. And the thing that we're waiting for is for developers to build... You have to write the applications to work that way. It just doesn't happen automagically. Developers have to write applications that reference analytic data and apply it while they're running. And that could involve simple things like complex queries against the live data, which is done through something that I've been calling analytic transaction processing. Or it could be through something more sophisticated that involves AI operations as Doug has been suggesting, where the result is enacted pretty much automatically unless the scores are too low and you need to have a human being look at it. So, I think that that is definitely something we've been watching for. I'm not sure how soon it will come, because it seems to take a long time for people to change their thinking. But I think, as Dave was saying, once they do and they apply these principles in their application development, the rewards are great.
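A minimal sketch of the pattern Carl and Doug describe, an application that scores a transaction inline and only falls back to a person when confidence is low, might look like the following; the scoring logic and threshold are stand-ins for a deployed model or an inline analytic query, not anyone's actual implementation.

    # Embedded-analytics sketch: act on a prediction automatically unless the
    # score is too low, in which case route to a human reviewer.
    REVIEW_THRESHOLD = 0.80

    def score_order(order):
        # Stand-in for a model call or an analytic query against live data.
        base = 0.95 if order["amount"] < 5_000 else 0.60
        return base - (0.10 if order["new_customer"] else 0.0)

    def handle_order(order):
        score = score_order(order)
        if score >= REVIEW_THRESHOLD:
            return f"auto-approved (score {score:.2f})"
        return f"queued for human review (score {score:.2f})"

    print(handle_order({"amount": 1_200, "new_customer": False}))  # auto-approved
    print(handle_order({"amount": 9_500, "new_customer": True}))   # human review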
>> Yeah, this is very much, I would say, very consistent with what we were talking about, I was talking about before, about basically rethinking the modern data stack and going into more of an end-to-end solution. I think, that what we're talking about clearly here is operational analytics. There'll still be a need for your data scientists to go offline into their data lakes to do all that very exploratory work and that deep modeling. But clearly, it just makes sense to bring operational analytics into where people work, into their workspace, and further flatten that modern data stack. >> But with all this metadata and all this intelligence, we're talking about injecting AI into applications, it does seem like we're entering a new era of not only data, but a new era of apps. Today, most applications are about filling forms out or codifying processes and require human input. And it seems like there's enough data now and enough intelligence in the system that the system can actually pull data from, whether it's the transaction system, e-commerce, the supply chain, ERP, and actually do something with that data without human involvement, present it to humans. Do you guys see this as a new frontier? >> I think, that's certainly- >> Very much so, but it's going to take a while, as Carl said. You have to design it, you have to get the prediction into the system, you have to get the analytics at the point of decision, and it has to be relevant to that decision point. >> And I also recall basically a lot of the ERP vendors back like 10 years ago were promising that. And the fact that we're still looking at the promises shows just how difficult, how much of a challenge it is to get to what Doug's saying. >> One element that could be applied in this case is event-driven architecture. If applications are developed that are event-driven rather than following the script or sequence that some programmer or designer had preconceived, then you'll have much more flexible applications. You can inject decisions at various points using this technology much more easily. It's a completely different way of writing applications. And it actually involves a lot more data, which is why we should all like it. (laughs) But in the end (Tony laughing) it's more stable, it's easier to manage, easier to maintain, and it's actually more efficient, which is the result of an MIT study from about 10 years ago, and still, we are not seeing this come to fruition in most business applications. >> And do you think it's going to require a new type of data platform or database? Today, data's all far-flung. We see it's all over the clouds and at the edge. Today, you cache- >> We need a super cloud. >> You cache that data, you're throwing it into memory. I mentioned MySQL HeatWave. There are other examples where it's a brute force approach, but maybe we need new ways of laying data out on disk and new database architectures, and just when we thought we had it all figured out. >> Well, without referring to disk, which to my mind, is almost like talking about cave painting. I think, that (Dave laughing) all the things that have been mentioned by all of us today are elements of what I'm talking about. In other words, the whole improvement of the data mesh, the improvement of metadata across the board and improvement of the ability to track data and judge its freshness the way we judge the freshness of a melon or something like that, to determine whether we can still use it. Is it still good? That kind of thing.
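Carl's point about event-driven applications, injecting decisions at various points instead of following a preconceived script, can be pictured with a tiny dispatcher like the sketch below; the event names and handlers are invented purely for illustration.

    # Event-driven sketch: handlers register per event type, so new decision
    # points can be added without rewriting a scripted sequence.
    from collections import defaultdict

    handlers = defaultdict(list)

    def on(event_type):
        def register(fn):
            handlers[event_type].append(fn)
            return fn
        return register

    @on("order_placed")
    def reserve_inventory(event):
        print("reserving inventory for", event["order_id"])

    @on("order_placed")
    def score_for_fraud(event):
        print("scoring order", event["order_id"], "for fraud risk")

    def publish(event_type, event):
        for handler in handlers[event_type]:
            handler(event)

    publish("order_placed", {"order_id": "A-1001"})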
Bringing together data from multiple sources dynamically and in real time requires all the things we've been talking about. All the predictions that we've talked about today add up to elements that can make this happen. >> Well, guys, it's always tremendous to get these wonderful minds together and get your insights, and I love how it shapes the outcome here of the predictions, and let's see how we did. We're going to leave it there. I want to thank Sanjeev, Tony, Carl, David, and Doug. Really appreciate the collaboration and thought that you guys put into these sessions. Really, thank you. >> Thank you. >> Thanks, Dave. >> Thank you for having us. >> Thanks. >> Thank you. >> All right, this is Dave Vellante for theCUBE, signing off for now. Follow these guys on social media. Look for coverage on siliconangle.com, theCUBE.net. Thank you for watching. (upbeat music)

Published Date : Jan 11 2023



Dilip Kumar, AWS Applications | AWS re:Invent 2022


 

(lively music) >> Good afternoon and welcome back to beautiful Las Vegas, Nevada, where we're here live from the show floor, all four days of AWS re:Invent. I'm Savannah Peterson, joined with my co-host Dave Vellante. Dave, how you doing? >> Good. Beautiful and chilly Las Vegas. Can't wait to get back to New England where it's warm. >> Balmy, New England this time of year in December. Wow, Dave, that's a bold statement. I am super excited about the conversation that we're going to be having next. And, you know, I'm not even going to tee it up. I just want to bring Dilip on. Dilip, thank you so much for being here. How you doing? >> Savannah, Dave, thank you so much. >> Hey, Dilip. >> Excited to be here. >> It's a joy to have you. So, you have been working at Amazon for about 20 years. >> Almost. Almost. >> Yes. >> Feels like 20, 19 1/2. >> Which is very exciting. You've had a lot of roles. I'm going to touch on some of them, but you just came over to AWS from the physical retail side. Talk to me about that. >> Yup, so I've been at Amazon for 19 1/2 years. Done pricing, supply chain. I was Jeff Bezos' technical advisor for a couple years. >> Casual name drop. >> Casual name drop. >> Savannah: But a couple people here have heard that name before. >> Humble brag, hashtag. And then I, for the last several years, I was leading our physical retail initiatives. Just Walk Out, Amazon One, bringing convenience to physical spaces. And then in August, as those things were getting a lot of traction and we were selling to third parties, we felt that it would be better suited in AWS. But along with that, there was also another trend that's been brewing, which is, you know, companies have loved building on AWS. They love the infrastructure services, but increasingly, they're also asking us to build applications that are higher up in the stack, solving key, turnkey business problems. Just Walk Out, Amazon One are examples of that, Amazon Connect. We just recently announced supply chain, so now there's a bevy of interesting services all coming together, higher up in the stack for customers. So it's an exciting time. >> It was interesting that you're able to, you know, transfer from that retail. I mean, normally, historically, if you're within an industry, retail, manufacturing, automotive, whatever, you were kind of locked in a little bit. >> Dilip: Siloed a little bit. Yeah, yeah, yeah. >> Because they had their own, your own value chain. And I guess, data has changed that maybe, that you can traverse now. >> Yeah, if you think about the things that we did, even when we were in retail, the tenet was less about the industries and more about how can we bring convenience to physical spaces? The fact that you don't like to wait in line is no more likely, you know, five years from now than it is today. So, it's a very durable tenet, but it's equally applicable whether you're in a grocery store, a convenience store, a stadium, an airport. So it actually transcends any industry, and like supply chain, think of supply chain. Supply chain isn't, you know, targeted to any one particular industry. It has broad applicability. So these things are very, you know, horizontally applicable. >> Anything that makes my life easier, I'm down. >> Savannah: We're all here for the easy button. We've been talking about it a bit this week. I'm in. And the retail store, I mean, I'm in San Francisco. I've had the experience of going through. Very interesting and seamless journey, honestly. It's very exciting.
So tell us a little bit more about the applications group at AWS. >> Yup. So as I said, you know, we are, the applications group is a combination of several services. You know, we have communication developer services, which is the ability to add Simple Email Service, or embed video and voice chat using the Chime SDK. Higher up in the stack, we are taking care of things that IT administrators have to deal with, where you can provision an entire desktop with WorkSpaces or provide ephemeral access to it. And then as you go up even higher up in the stack, you have productivity applications like AWS Wickr, which we just did GA on, you know, last week, and AWS Clean Rooms, which we announced as a service in preview. And then you have, you know, Connect, which is our cloud contact center, AWS Supply Chain, Just Walk Out, Amazon One, it just feels like we're getting started. >> Just a couple things going on. >> So, clean rooms. Part of the governance play, part of data sharing. Can you explain, you know, we were talking offline, but I remember back in the disk drive days. We were in a clean room, they'd show you the clean room, you couldn't go near it unless you had a hazmat suit on. So now you're applying that to data. Explain that concept. >> Yeah, so companies across, you know, financial services or healthcare, advertising, they all want to be able to combine and pull together datasets with their partners in order to get these collaborative insights. The problem is either the data's fragmented, it's siloed or you have, you know, data governance issues that's preventing them from sharing. And the key requirement is that they want to be able to share this data without exposing any of the underlying data. Clean rooms have emerged as a solution to that, but the problem with that is that they're hard to maintain. They're expensive. You have to write complex privacy queries. And if you make a mistake, you risk exposing the same data that you've been, you know, studiously trying to protect. >> Trying to protect. >> You know, take advertising as an industry, as an example. You know, advertisers care about, is my ad effective? But it turns out that if you're an advertiser, and let's say you're a Nike or some other advertiser, and you, you know, place an ad on a website. Well, you want to stop showing the ad to people who have already purchased the product. However, people who purchased the product,- >> Savannah: It happens all the time. >> that purchasing data is not accessible to them easily. But if you could combine those insights, you know, the publishers benefit, the advertisers benefit. So AWS Clean Rooms is that service that allows you very easily to be able to collaborate with a group of folks and then be able to gain these collaborative insights. >> And the consumers benefit. I mean, how many times, you bought it, you search it. >> It happens all the time. >> They know. And like, I just bought that, guys, you know? >> Yeah, no, exactly. >> Four weeks. >> And I'm like, you don't need to serve me that, you know? And we understand the marketing backend. And it's just a waste of money and energy and resources. I mean, we're talking about sustainability as well. I don't think supply chain has ever had a hotter moment than it's had the last two and a half years. Tell me more about the announcement. >> Yup, so super excited about this. As you know, as you said, supply chains have always been very critical and very core for companies. The pandemic exacerbated it.
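To picture the clean-room idea Dilip describes above, here is a small, hedged sketch using plain SQLite from Python: two parties' tables are combined only through an aggregate query, so neither side ever sees the other's row-level records. This is a conceptual illustration only, not the AWS Clean Rooms API, and all table and column names are invented.

    # Conceptual clean-room illustration: only the aggregate leaves the "room".
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE ad_impressions (user_id TEXT, campaign TEXT);  -- publisher's data
        CREATE TABLE purchases      (user_id TEXT, product  TEXT);  -- advertiser's data
        INSERT INTO ad_impressions VALUES ('u1','spring'), ('u2','spring'), ('u3','spring');
        INSERT INTO purchases      VALUES ('u1','shoes'),  ('u4','shoes');
    """)

    # How many exposed users already bought, so the advertiser can stop targeting them.
    row = con.execute("""
        SELECT COUNT(DISTINCT i.user_id)
        FROM ad_impressions i JOIN purchases p ON p.user_id = i.user_id
        WHERE i.campaign = 'spring'
    """).fetchone()
    print("exposed users who already purchased:", row[0])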
So, our way of sort of thinking about supply chains is to say that, you know, companies have made, over the years, many, like dozens, like millions and millions of dollars of investment in building their own supply chains. But the problem with supply chains is that the reason that they're not as functional as they could be is because of the lack of visibility. Because they're strung together from very many disparate systems, that lack of visibility affects agility. And so, our approach was to say that, well, if we could have folks use their existing supply chain, what can we do to improve the ROI on the investment of what they're getting? By creating a layer on top of it that provides them that insight, connects all of this disparate data, and then provides them insights to say, well, you know, here's where you overstock, here's where you understock. You know, this is the, you know, the carbon emission impact of being able to transfer something. So rather, without requiring people to re-platform, what's the way that we can add value? And then also build upon Amazon's, you know, years of supply chain experience, to be able to build these predictive analytics for customers. >> So, that's a good, I like that you started with the why. >> Yes. >> Right now, what is it? It's an abstraction layer and then you're connecting into different data points. >> Yes, that's correct. >> Injecting ML. >> We can plug in, like if you think about supply chain, you can have warehouse management systems, order management systems. They could be disparate things. We use ML to be able to bring all of this disparate data in and create our unified data lake. Once you have that unified data lake, you can then run an insights layer on top of it to be able to say, so that as the data changes, supply chain is not a static thing. Data's constantly changing. As the data's changing, the data lake now reflects the most up-to-date information. You can have alerts and insights set up on it to say that, what are the kinds of things that you're interested in? And then more importantly, supply chain and agility is about communication. In order to be able to make certain things happen, you need to be able to communicate, you need to make sure that everyone's on the same page. And we allow for a lot of the communication and collaboration tools to be built within this platform so that you're not necessarily leaving to go and toggle from one place to the other to solve your problems. >> And in the pie chart of how people spend their time, they're spending a lot less time communicating and being proactive. >> That's correct. >> And getting ahead of the curve. They're spending more time trying to figure out actually what's going on. >> Yes. >> And that's the problem that you're going to solve. >> Well, and it ensures that the customer at the other end of that supply chain experience is going to have their expectations managed in terms of when their good might get there or whatever's going to happen. >> Exactly. >> I feel like that expectation management has been such a big part of it. Okay, I just have to ask because I'm very curious. What was it like advising Jeff? >> Quite possibly the best job that I've ever had. You know, he's a fascinating individual. >> Did he pay you to say that? >> Nope. But I would've, like... It's remarkable seeing how he thinks and his approach to problem solving. It is, you know, you could be really tactical and go very deep.
You could be extremely strategic. And to be able to sort of move effortlessly between those two is a unique skill. I learned a lot. >> Yeah, absolutely. So what made you want to evolve your career at Amazon after that? 'Cause I see on your LinkedIn, you say, it was the best job you ever had. With curiosity? >> Yeah, so one of the things, so the role is designed for you to be able to transition to something new. >> Savannah: Oh, cool. >> So after I finished that role, we were just getting into our foray with physical stores. And the idea behind physical stores is that, you and I as consumers, we all have a lot of choices for physical stores. You know, there's a lot of options, there's a lot of formats. And so the last thing we wanted to do is come up with another me-too offering. So, our approach was that what can we do to improve convenience in physical stores? That's what resulted in Just Walk Out and Amazon Go. That's what resulted in Amazon One, which is another fast, convenient, contactless way to pay using the power of your palm. And now, what started in Amazon retail has now expanded to several third parties in, you know, stadiums, convention centers, airports. >> Airport, I just, I was in the Houston airport and got to do a humanless checkout. >> Dilip: Exactly. >> And actually in Honolulu a couple weeks ago as well too. Yeah, so we're going to see more and more of this. >> Yes. >> So what Amazon, I think has over a million employees. A lot of those are warehouse employees. But what advice would you give to somebody who's somewhere inside of Amazon, maybe they're on AWS, maybe they're Amazon. What advice would you give somebody inside that's maybe, you know, hey, I've been at this job for five, six years, three, four years, whatever it is. I want to do something else. And there's so much opportunity inside Amazon, right? What would you advise them? >> My single piece of advice, which is actually transferable and I use it for myself, is choose something that makes you a little uncomfortable. >> Dave: Get out of your comfort zone. >> It's like, you got to do that. It's like, it's not the easiest thing to hear, but it's also the most satisfying. Because almost every single time that I've done it for myself, it's resulted in like, you don't really know what the answer is. You don't really know exactly where you're going to end up, but the process and the journey through it, if you experience a little bit of discomfort constantly, it makes you non-complacent. It makes you sort of not take the job, sort of in a stride. You have to be on it to do it. So that's the advice that I would give anyone. >> Yeah, that's good. So something that's maybe adjacent and maybe not completely foreign to you, but also something that, you know, you got to go dig a little bit and learn. >> You're planning a career change over here, Dave? >> No, I know a lot of people in Amazon are like, hey, I'm trying to figure out what I want to do next. I mean, I love it here. I live by the LPs, you know, but, and there's so much to choose from. >> It is, you know, when I joined in 2003, there were so many things that we're doing today, none of those existed. It's a fascinating company. And the evolution, you could be in 20 different places and the breadth of the kinds of things that, you know, the Amazon experience provides is timeless. It's fascinating. >> And, you know, you look at a company like Amazon, and, you know, it's so amazing. You look at this ecosystem. I've been around- >> Even the show floor.
>> I've been around a long time. And the show floor says it all. But I've seen a lot of, you know, waves. And each subsequent wave, you know, we always talk about how many companies were in the Fortune 1000 and aren't anymore. And, but the leaders, you know, survive and they thrive. And I think it's fascinating to try to better understand the culture that enables that. You know, you look at a company like Microsoft that was irrelevant and then came back. You know, even IBM was on death's door for a while and they came back. And so, but Amazon just feels, you know, at the moment you feel like, "Oh wow, nothing can stop this machine." 'Cause everybody's trying to disrupt Amazon and then, you know, only the paranoid survive, all that stuff. But it's not like, past is not prologue, all right? So that's why I asked these questions. And you just said that for a lot of the services today, the ideas didn't even exist. I mean, Walk Out. I mean, that's just amazing. >> I think one of the things that Amazon does really well culturally is that they create the single-threaded leadership. They give people focus. If you have to get something done, you have to give people focus. You can't distract them with like seven different things and then say that, oh, by the way, your eighth job is to innovate. It just doesn't work that way. It's like it's hard. Like it can be- >> And where does the energy come from for that? >> Exactly. And so giving people that single-threaded focus is super important. >> Frank Slootman, the CEO of Snowflake, has a great quote. He wrote it in his book. He said, "If you got 14 priorities, you got none." And he asks,- >> Well said. >> he challenges people. If you had to give up everything and do only one thing for the next 365 days, what would that be? It's a really hard question to answer. >> I feel like we're around New Year's resolution time. I mean, when we're thinking about that, maybe we can all share our one thing. So, Dilip, you've been with the applications team for five months. What's coming up next? >> Well, as I said, you know, it feels like it's still day one for applications. If you think about the things, the news that we introduced and the several services that we introduced, it has applicability across a variety of horizontal industries. But then we're also feeling that there are considerable vertical applications that can be built for specific things. Like, it could be in advertising, it could be in financial services, it could be in manufacturing. The opportunities are endless. I think the notion of people wanting applications higher up in the stack and a little more turnkey solutions is also, it's not new for us, but it's also newer than what, you know, AWS has traditionally been doing. >> So again, this relates to what we were sort of talking about before. And maybe, this came from Jassy or maybe it came from Bezos. But you hear a lot, it's okay to be misunderstood, or we were misunderstood for a long time. So when people hear up the stack, they think, when you think about apps, you know, in the last 10 years it was taking on-prem and bringing it into the cloud. Okay, you saw that with email, CRM, service management, you know, data warehouses, et cetera. Amazon is thinking about this in a different way. It's like you're looking at the world saying, okay, how can we improve whatever? Workflows, people's lives, doing something that's not been done before?
And that seems to be the kind of applications that you guys are thinking about building. >> Yeah. >> And that's unique. It's not just, okay, we're going to take something on-prem, put it in the cloud. Been there, done that. That S-curve is sort of flattening now. But there's a new S-curve which is completely new workflows and innovations and processes that we really haven't thought about yet. Or you're thinking about, I presume. >> Yeah. Having said that, I'd also like to sort of remind folks that when you consider the, you know, the entire spend, the portion of workloads that are running in the cloud is a teeny tiny fraction. It's like less than 5%, like 4% or something like that. So it's a very, there's still plenty of things that can sort of move to the cloud. But you're right that there is another trend of where in the stack and the types of applications that you can provide as well. >> Yeah, new innovations that haven't been well thought of yet. >> So, Dilip, we have a new tradition here on theCUBE at re:Invent, where we're looking for your 30-minute Instagram reel, your hot take, biggest key theme, either for you, your team, or just general vibe from the show. >> General vibe from the show. Well, 19 1/2 years at Amazon, this is actually my first re:Invent, believe it or not. This is my first as an AWS employee now, at re:Invent, like launching services. So that's the first. I've been to re:Invent before, but as an attendee rather than as a person who's, you know, a contributing member of the workforce. >> Working actually? >> If you will. >> Actually doing your job. >> And so I'm just amazed at the energy and the breadth. And the, you know, from the partners to the customers to the diversity of people who are coming here from everywhere. I had meetings with people from New Zealand. Like, you know, the UK, like customers are coming at us from very many different places. And it's fascinating for me to see. It's new for me as well given, you know, some of my past experience. But this is a, it's been a blast. >> People are pumped. >> People are pumped. >> They can't believe the booth traffic. Not only that, the quality. >> Right. All of our guests have talked about that. >> Like, yeah, you know, we're going to throw half of these leads away, but they're saying no, I'm having like really substantive conversations with business people. This is, I think, my 10th re:Invent. And the first one was mostly developers. And I'm like, what are you talking about? And, you know, so. Now it's a lot more business people, a lot of developers too. >> Yeah. >> It's just. >> The community really makes it. Dilip, thank you so much for joining us today on theCUBE. >> Thank you for having me. >> You're fantastic. I could ask you a million questions. Be sure and tell Jeff that we said hi. >> Will do. >> Savannah: Next time you guys are hanging out. And thank all of you. >> You want to go into space? >> Yeah. Yes, yes, absolutely. I'm perhaps the most space-obsessed on the show. And with that, we will continue our out of this world coverage shortly from fabulous Las Vegas where we are at AWS re:Invent. It is day four with Dave Vellante. I'm Savannah Peterson and you're watching theCUBE, the leader in high tech coverage. (lively music)

Published Date : Dec 1 2022



Saad Malik & Tenry Fu, Spectro Cloud | KubeCon + CloudNativeCon NA 2022


 

>>Hey everybody. Welcome back. Good afternoon. Lisa Martin here with John Furrier live in Detroit, Michigan. We are at KubeCon + CloudNativeCon 2022 North America. John, this is nearing the end of our second day of coverage and one of the things that has been breaking all day on this show is news. News. We have more news to >>Break next. Yeah, this next segment is a company we've been following. They got some news we're gonna get into. Managing the Kubernetes life cycle has been a huge challenge when you've got large organizations, whether you're spinning up or scaling. Scale is the big story. Kubernetes is the center of the conversation. This next segment's gonna be great. It >>Is. We've got two guests from Spectro Cloud here. Please welcome its co-founder and CEO Tenry Fu and its co-founder and CTO Saad Malik. Guys, great to have you on the program. Thank >>You for having us. My pleasure. >> So Tenry, what's going on? What's the big news? >> Yeah, so we just announced our Palette 3.0 this morning. So we added a bunch of new functionality. So first of all, we have nested clusters, so enterprises can easily provide Kubernetes as a service even on top of their existing clusters. And secondly, we also support seamless migration for their existing clusters. We enable them to migrate their clusters into our CNCF upstream Kubernetes distro, called Palette eXtended Kubernetes, PXK, without any downtime. And lastly, we also added a lot of focus on developer experience. Those additional capabilities enable developers to easily onboard and deploy their applications, and do testing and troubleshooting, without having to have a steep Kubernetes learning curve. >> So big breaking news this morning, Palette 3.0. So you got the, you got the product. This is a big theme here. Developer productivity, ease of use is the top story here, as developers are gonna increase their code velocity 'cause they're under a lot of pressure. This infrastructure's getting smarter. This is a big part of managing it. So the toil is now moving to the ops teams serving the dev teams. Security, you gotta enable faster deployment of apps and code. This is what you guys solve. Am I getting this right? Take us through that specific value proposition. What are the key things in this news release? Yeah, >>You're exactly right. Right. So we basically provide our solution to the platform engineering team so that they can use our platform to enable a Kubernetes service to serve their developers and their application teams. And then in the meantime, the developers will be able to easily use Kubernetes without having to learn a lot of the Kubernetes-specific things. So maybe you can get in some >>Detail. Yeah. And absolutely the detail about it is there's a big separation between what the operations team does and the development teams that are using the actual capabilities. The development teams don't necessarily need to know the internals of Kubernetes. There's so much complexity when it comes into it. How do I do things like deployments, pod manifests? It's just too much. So what our platform does, it makes it really simple for them to say, I have a containerized application, I wanna be able to model it. It's a really simple profile, and from there, being able to say, I have a database service I wanna attach to it, I have a specific service, go run it behind the scenes. Does it run inside of a nested cluster, which we'll talk about in a little bit? Does it run in a host cluster?
Those happen transparently for >>The developer. You know what I love about this? What you guys are doing in the news, it really points out what I love about DevOps. Because cloud, let's face it, the cloud early adopters were all the hardcore cloud folks, and as it goes mainstream with Kubernetes, you start to see words like platform engineering. I mean I love that term. That means as a platform, it's been around for a while for people who are building their own stuff, and it means it's gonna scale and enable people to enable value, build on top of it, move faster. Platform engineering is now becoming standard in enterprises. It wasn't like that before. What's your reaction to that? How do you see that evolving faster? Or do you believe that, or what's your take on >>It? Yeah, so I think it started from the DevOps team, right? Every application team, they all try to deploy and manage their application on their own existing infrastructure. But very soon, all these application teams, they start to realize they have to repeatedly do the same thing. So they will need to have a platform engineering team to basically bring some common practice to >>That. >>And some people call them SREs, and that's really platform >>Engineering. It is, it is. I mean, you think about, like, an SRE's ability to deploy your applications at scale, and monitoring and observability. I think what platform engineering does is codify all those best practices. Everything about how you monitor the actual applications, how you do CI/CD, your backups. Instead of having every single individual development team figuring out how to do it themselves, platform engineering is saying, why don't we actually build policy that we can provide as a service to different development teams so that they can operate their own applications at scale. >> So launching Palette 3.0 today, you also had a launch in September, so just a few weeks ago. Talk about what these two announcements mean from Spectro Cloud's perspective in terms of proof points, what you're delivering to the end users and the value that they're getting from that. >> Yeah, so our goal is really to help enterprises deploy and run Kubernetes anywhere, right? Whether it's in cloud, data center or even at Edge locations. So in September we also announced our Edge capabilities, which enable very easy deployment of Edge Kubernetes, right at any location, like retail stores, restaurants, so on and so forth. So as you know, at an Edge location, there's no cloud endpoint there. It's not easy to directly deploy and manage Kubernetes. And also, an Edge location is not as secure as a cloud or data center environment. So how to make the end-to-end system more secure, right? That it's tamper-proof, that is also very, very important.
They essentially have access to add any operations, any capabilities, without having to provision a cluster on their own infrastructure, whether it's Amazon, Google, or on-prem. >> So, and you get the dev engine too, right? That's self-service provisioning of environments, is that right? >> Yeah, so the dev engine itself is the set of capabilities that we offer to developers so that they can build these application profiles. The application profiles, again, define aspects of the application: it's gonna be a container, it's gonna be a database service, it's gonna be a Helm chart. They define that entire structure inside of it. From there they can choose to say, I wanna deploy this. The target environment — whether it ends up being an actual host cluster or a nested cluster — is irrelevant to them. For them it's completely transparent. >> So, transparency, enabling developer velocity. What's been some of the feedback so far? >> Oh, all the developers love that. And the same for all the ops teams. >> If it's easy and it goes faster and reduces the steps, it's a win-win. >> Yeah. Ops teams need consistency, governance, and visibility, but in the meantime developers need flexibility without a steep learning curve. So this really delivers both. >> So I hear a lot of people say, I've got a lot of sprawl, cluster sprawl, it's getting out of hand — let's solve that. How do you guys solve that problem? >> Yeah, so the nested cluster is a perfect answer for that. Before nested clusters, to serve developers a lot of enterprises had to either create a very large dedicated cluster and then isolate it by namespace, which is not ideal for a lot of situations, because a namespace is not hard isolation and a lot of global resources like CRDs and operators don't work per-namespace. The other way is to give each developer a separate dedicated cluster, but that very quickly becomes too costly, 'cause not every developer is working twenty-four seven, and half the time your cluster is sitting there idle — and that costs a lot of money. With nested clusters, you're able to do all of this inside your host cluster and bring the... >> Efficiency there. That is huge. Yeah, saves a lot of time, reduces the steps it takes. So take a minute — my last question to you — to explain what's in it for the developer. If they work with Spectro Cloud, what is your value? What's the pitch? Not the sales pitch, but the value pitch that you give them? >> Yeah, yeah. The value for us is, again: the number of different services and teams people are using today is so large — there are so many different languages, so many different libraries, so many different capabilities. It's too hard for developers to have to understand not only the internal development tools, but also Kubernetes and the container technologies. There's too much of it. Our value prop is making it really easy for them to get access to all these different integrations and tooling without having to learn it, and then being able to very easily say, I wanna deploy this into a cluster — again, whether it's a nested cluster or a host cluster. But the next layer on top of that is how we also share those abilities with other teams. If I build my application profile and I'm developing an application, I should be able to share it with my team members.
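As a rough illustration of the application-profile idea Saad describes — a single declarative model that bundles a container, a database service, and a Helm chart, independent of where it eventually lands — here is a minimal sketch using plain Python data structures. The field names are hypothetical, not Spectro Cloud's actual profile schema.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Dict, List


@dataclass
class ProfileLayer:
    """One layer of an application profile: a container, a Helm chart, or a managed service."""
    kind: str                      # e.g. "container", "helm", "service"
    name: str
    spec: Dict[str, str] = field(default_factory=dict)


@dataclass
class ApplicationProfile:
    """A deployment-target-agnostic description of an application."""
    name: str
    layers: List[ProfileLayer] = field(default_factory=list)

    def render(self) -> str:
        return json.dumps(asdict(self), indent=2)


profile = ApplicationProfile(
    name="payments-api",
    layers=[
        ProfileLayer("container", "api", {"image": "registry.example.com/payments:1.4.2", "port": "8080"}),
        ProfileLayer("service", "db", {"type": "postgres", "size": "small"}),
        ProfileLayer("helm", "cache", {"chart": "redis", "version": "17.x"}),
    ],
)

# The same profile can be handed to a nested cluster or a host cluster;
# the developer never has to specify which.
print(profile.render())
```

The design point is that placement is a platform decision: the profile itself never changes whether it is scheduled into a shared host cluster or a per-team virtual cluster.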
I can be saying, hey Tenry, why don't you also take a look at my app profile, and let's build and collaborate together on that. So it's about collaboration and being able to move really fast. >> I mean, more developers gotta be more productive. That's the number one hit here. Great job. >> Exactly. >> Last question before we run out of time: is this GA now? Can folks get their hands on it, and where? >> Yes, yeah. It is GA, and available both as a SaaS and also self-hosted. >> Awesome. Guys, thank you so much for joining us. Congratulations on the announcement and the momentum that Spectro Cloud is empowering itself with. We appreciate your insights and your time. >> Thank you. Thank you so much. >> Our pleasure. >> Thanks for having us. >> For our guests and John Furrier, I'm Lisa Martin, here live in Michigan at KubeCon + CloudNativeCon NA '22. Our next guests join us in just a minute, so stick around.
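Tenry's cluster-sprawl argument above — per-developer clusters sit idle most of the week, while namespaces on one big cluster don't isolate hard enough — comes down to simple arithmetic. The sketch below runs the comparison with made-up numbers; node prices, team size, and utilization are assumptions, not Spectro Cloud figures.

```python
# Rough cost comparison: dedicated per-developer clusters vs. nested clusters
# carved out of one shared host cluster. All inputs are illustrative assumptions.

DEVELOPERS = 20
NODE_COST_PER_MONTH = 150.0          # assumed price of one worker node
NODES_PER_DEDICATED_CLUSTER = 3      # smallest "real" cluster per developer
BUSY_FRACTION = 0.25                 # ~40 busy hours out of a 168-hour week

# Option 1: every developer gets their own cluster, billed 24/7 whether used or not.
dedicated_cost = DEVELOPERS * NODES_PER_DEDICATED_CLUSTER * NODE_COST_PER_MONTH

# Option 2: one shared host cluster sized for *concurrent* demand,
# with lightweight nested clusters created on request.
peak_concurrent_devs = max(1, round(DEVELOPERS * BUSY_FRACTION))
shared_nodes = peak_concurrent_devs * NODES_PER_DEDICATED_CLUSTER
shared_cost = shared_nodes * NODE_COST_PER_MONTH

print(f"dedicated clusters:            ${dedicated_cost:,.0f}/month")
print(f"shared host + nested clusters: ${shared_cost:,.0f}/month")
print(f"savings:                       {100 * (1 - shared_cost / dedicated_cost):.0f}%")
```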

Published Date : Oct 27 2022



Michael Foster & Doron Caspin, Red Hat | KubeCon + CloudNativeCon NA 2022


 

(upbeat music) >> Hey guys, welcome back to the show floor of KubeCon + CloudNativeCon '22 North America from Detroit, Michigan. Lisa Martin here with John Furrier. This is day one, John at theCUBE's coverage. >> CUBE's coverage. >> theCUBE's coverage of KubeCon. Try saying that five times fast. Day one, we have three wall-to-wall days. We've been talking about Kubernetes, containers, adoption, cloud adoption, app modernization all morning. We can't talk about those things without addressing security. >> Yeah, this segment we're going to hear container and Kubernetes security for modern application 'cause the enterprise are moving there. And this segment with Red Hat's going to be important because they are the leader in the enterprise when it comes to open source in Linux. So this is going to be a very fun segment. >> Very fun segment. Two guests from Red Hat join us. Please welcome Doron Caspin, Senior Principal Product Manager at Red Hat. Michael Foster joins us as well, Principal Product Marketing Manager and StackRox Community Lead at Red Hat. Guys, great to have you on the program. >> Thanks for having us. >> Thank you for having us. >> It's awesome. So Michael StackRox acquisition's been about a year. You got some news? >> Yeah, 18 months. >> Unpack that for us. >> It's been 18 months, yeah. So StackRox in 2017, originally we shifted to be the Kubernetes-native security platform. That was our goal, that was our vision. Red Hat obviously saw a lot of powerful, let's say, mission statement in that, and they bought us in 2021. Pre-acquisition we were looking to create a cloud service. Originally we ran on Kubernetes platforms, we had an operator and things like that. Now we are looking to basically bring customers in into our service preview for ACS as a cloud service. That's very exciting. Security conversation is top notch right now. It's an all time high. You can't go with anywhere without talking about security. And specifically in the code, we were talking before we came on camera, the software supply chain is real. It's not just about verification. Where do you guys see the challenges right now? Containers having, even scanning them is not good enough. First of all, you got to scan them and that may not be good enough. Where's the security challenges and where's the opportunity? >> I think a little bit of it is a new way of thinking. The speed of security is actually does make you secure. We want to keep our images up and fresh and updated and we also want to make sure that we're keeping the open source and the different images that we're bringing in secure. Doron, I know you have some things to say about that too. He's been working tirelessly on the cloud service. >> Yeah, I think that one thing, you need to trust your sources. Even if in the open source world, you don't want to copy paste libraries from the web. And most of our customers using third party vendors and getting images from different location, we need to trust our sources and we have a really good, even if you have really good scanning solution, you not always can trust it. You need to have a good solution for that. >> And you guys are having news, you're announcing the Red Hat Advanced Cluster Security Cloud Service. >> Yes. >> What is that? >> So we took StackRox and we took the opportunity to make it as a cloud services so customer can consume the product as a cloud services as a start offering and customer can buy it through for Amazon Marketplace and in the future Azure Marketplace. 
So customer can use it for the AKS and EKS and AKS and also of course OpenShift. So we are not specifically for OpenShift. We're not just OpenShift. We also provide support for EKS and AKS. So we provided the capability to secure the whole cloud posture. We know customer are not only OpenShift or not only EKS. We have both. We have free cloud or full cloud. So we have open. >> So it's not just OpenShift, it's Kubernetes, environments, all together. >> Doron: All together, yeah. >> Lisa: Meeting customers where they are. >> Yeah, exactly. And we focus on, we are not trying to boil the ocean or solve the whole cloud security posture. We try to solve the Kubernetes security cluster. It's very unique and very need unique solution for that. It's not just added value in our cloud security solution. We think it's something special for Kubernetes and this is what Red that is aiming to. To solve this issue. >> And the ACS platform really doesn't change at all. It's just how they're consuming it. It's a lot quicker in the cloud. Time to value is right there. As soon as you start up a Kubernetes cluster, you can get started with ACS cloud service and get going really quickly. >> I'm going to ask you guys a very simple question, but I heard it in the bar in the lobby last night. Practitioners talking and they were excited about the Red Hat opportunity. They actually asked a question, where do I go and get some free Red Hat to test some Kubernetes out and run helm or whatever. They want to play around. And do you guys have a program for someone to get start for free? >> Yeah, so the cloud service specifically, we're going to service preview. So if people sign up, they'll be able to test it out and give us feedback. That's what we're looking for. >> John: Is that a Sandbox or is that going to be in the cloud? >> They can run it in their own environment. So they can sign up. >> John: Free. >> Doron: Yeah, free. >> For the service preview. All we're asking for is for customer feedback. And I know it's actually getting busy there. It's starting December. So the quicker people are, the better. >> So my friend at the lobby I was talking to, I told you it was free. I gave you the sandbox, but check out your cloud too. >> And we also have the open source version so you can download it and use it. >> Yeah, people want to know how to get involved. I'm getting a lot more folks coming to Red Hat from the open source side that want to get their feet wet. That's been a lot of people rarely interested. That's a real testament to the product leadership. Congratulations. >> Yeah, thank you. >> So what are the key challenges that you have on your roadmap right now? You got the products out there, what's the current stake? Can you scope the adoption? Can you share where we're at? What people are doing specifically and the real challenges? >> I think one of the biggest challenges is talking with customers with a slightly, I don't want to say outdated, but an older approach to security. You hear things like malware pop up and it's like, well, really what we should be doing is keeping things into low and medium vulnerabilities, looking at the configuration, managing risk accordingly. Having disparate security tools or different teams doing various things, it's really hard to get a security picture of what's going on in the cluster. That's some of the biggest challenges that we talk with customers about. >> And in terms of resolving those challenges, you mentioned malware, we talk about ransomware. 
It's a household word these days. It's no longer, are we going to get hit? It's when? It's what's the severity? It's how often? How are you guys helping customers to dial down some of the risk that's inherent and only growing these days? >> Yeah, risk, it's a tough word to generalize, but our whole goal is to give you as much security information in a way that's consumable so that you can evaluate your risk, set policies, and then enforce them early on in the cluster or early on in the development pipeline so that your developers get the security information they need, hopefully asynchronously. That's the best way to do it. It's nice and quick, but yeah. I don't know if Doron you want to add to that? >> Yeah, so I think, yeah, we know that ransomware, again, it's a big world for everyone and we understand the area of the boundaries where we want to, what we want to protect. And we think it's about policies and where we enforce it. So, and if you can enforce it on, we know that as we discussed before that you can scan the image, but we never know what is in it until you really run it. So one of the thing that we we provide is runtime scanning. So you can scan and you can have policy in runtime. So enforce things in runtime. But even if one image got in a way and get to your cluster and run on somewhere, we can stop it in runtime. >> Yeah. And even with the runtime enforcement, the biggest thing we have to educate customers on is that's the last-ditch effort. We want to get these security controls as early as possible. That's where the value's going to be. So we don't want to be blocking things from getting to staging six weeks after developers have been working on a project. >> I want to get you guys thoughts on developer productivity. Had Docker CEO on earlier and since then I had a couple people messaging me. Love the vision of Docker, but Docker Hub has some legacy and it might not, has does something kind of adoption that some people think it does. Are people moving 'cause there times they want to have these their own places? No one place or maybe there is, or how do you guys see the movement of say Docker Hub to just using containers? I don't need to be Docker Hub. What's the vis-a-vis competition? >> I mean working with open source with Red Hat, you have to meet the developers where they are. If your tool isn't cutting it for developers, they're going to find a new tool and really they're the engine, the growth engine of a lot of these technologies. So again, if Docker, I don't want to speak about Docker or what they're doing specifically, but I know that they pretty much kicked off the container revolution and got this whole thing started. >> A lot of people are using your environment too. We're hearing a lot of uptake on the Red Hat side too. So, this is open source help, it all sorts stuff out in the end, like you said, but you guys are getting a lot of traction there. Can you share what's happening there? >> I think one of the biggest things from a developer experience that I've seen is the universal base image that people are using. I can speak from a security standpoint, it's awesome that you have a base image where you can make one change or one issue and it can impact a lot of different applications. That's one of the big benefits that I see in adoption. >> What are some of the business, I'm curious what some of the business outcomes are. You talked about faster time to value obviously being able to get security shifted left and from a control perspective. 
but what are some of the, if I'm a business, if I'm a telco or a healthcare organization or a financial organization, what are some of the top line benefits that this can bubble up to impact? >> I mean for me, with those two providers, compliance is a massive one. And just having an overall look at what's going on in your clusters, in your environments so that when audit time comes, you're prepared. You can get through that extremely quickly. And then as well, when something inevitably does happen, you can get a good image of all of like, let's say a Log4Shell happens, you know exactly what clusters are affected. The triage time is a lot quicker. Developers can get back to developing and then yeah, you can get through it. >> One thing that we see that customers compliance is huge. >> Yes. And we don't want to, the old way was that, okay, I will provision a cluster and I will do scans and find things, but I need to do for PCI DSS for example. Today the customer want to provision in advance a PCI DSS cluster. So you need to do the compliance before you provision the cluster and make all the configuration already baked for PCI DSS or HIPAA compliance or FedRAMP. And this is where we try to use our compliance, we have tools for compliance today on OpenShift and other clusters and other distribution, but you can do this in advance before you even provision the cluster. And we also have tools to enforce it after that, after your provision, but you have to do it again before and after to make it more feasible. >> Advanced cluster management and the compliance operator really help with that. That's why OpenShift Platform Plus as a bundle is so popular. Just being able to know that when a cluster gets provision, it's going to be in compliance with whatever the healthcare provider is using. And then you can automatically have ACS as well pop up so you know exactly what applications are running, you know it's in compliance. I mean that's the speed. >> You mentioned the word operator, I get triggering word now for me because operator role is changing significantly on this next wave coming because of the automation. They're operating, but they're also devs too. They're developing and composing. It's almost like a dashboard, Lego blocks. The operator's not just manually racking and stacking like the old days, I'm oversimplifying it, but the new operators running stuff, they got observability, they got coding, their servicing policy. There's a lot going on. There's a lot of knobs. Is it going to get simpler? How do you guys see the org structures changing to fill the gap on what should be a very simple, turn some knobs, operate at scale? >> Well, when StackRox originally got acquired, one of the first things we did was put ACS into an operator and it actually made the application life cycle so much easier. It was very easy in the console to go and say, Hey yeah, I want ACS my cluster, click it. It would get provisioned. New clusters would get provisioned automatically. So underneath it might get more complicated. But in terms of the application lifecycle, operators make things so much easier. >> And of course I saw, I was lucky enough with Lisa to see Project Wisdom in AnsibleFest. You going to say, Hey, Red Hat, spin up the clusters and just magically will be voice activated. Starting to see AI come in. So again, operations operator is got to dev vibe and an SRE vibe, but it's not that direct. Something's happening there. We're trying to put our finger on. What do you guys think is happening? 
What's the real? What's the action? What's transforming? >> That's a good question. I think in general, things just move to the developers all the time. I mean, we talk about shift left security, everything's always going that way. Developers how they're handing everything. I'm not sure exactly. Doron, do you have any thoughts on that. >> Doron, what's your reaction? You can just, it's okay, say what you want. >> So I spoke with one of our customers yesterday and they say that in the last years, we developed tons of code just to operate their infrastructure. That if developers, so five or six years ago when a developer wanted VM, it will take him a week to get a VM because they need all their approval and someone need to actually provision this VM on VMware. And today they automate all the way end-to-end and it take two minutes to get a VM for developer. So operators are becoming developers as you said, and they develop code and they make the infrastructure as code and infrastructure as operator to make it more easy for the business to run. >> And then also if you add in DataOps, AIOps, DataOps, Security Ops, that's the new IT. It seems to be the new IT is the stuff that's scaling, a lot of data's coming in, you got security. So all that's got to be brought in. How do you guys view that into the equation? >> Oh, I mean you become big generalists. I think there's a reason why those cloud security or cloud professional certificates are becoming so popular. You have to know a lot about all the different applications, be able to code it, automate it, like you said, hopefully everything as code. And then it also makes it easy for security tools to come in and look and examine where the vulnerabilities are when those things are as code. So because you're going and developing all this automation, you do become, let's say a generalist. >> We've been hearing on theCUBE here and we've been hearing the industry, burnout, associated with security professionals and some DataOps because the tsunami of data, tsunami of breaches, a lot of engineers getting called in the middle of the night. So that's not automated. So this got to get solved quickly, scaled up quickly. >> Yes. There's two part question there. I think in terms of the burnout aspect, you better send some love to your security team because they only get called when things get broken and when they're doing a great job you never hear about them. So I think that's one of the things, it's a thankless profession. From the second part, if you have the right tools in place so that when something does hit the fan and does break, then you can make an automated or a specific decision upstream to change that, then things become easy. It's when the tools aren't in place and you have desperate environments so that when a Log4Shell or something like that comes in, you're scrambling trying to figure out what clusters are where and where you're impacted. >> Point of attack, remediate fast. That seems to be the new move. >> Yeah. And you do need to know exactly what's going on in your clusters and how to remediate it quickly, how to get the most impact with one change. >> And that makes sense. The service area is expanding. More things are being pushed. So things will, whether it's a zero day vulnerability or just attack. >> Just mix, yeah. Customer automate their all of things, but it's good and bad. 
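Picking up Michael's Log4Shell point from a moment ago — the scramble is figuring out which clusters and workloads are actually running an affected image — here is a minimal sketch of that triage step using the standard Kubernetes Python client. The vulnerable-image list is a placeholder; a real workflow would pull it from a scanner or SBOM tooling rather than hard-code it.

```python
from kubernetes import client, config

# Placeholder: in practice this set would come from your image scanner / SBOM tooling.
VULNERABLE_IMAGES = {
    "registry.example.com/payments:1.4.2",
    "registry.example.com/search:2.0.0",
}


def find_affected_workloads():
    """List deployments (across all namespaces) that run a known-vulnerable image."""
    config.load_kube_config()          # or config.load_incluster_config() when run inside a pod
    apps = client.AppsV1Api()
    hits = []
    for dep in apps.list_deployment_for_all_namespaces().items:
        for container in dep.spec.template.spec.containers:
            if container.image in VULNERABLE_IMAGES:
                hits.append((dep.metadata.namespace, dep.metadata.name, container.image))
    return hits


if __name__ == "__main__":
    for ns, name, image in find_affected_workloads():
        print(f"{ns}/{name} runs {image}")
```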
Some customer told us they, I think Spotify lost the whole a full zone because of one mistake of a customer because they automate everything and you make one mistake. >> It scale the failure really. >> Exactly. Scaled the failure really fast. >> That was actually few contact I think four years ago. They talked about it. It was a great learning experience. >> It worked double edge sword there. >> Yeah. So definitely we need to, again, scale automation, test automation way too, you need to hold the drills around data. >> Yeah, you have to know the impact. There's a lot of talk in the security space about what you can and can't automate. And by default when you install ACS, everything is non-enforced. You have to have an admission control. >> How are you guys seeing your customers? Obviously Red Hat's got a great customer base. How are they adopting to the managed service wave that's coming? People are liking the managed services now because they maybe have skills gap issues. So managed service is becoming a big part of the portfolio. What's your guys' take on the managed services piece? >> It's just time to value. You're developing a new application, you need to get it out there quick. If somebody, your competitor gets out there a month before you do, that's a huge market advantage. >> So you care how you got there. >> Exactly. And so we've had so much Kubernetes expertise over the last 10 or so, 10 plus year or well, Kubernetes for seven plus years at Red Hat, that why wouldn't you leverage that knowledge internally so you can get your application. >> Why change your toolchain and your workflows go faster and take advantage of the managed service because it's just about getting from point A to point B. >> Exactly. >> Well, in time to value is, you mentioned that it's not a trivial term, it's not a marketing term. There's a lot of impact that can be made. Organizations that can move faster, that can iterate faster, develop what their customers are looking for so that they have that competitive advantage. It's definitely not something that's trivial. >> Yeah. And working in marketing, whenever you get that new feature out and I can go and chat about it online, it's always awesome. You always get customers interests. >> Pushing new code, being secure. What's next for you guys? What's on the agenda? What's around the corner? We'll see a lot of Red Hat at re:Invent. Obviously your relationship with AWS as strong as a company. Multi-cloud is here. Supercloud as we've been saying. Supercloud is a thing. What's next for you guys? >> So we launch the cloud services and the idea that we will get feedback from customers. We are not going GA. We're not going to sell it for now. We want to get customers, we want to get feedback to make the product as best what we can sell and best we can give for our customers and get feedback. And when we go GA and we start selling this product, we will get the best product in the market. So this is our goal. We want to get the customer in the loop and get as much as feedback as we can. And also we working very closely with our customers, our existing customers to announce the product to add more and more features what the customer needs. It's all about supply chain. I don't like it, but we have to say, it's all about making things more automated and make things more easy for our customer to use to have security in the Kubernetes environment. >> So where can your customers go? Clearly, you've made a big impact on our viewers with your conversation today. 
Where are they going to be able to go to get their hands on the release? >> So you can find it on online. We have a website to sign up for this program. It's on my blog. We have a blog out there for ACS cloud services. You can just go there, sign up, and we will contact the customer. >> Yeah. And there's another way, if you ever want to get your hands on it and you can do it for free, Open Source StackRox. The product is open source completely. And I would love feedback in Slack channel. It's one of the, we also get a ton of feedback from people who aren't actually paying customers and they contribute upstream. So that's an awesome way to get started. But like you said, you go to, if you search ACS cloud service and service preview. Don't have to be a Red Hat customer. Just if you're running a CNCF compliant Kubernetes version. we'd love to hear from you. >> All open source, all out in the open. >> Yep. >> Getting it available to the customers, the non-customers, they hopefully pending customers. Guys, thank you so much for joining John and me talking about the new release, the evolution of StackRox in the last season of 18 months. Lot of good stuff here. I think you've done a great job of getting the audience excited about what you're releasing. Thank you for your time. >> Thank you. >> Thank you. >> For our guest and for John Furrier, Lisa Martin here in Detroit, KubeCon + CloudNativeCon North America. Coming to you live, we'll be back with our next guest in just a minute. (gentle music)
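A recurring theme in this conversation is getting security controls in front of deployment rather than relying on runtime enforcement as a last resort. As a generic illustration of that idea — not ACS's actual policy engine — here is a small sketch of the kind of checks a CI gate or validating admission webhook might run against a deployment manifest before it ever reaches a cluster.

```python
# Minimal sketch of pre-deployment policy checks (generic, not the ACS policy engine).
# A real setup would run these in CI or behind a validating admission webhook.

ALLOWED_REGISTRIES = ("registry.example.com/",)   # assumption: an internal registry allowlist


def check_manifest(deployment: dict) -> list:
    """Return a list of policy violations for a Deployment-like manifest."""
    violations = []
    containers = (
        deployment.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    for c in containers:
        image = c.get("image", "")
        if not image.startswith(ALLOWED_REGISTRIES):
            violations.append(f"{c.get('name')}: image not from an approved registry")
        if image.endswith(":latest") or ":" not in image:
            violations.append(f"{c.get('name')}: image tag must be pinned")
        if c.get("securityContext", {}).get("privileged"):
            violations.append(f"{c.get('name')}: privileged containers are not allowed")
    return violations


manifest = {
    "spec": {"template": {"spec": {"containers": [
        {"name": "api", "image": "registry.example.com/payments:1.4.2"},
        {"name": "sidecar", "image": "docker.io/library/busybox:latest"},
    ]}}}
}

for v in check_manifest(manifest):
    print("DENY:", v)
```

Catching the unpinned, off-registry sidecar here, weeks before staging, is exactly the "shift the control as early as possible" point made above; runtime enforcement then only has to handle what slips through.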

Published Date : Oct 27 2022



Thomas Cornely, Induprakas Keri & Eric Lockard | Accelerate Hybrid Cloud with Nutanix & Microsoft


 

(gentle music) >> Okay, we're back with the hybrid cloud power panel. I'm Dave Vellante, and with me Eric Lockard who is the Corporate Vice President of Microsoft Azure Specialized. Thomas Cornely is the Senior Vice President of Products at Nutanix and Indu Keri, who's the Senior Vice President of Engineering, NCI and NC2 at Nutanix. Gentlemen, welcome to The Cube. Thanks for coming on. >> It's good to be here. >> Thanks for having us. >> Eric, let's, let's start with you. We hear so much about cloud first. What's driving the need for hybrid cloud for organizations today? I mean, I want to just put everything in the public cloud. >> Yeah, well, I mean the public cloud has a bunch of inherent advantages, right? I mean it's, it has effectively infinite capacity the ability to, you know, innovate without a lot of upfront costs, you know, regions all over the world. So there is a trend towards public cloud, but you know not everything can go to the cloud, especially right away. There's lots of reasons. Customers want to have assets on premise you know, data gravity, sovereignty and so on. And so really hybrid is the way to achieve the best of both worlds, really to kind of leverage the assets and investments that customers have on premise but also take advantage of the cloud for bursting, originality or expansion especially coming out of the pandemic. We saw a lot of this from work from home and and video conferencing and so on driving a lot of cloud adoption. So hybrid is really the way that we see customers achieving the best of both worlds. >> Yeah, makes sense. I want to, Thomas, if you could talk a little bit I don't want to inundate people with the acronyms, but the Nutanix Cloud clusters on Azure, what is that? What problems does it solve? Give us some color there, please. >> Yeah, so, you know, cloud clusters on Azure which we actually call NC2 to make it simple. And so NC2 on Azure is really our solutions for hybrid cloud, right? And you think about hybrid cloud highly desirable, customers want it. They, they know this is the right way to do it for them given that they want to have workloads on premises at the edge, any public clouds, but it's complicated. It's hard to do, right? And the first thing that you deal with is just silos, right? You have different infrastructure that you have to go and deal with. You have different teams, different technologies, different areas of expertise. And dealing with different portals, networking get complicated, security gets complicated. And so you heard me say this already, you know hybrid can be complex. And so what we've done we then NC2 Azure is we make that simple, right? We allow teams to go and basically have a solution that allows you to go and take any application running on premises and move it as-is to any Azure region where NC2 is available. Once it's running there you keep the same operating model, right? And that's, so that actually super valuable to actually go and do this in a simple fashion. Do it faster, and basically do hybrid in a more (indistinct) fashion know for all your applications. And that's what's really special about NC2 today. >> So Thomas, just a quick follow up on that. So you're, you're, if I understand you correctly it's an identical experience. Did I get that right? >> This is the key for us, right? When you think you're sitting on premises you are used to way of doing things of how you run your applications, how you operate, how you protect them. 
And what we do here is we extend the Nutanix operating model to workloads running in Azure using the same core stack that you're running on premises, right? So once you have a cluster, deploy in NC2 Azure, it's going to look like the same cluster that you might be running at the edge or in your own data center, using the same tools, using the same admin constructs to go protect the workloads make them highly available do disaster recovery or secure them. All of that becomes the same. But now you are in Azure, and this is what we've spent a lot of time working with Eric and his teams on is you actually have access now to all of those suites of Azure services (indistinct) from those workloads. So now you get the best of both world, you know and we bridge them together and you to get seamless access of those services between what you get from Nutanix, what you get from Azure. >> Yeah. And as you alluded to this is traditionally been non-trivial and people have been looking forward to this for quite some time. So Indu, I want to understand from an engineering perspective, your team had to work with the Microsoft team, and I'm sure there was this is not just a press release, this is, or a PowerPoint you had to do some some engineering work. So what specific engineering work did you guys do and what's unique about this relative to other solutions in the marketplace? >> So let me start with what's unique about this. And I think Thomas and Eric both did a really good job of describing that. The best way to think about what we are delivering jointly with Microsoft is that it speeds up the journey to the public cloud. You know, one way to think about this is moving to the public cloud is sort of like remodeling your house. And when you start remodeling your house, you know, you find that you start with something and before you know it, you're trying to remodel the entire house. And that's a little bit like what journey to the public cloud sort of starts to look like when you start to refactor applications. Because it wasn't, most of the applications out there today weren't designed for the public cloud to begin with. NC2 allows you to flip that on its head and say that take your application as-is and then lift and shift it to the public cloud at which point you start the refactor journey. And one of the things that you have done really well with the NC2 on Azure is that NC2 is not something that sits by Azure side. It's fully integrated into the Azure fabric especially the software-defined networking, SDN piece. What that means is that, you know you don't have to worry about connecting your NC2 cluster to Azure to some sort of a network pipe. You have direct access to the Azure services from the same application that's now running on an NC2 cluster. And that makes your refactor journey so much easier. Your management claim looks the same, your high performance notes let the NVMe notes they look the same. And really, I mean, other than the fact that you're doing something in the public cloud all the Nutanix goodness that you're used to continue to receive that. There is a lot of secret sauce that we have had to develop as part of this journey. But if we had to pick one that really stands out it is how do we take the complexity, the network complexity offer public cloud, in this case Azure and make it as familiar to Nutanix's customers as the VPC, the virtual private cloud (indistinct) that allows them to really think of their on-prem networking and the public cloud networking in very similar terms. 
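One concrete consequence of the integration described here — NC2 sitting inside Azure's software-defined network rather than beside it — is that an application lifted as-is onto an NC2-hosted VM can call native Azure services directly, without refactoring. A minimal sketch, assuming the Azure Blob Storage Python SDK and a connection string supplied through configuration; the container and file names are examples.

```python
import os
from azure.storage.blob import BlobServiceClient

# Assumption: the app was lifted unchanged onto an NC2-hosted VM; the only addition
# is a connection string provided via configuration.
service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
container = service.get_container_client("nightly-reports")   # example container name


def archive_report(path: str) -> None:
    """Push a locally generated report straight into Azure Blob Storage."""
    with open(path, "rb") as fh:
        container.upload_blob(name=os.path.basename(path), data=fh, overwrite=True)


if __name__ == "__main__":
    archive_report("/var/reports/2022-09-30.csv")
```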
There's a lot more that's done on behind the scenes. And by the way, I'll tell you a funny sort of anecdote. My dad used to say when I grew up that, you know if you really want to grow up, you have to do two things. You have to like build a house and you have to marry your kid off to someone. And I would say our dad a third, do a cloud development with the public cloud provider of the partner. This has been just an absolute amazing journey with Eric and the Microsoft team and we're very grateful for their support. >> I need NC2 for my house. I live in a house that was built and it's 1687 and we connect all the new and it is a bolt on, but the secret sauce, I mean there's, there's a lot there but is it a (indistinct) layer. You didn't just wrap it in a container and shove it into the public cloud. You've done more than that, I'm inferring. >> You know, the, it's actually an infrastructure layer offering on top of (indistinct). You can obviously run various types of platform services. So for example, down the road if you have a containerized application you'll actually be able to take it from on prem and run it on NC2. But the NC2 offer itself, the NC2 offering itself is an infrastructure level offering. And the trick is that the storage that you're used to the high performance storage that you know define Nutanix to begin with the hypervisor that you're used to the network constructs that you're used to light micro segmentation for security purposes, all of them are available to you on NC2 in Azure the same way that we're used to do on-prem. And furthermore, managing all of that through Prism, which is our management interface and management console also remains the same. That makes your security model easier that makes your management challenge easier that makes it much easier for an application person or the IT office to be able to report back to the board that they have started to execute on the cloud mandate and they've done that much faster than they would be able to otherwise. >> Great. Thank you for helping us understand the plumbing. So now Thomas, maybe we can get to like the customers. What, what are you seeing, what are the use cases that are that are going to emerge for this solution? >> Yeah, I mean we've, you know we've had a solution for a while and you know this is now new on Azure is going to extend the reach of the solution and get us closer to the type of use cases that are unique to Azure in terms of those solutions for analytics and so forth. But the kind of key use cases for us the first one you know, talks about it is a migration. You know, we see customers on that cloud journey. They're looking to go and move applications wholesale from on premises to public cloud. You know, we make this very easy because in the end they take the same culture that were around the application and we make them available now in the Azure region. You can do this for any applications. There's no change to the application, no networking change the same IP constraint will work the same whether you're running on premises or in Azure. The app stays exactly the same manage the same way, protected the same way. So that's a big one. And you know, the type of drivers for (indistinct) maybe I want to go do something different or I want to go and shut down the location on premises I need to do that with a given timeline. I can now move first and then take care of optimizing the application to take advantage of all that Azure has to offer. 
So migration and doing that in a simple fashion in a very fast manner is, is a key use case. Another one, and this is classic for leveraging public cloud force, which we're doing on premises IT disaster recovery and something that we refer to as Elastic disaster recovery, being able to go and actually configure a secondary site to protect your on premises workloads. But I think that site sitting in Azure as a small site just enough to hold the data that you're replicating and then use the fact that you cannot get access to resources on demand in Azure to scale out the environment feed over workloads, run them with performance potentially fill them back to on premises, and then shrink back the environment in Azure to again optimize cost and take advantage of the elasticity that you get from public cloud models. Then the last one, building on top of that is just the fact that you cannot get bursting use cases and maybe running a large environment, typically desktop, you know, VDI environments that we see running on premises and I have, you know, a seasonal requirement to go and actually enable more workers to go get access the same solution. You could do this by sizing for the large burst capacity on premises wasting resources during the rest of the year. What we see customers do is optimize what they're running on premises and get access to resources on demand in Azure and basically move the workloads and now basically get combined desktops running on premises desktops running on NC2 on Azure same desktop images, same management, same services and do that as a burst use case during say you're a retailer that has to go and take care of your holiday season. You know, great use case that we see over and over again for our customers, right? And pretty much complimenting the notion of, look I want to go to desktop as a service, but right now I don't want to refactor the entire application stack. I just want to be able to get access to resources on demand in the right place at the right time. >> Makes sense. I mean this is really all about supporting customer's, digital transformations. We all talk about how that was accelerated during the pandemic and but the cloud is a fundamental component of the digital transformations generic. You, you guys have obviously made a commitment between Microsoft and Nutanix to simplify hybrid cloud and that journey to the cloud. How should customers, you know, measure that? What does success look like? What's the ultimate vision here? >> Well, the ultimate vision is really twofold, I think. The one is to, you know first is really to ease a customer's journey to the cloud to allow them to take advantage of all the benefits to the cloud, but to do so without having to rewrite their applications or retrain their administrators and or to obviate their investment that they already have and platforms like Nutanix. And so the work that companies have done together here, you know, first and foremost is really to allow folks to come to the cloud in the way that they want to come to the cloud and take really the best of both worlds, right? Leverage their investment in the capabilities of the Nutanix platform, but do so in conjunction with the advantages and capabilities of Azure. You know, second is really to extend some of the cloud capabilities down onto the on-premise infrastructure. 
And so with investments that we've done together with Azure arc for example, we're really extending the Azure control plane down onto on-premise Nutanix clusters and bringing the capabilities that provides to the Nutanix customer as well as various Azure services like our data services and Azure SQL server. So it's really kind of coming at the problem from two directions. One is from kind of traditional on-premise up into the cloud, and then the second is kind of from the cloud leveraging the investment customers have in on-premise HCI. >> Got it. Thank you. Okay, last question. Maybe each of you could just give us one key takeaway for our audience today. Maybe we start with Thomas and then Indu and then Eric you can bring us home. >> Sure. So the key takeaway is, you know, cloud customers on Azure is now GA you know, this is something that we've had tremendous demand from our customers both from the Microsoft side and the Nutanix side going back years literally, right? People have been wanting to go and see this this is now live GA open for business and you know we're ready to go and engage and ready to scale, right? This is our first step in a long journey in a very key partnership for us at Nutanix. >> Great, Indu. >> In our day, in a prior life about seven or eight years ago, I was a part of a team that took a popular text preparation software and moved it to the public cloud. And that was a journey that took us four years and probably several hundred million dollars. And if we had NC2 then it would've saved us half the money, but more importantly would've gotten there in one third the time. And that's really the value of this. >> Okay. Eric, bring us home please. >> Yeah, I'll just point out that, this is not something that's just bought on or something we started yesterday. This is something the teams both companies have been working on together for years really. And it's a way of deeply integrating Nutanix into the Azure Cloud. And with the ultimate goal of again providing cloud capabilities to the Nutanix customer in a way that they can, you know take advantage of the cloud and then compliment those applications over time with additional Azure services like storage, for example. So it really is a great on-ramp to the cloud for customers who have significant investments in Nutanix clusters on premise. >> Love the co-engineering and the ability to take advantage of those cloud native tools and capabilities, real customer value. Thanks gentlemen. Really appreciate your time. >> Thank you. >> Thank you. >> Okay. Keep it right there. You're watching accelerate hybrid cloud, that journey with Nutanix and Microsoft technology on The Cube, your leader in enterprise and emerging tech coverage. (gentle music)
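The elastic disaster-recovery pattern Thomas described earlier — keep only a small footprint in Azure to receive replication, scale the NC2 environment out on demand during a failover, then shrink it back — is easiest to see as arithmetic. The numbers below are illustrative assumptions, not Nutanix or Azure pricing.

```python
# Pilot-light DR vs. full standby: rough monthly cost comparison.
# All prices and node counts are made-up assumptions for illustration.

NODE_COST_PER_HOUR = 8.0        # assumed cost of one NC2 bare-metal node
HOURS_PER_MONTH = 730

PRODUCTION_NODES = 12           # size needed to actually run the workloads
PILOT_LIGHT_NODES = 3           # just enough to hold the replicated data
DR_EVENT_HOURS = 48             # assumed failover duration this month

# Option 1: full standby environment, always on.
full_standby = PRODUCTION_NODES * NODE_COST_PER_HOUR * HOURS_PER_MONTH

# Option 2: pilot light that bursts to production size only during the DR event.
pilot_light = (
    PILOT_LIGHT_NODES * NODE_COST_PER_HOUR * HOURS_PER_MONTH
    + (PRODUCTION_NODES - PILOT_LIGHT_NODES) * NODE_COST_PER_HOUR * DR_EVENT_HOURS
)

print(f"full standby: ${full_standby:,.0f}/month")
print(f"elastic DR:   ${pilot_light:,.0f}/month")
print(f"saving:       {100 * (1 - pilot_light / full_standby):.0f}%")
```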

Published Date : Sep 30 2022



Steven Jones, AWS, Phil Brotherton, NetApp, & Narayan Bharadwaj, VMware | VMware Explore 2022


 

>>Hey everyone. Welcome back to the Cube's day one coverage of VMware Explorer, 2022 live from San Francisco. I'm Lisa Martin, and I'm basically sitting with the cloud. I got a power panel here with me. You are not gonna wanna miss the segment, please. Welcome, nor Barage I probably did. I do. Okay on that. Great, thank you. VP and GM of cloud solutions at VMware. Thanks for joining us. Field brother tune is back our alumni VP solutions and alliances at NetApp bill. Great to see you in person. Thank you. And Steve Jones, GM SAP, and VMware cloud at Amazon. Welcome guys. Thank you. Pleasure. So we got VMware, NetApp and Amazon. I was telling Phil before we went live, I was snooping around on the NetApp website the other day. And I saw a tagline that said two is the company three is a cloud, but I get to sit with the cloud. This is fantastic. Nora, talk to us about the big news that came out just about 24 hours ago. These three powerhouse, we >>Were super excited. We are celebrating five years of VMware cloud this week. And with three powerhouses here, we're announcing the general availability of VMware cloud and AWS with NetApp on tap. We have AWS FSX. And so this solution is now generally available across all global regions. We are super excited with all our joint customers and partners to bring this to the market. >>So Steve, give us your perspective as AWS as the biggest hyperscaler. Talk about the importance of the partnership and the longstanding partnerships that you've had with both NetApp and VMware. >>Yeah, you bet. So first all, maybe I'll start with Ryan and VMware. So we've had a very long standing partnership with VMware for over five years now. One thing that we've heard consistently from customers is they, they want help in reducing the heavy lifting or the, the friction that typically comes with cloud adoption. And VMware's been right in the trenches with us and helping with that over the years with the VMware cloud on AWS offering. And, and now that we've got NetApp, right, the FSX on tap solution, a managed storage solution that is, is been known and trusted in the on-premises world. Now available since September on AWS, but now available for use with VMware cloud is just amazing for customers who are looking for that agility, >>Right? Phil talk about NetApp has done a phenomenal job in its own digital transformation journey. Talk about that as an enabler for what you announced yesterday and the, and the capabilities that NetApp is able to bring to its customers with VMware and with AWS. >>Yeah. You know, it started, it's interesting because we NetApp's always been a company that works very closely with our partners. VMware has been a huge partner of ours since gosh, 2005 probably, or sometime like that. I started working with Amazon back in about 20 13, 20 14, when we first took on tap and brought it to the Amazon platform in the marketplace ahead of what's. Now FSX ends like a dream to bring a fully managed ONAP onto the world's biggest cloud. So that work you you're really looking at about. I mean, it depends how you look at it, 15 years of work. And then as Ryan was saying that VMware was working in parallel with us on being a first party service on Amazon, we came together and, or Ryan and I came together and VMware and NetApp came together about probably about two years ago now with this vision of what we're announcing today and to have so to have GA of this combination for meaning global availability, anybody can try it today. It's just an amazing day. 
It's really a great day. >>Yeah. It's unbelievable how we have sort of partnered together and hard engineering problems to create a very simple outcome for customers and partners. One of the things, you know, VMware cloud is a very successful service offering with a lot of great consumption and different verticals. Things like cloud migration, you know, transforming your entire, you know, data center and moving to the cloud. Things like, you know, modernizing our apps, disaster recovery now ransomware this week. So really, really exciting uptake and innovation in that whole service. One thing customers always told us that they want more options for storage decouple from compute. And so that really helped customers to lower their total cost of ownership and get to, you know, get even more workloads into VMware cloud. And this partnership really creates that opportunity for us to provide customers with those options. >>Let me give you an example, just I was walking over here just before I walked over here. We were with a customer talking about exactly what Orion's talking about. We were modeling using a TCO calculator that we all put together as well on what we call data intensive workloads, which is in this case, it was a 500 gigabytes per VM. So not a huge amount of data per VM. The, the case study modeled out of 38% cost savings or reduction in total cost, which in the case was like 1.2 million per year of total cost down to 700 million. And just, you could do the, just depends on how many VMs you have and how big odes you have, but that's the kind of cost savings we're talking about. So the, this is a really easy value to talk about. You save a lot of money in it's exactly as nor Ryan said, because we can separate the compute and the storage. Yep. >>Yep. I was just gonna say the reason for that is it used to be with VMware cloud on AWS. If you wanted more storage for your workload, you would have to add another node. So with another node, you would get another compute node. You would get the compute, you'd get the memory and the storage, but now we've actually decoupled the ability to expand the storage footprint from the compute, allowing customers to really expand as their needs grow. And so it's, it's just a lot more flexibility. Yep. That customers had. Yeah. >>Flexibility is key. Every customer needs that they need to be agile. There's always a competitor waiting in the rear view mirror behind any business, waiting to take over. If, if they can't innovate fast enough, if they can't partner with the best of the best to deliver the infrastructure that's needed to enable those business outcomes, I wanna get your perspective, Steve, what are some of the outcomes that when you're talking to customers, you talked about fill the TCO. Those are huge numbers, very compelling. What are some of the other outcomes that customers can expect to achieve from this solution? >>That's a great question. I think customers want the flexibility. We talked about customers absolutely wanna be able to move fast. They're also very demanding customers who have had an experience with solutions like NetApp on tap on premises, right? So they've come to expect enterprise features like thin provisioning, snapshoting cloning, rapid cloning, right? 
And even replication of data. Given that customers now can leverage this type of functionality as well through the NetApp solution with VMC, they're getting all those enterprise-class features from the storage, in combination with what they already had with vSAN and VMC. >>Steve earlier mentioned the word, we kind of took it from VMware or from Amazon: friction. So many workloads run in VMware VMs today that to be able to just simply pick them up as is and move them to Amazon makes cloud adoption, just, I mean, frictionless is an extreme word, but it really lowers the friction to cloud adoption. And as Steve said, then you get all these enterprise features wherever you need to run. >>Just brings speed. >>I was just about to say, it's gotta be the speed. It has to be a huge factor here. Yep, >>Yep. Yeah. >>Sure. One of the things that we've seen with VMware Cloud is operational consistency as a customer value, because when customers are thinking about, you know, complex enterprise apps, moving that to the cloud, they need that operational consistency, which drives down their costs. They don't have to relearn new skills. They're used to VMware, they're used to NetApp. And so this partnership really fosters that operational consistency as a big customer value, and they can reuse those skills and really reapply them in this cloud model. The other thing is the cloud model here is completely managed. If you think about that, right, customers have to do less; VMware, AWS and NetApp are doing more for them. That's true in this model. >>So you're able to really deliver a lot of workforce efficiency, workforce productivity across the stack. >>Absolutely. >>And that's definitely true, because as it gets more complex, how do you manage it? You continue to hear everybody talking about this, right? So a completely managed service by VMware and Amazon is such a savings in management complexity, which then gets back to speed. How do I grow my plant faster? >>I mean, and really at the end of the day, customers are actually able to focus on what differentiates them, versus the management of the underlying infrastructure and storage and all those things that are still critical, but exactly, but >>For the customer to be able to abstract the underlying technology layer and focus on what differentiates them from the competition, that's, like I said, right back to speed. If there's anything we've learned in the last couple of years, it's that that is critical for businesses across every industry; no industry is exempt from this. >>None. One other thing, just an example of what you're talking about, is we all work a lot on modernization techniques, like using Kubernetes and container technologies. So with this solution, you can move an app as is and modernize on the cloud. You can modernize and then move. The flexibility that this enables, so it's sort of like move to the cloud at your rate, is a really big benefit. >>And we've seen so many customer examples of migrate and modernize, is how we like to summarize it, where customers are, you know, migrating and modernizing at their own pace. Yep. And the good thing about the platform and the service is that it is the home for all applications: virtual machines, containers with Kubernetes, backed by local storage and external storage options.
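As a quick sanity check on the TCO example Phil quotes above, the arithmetic below, a simple illustration rather than published pricing, shows how a 38% reduction takes roughly $1.2 million per year down to roughly $700,000.

```python
# Back-of-the-envelope check of the TCO example quoted in the conversation.
# The inputs are the figures mentioned by the speakers, not published pricing.
baseline_annual_cost = 1_200_000   # ~$1.2M/year for the data-intensive workload
quoted_reduction = 0.38            # 38% reduction from decoupling storage from compute

new_annual_cost = baseline_annual_cost * (1 - quoted_reduction)
annual_savings = baseline_annual_cost - new_annual_cost

print(f"New annual cost: ${new_annual_cost:,.0f}")   # ~$744,000, i.e. roughly $700K
print(f"Annual savings:  ${annual_savings:,.0f}")    # ~$456,000
```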
The level of flexibility for all applications is really immense. And that drives down your TCO even more. >>What, from a target customer perspective, Noran, talk about that. Who, who is the target? Obviously I imagine VMware customers, it's NetApp customers, it's AWS, but is there, are there any targets kind of within that, that are really prime candidates for this solution? >>Yeah. A great question. First of all, the, the easy sort of overlap between all of us is our shared customer pool. And so VMware and NetApp have been partners for what, 20 years, something like that. And we have thousands of customers using our joint solutions in the data center. And so that's a very clear target for this solution, as they're considering use cases such as, you know, cloud migration, disaster recovery, virtual desktops, application modernization. So that's a very clear target and we see this day in and day out, obviously there are many other customers that would be interested in this solution, as well as they're considering, you know, AWS and we provide a whole range of consumption options for them. Right. And I think that's one of the, sort of the, the good things about our partnership, including with AWS, where customers can purchase this from VMware can purchase this from AWS and all of these different options, including from our partners really makes it very, very compelling. >>Talk a little bit about from each of your perspectives about the what's in it. For me as a partner of these companies, Steve, we'll start with you. >>I mean, what's in it for me is that it's what my customers have been asking for. And we, we have a long history, I think of providing managed services again, to remove that heavy lifting that customers often just don't want to have to do. Having seen the, the adoption of managed storage offerings, including the, the NetApp solution here and now being able to bring that into the VMware space where they're already using it in an on-premises world, and now they're moving those, those workloads being able to satisfy that need that a customer's asking for is awesome. >>We, every time we're at an AWS event, we are always talking about it's absolute customer obsession, and I know NetApp and VMware well, and know that that is a shared obsession across the three companies. >>Hey, Lisa, let me add one more thing. It's interesting, not everybody sees this, but it's really obvious that the NetApp on-prem installed base with VMware, which is tens of thousands of customers. This is an awesome solution. Not quite as obvious is that every on-prem VMware customer gets that TCO benefit. I mentioned that's not limited to the NetApp on-prem installed base. So we're really excited to be able to expose all the market that hasn't used our products on-prem to this cloud solution. And, and it's really clear customers are adopting the cloud, right? So we're, that's one of the reasons we're so excited about this is it opens up a huge new opportunity to work with new customers for us. Talk >>About those customer conversations, Phil, how, where are they happening at? What level are you talking with customers about migration to cloud? Has it changed in the last couple >>Of years? Oh yeah. You know, I've been working on this for years and a lot of the on-prem conversation, it's been a little bifurcated that on-prem is on-prem and cloud developers or cloud developers. And Amazon's done a huge amount to break that down. VMware getting in the game, a lot of it's networking complexities, those have gone down. 
A lot of people are cross connected and set up today, which that wasn't so true five years ago. So now it's a lot of conversations about, I hear carbon footprint reduction. I hear data all in around data center reduction. The cloud guys are super efficient operators of data center infrastructure. We were talking about different use cases like disaster recovery. It's it's everybody though. It's small companies, it's big companies. They're all sort of moving into this, it call it at least hybrid world. And that's why when I say we're get really excited about this, because it does get rid of a lot of friction for moving loads in those directions, at the rate, the customer wants to do it. >>And that one last really quick thing is I was using NetApp as an example, we have about 300 enterprise workloads. We wanna move to the cloud two, right? And so they're all running VMware, like most, most of the world. And so this solution is, looks really good to us and we're gonna do the exact, I was just out with our CIO. We're going, looking at those 300, which do we just lift and move? Which do we refactor? And how do we do that? In fact, that Ryan was out to dinner with us last night, talking about >>This it's more and more it's being driven top down. So in the early days, and I've been with Amazon for 10 years now. Yep. Early days, it was kind of developer oriented, often initiated projects. Now it's top level CIOs. Exactly. I >>Are two mandates today talking to customers. >>I think of reinvent as an it conference. Now in the way, some of these top down mandates are driven, but listen, I mean, we got great customer interest. We have been in preview for three to six months now, and we've seen a lot of customers were not able to drag their entire data center workloads because of different reasons of PCO data, intensive workloads, et cetera. And we've seen tremendous amounts of interest from them. And we're also seeing a lot of new customers in the pipeline that want to consider VMware cloud now that we have these great storage options. >>So there's a pretty healthy Tam I'm hearing. >>Absolutely. >>I think so. Yeah. It's interesting. Another, just both like WWT and Presidio, channel partners, big, huge channel partners. It takes no selling to explain. We, we just say, Hey, we're doing this. And they start building services. Presidio is here with us talking about a customer win that they got. So this is it. It's easy for people to see why this is a cool, a cool solution. >>The value prop is there >>Definitely >>There's no having appeal the onion to >>Find it. No, the money savings. It's just in what or Ryan said, a lot of people have seen the, the seen an obstacle of cost. Yeah. So the TCO benefit, I mentioned removes that obstacle. And then that opens the door to all the features Steve was talking about of the advanced storage features and things on the platform. >>So is there a customer that's been in beta on this solution that you can talk about in, in terms of what they were looking for, the challenges that you helped them erase and the outcomes they're achieving? >>Yeah, sure. I can. I can provide one example. A large financial customer was looking at this during the preview phase and you know, for, for, for reasons before that were already a customer, but they were not able to attract a lot of their other workloads from other business units. And with this solution, now the service is a much better candidate for those workloads and those business units that had not considered VMware cloud. 
So we're really excited to see new workloads coming from that particular customer, given this particular solution and the whole TCO math for them was very, very straightforward and simple. And this became a more attractive option for that particular customer. >>Is there a shadow it elimination factor here in this technology and who you're selling to? >>Not real, I, don't not intent. Wouldn't intentionally. I wouldn't say yeah, not intentionally. I, it was funny with the customers I was thinking is yes. The question, the customers that are in the preview are seeing the benefits that we're talking about. The, one of the reasons we started the project on our side a number of years ago was this very large cement company was looking for carbon CO2 reduction. Part of that was moving disaster recovery to the cloud. There was a lot of friction in the solution prior to this, the, the customers have done some of the things we're talking about, but there's a, it takes a lot of skill. And we were looking at working with that customer going, how could we simplify this? And that was from our point of NetApp's point of view, it, it drove us to VMware and to AWS saying, can't we pull some of the friction of this out. And I think that that's what we've seen in the, in the previews. And it's, that's what I meant. It's so exciting to go from having say, I know we have about 20 previews right now, going to the globe today is the, is the exciting news today. >>And is the solution here in booze that it can be demoed and folks can kind of get their hands on it. >>Yeah. Yeah. They can go to the VMware cloud booth at the expo and they can get their hands on their demo and they can take it for a test drive. >>Excellent. >>You can run TCO calculators and do your own math and see what you're gonna all this, the all that's integrated today. We >>Also have pilots where we can help walk customers through a scenario of their own. >>Yep. Excellent. Is there, is there a, a joint website that you guys have, we should drive folks to? >>Yeah, it's >>Actually talk about the press release. It's >>It's yours. So >>It's it's prominently on our website. Okay. VMware cloud. It is onc.vmware.com where we also have the other, you know, our corporate marketing websites that have this vmware.com is a great starting point. Yeah. And we feature the solution. Prominently customers can get started today and they can even participate in the hands on labs here and take the solution for a test drive. >>All right. Last question, nor Ryan, we'll start with you on this. Here we are. I love the theme of this event, the center of the multicloud universe. Does it not sound like a Marvel movie? I feel like there should be some, is there any superheroes running around? Cause I really feel like there should be, how is this solution an enabler of allowing customers to really extract the most of value from their multi-cloud world that they're living in? >>Yeah. I mean, look, I mean, our mission is to build, run, managed, secure applications in any cloud, right. And regu has been talking about this with the keynote this morning as well. You know, at least with NetApp, we share a very good joint vision of enabling customers to, you know, place applications with really good TCO across clouds. And so it's really good story I feel. And I think this is a really good step in that direction where customers have choice and flexibility in terms of where they put their applications in the TCO value that they get. >>Awesome. 
Guys, you've gotta come back next time with a customer; we would love to dig in. Maybe at re:Invent we can dig into it more and see a great story of how a customer came together and is really leveraging the power that is sitting next to me here. Thank you all so much for joining me and having this great conversation. Congratulations on the announcement and it being GA. >>Thank you. Awesome. >>Thank you. Thanks Lisa. All right. Fun conversation. I told you, a power panel for my guests. I'm Lisa Martin. You're watching theCUBE; keep it right here for more live coverage of VMware Explore 2022 from downtown San Francisco. We'll be right back with our next guest.

Published Date : Aug 31 2022


Sheila Rohra & Omer Asad, HPE Storage | HPE Discover 2022


 

>> Announcer: "theCUBE" presents HPE Discover 2022. Brought to you by HPE. >> Welcome back to HPE Discover 2022. You're watching "theCUBE's" coverage. This is Day 2, Dave Vellante with John Furrier. Sheila Rohra is here. She's the Senior Vice President and GM of the Data Infrastructure Business at Hewlett Packard Enterprise, and of course, the storage division. And Omer Asad. Welcome back to "theCUBE", Omer. Senior Vice President and General Manager for Cloud Data Services, Hewlett Packard Enterprise storage. Guys, thanks for coming on. Good to see you. >> Thank you. Always a pleasure, man. >> Thank you. >> So Sheila, I'll start with you. Explain the difference. The Data Infrastructure Business and then Omer's Cloud Data Services. You first. >> Okay. So Data Infrastructure Business. So I'm responsible for the primary secondary storage. Basically, what you physically store, the data in a box, I actually own that. So I'm going to have Omer explain his business because he can explain it better than me. (laughing) Go ahead. >> So 100% right. So first, data infrastructure platforms, primary secondary storage. And then what I do from a cloud perspective is wrap up those things into offerings, block storage offerings, data protection offerings, and then put them on top of the GreenLake platform, which is the platform that Antonio and Fidelma talked about on main Keynote stage yesterday. That includes multi-tenancy, customer subscription management, sign on management, and then on top of that we build services. Services are cloud-like services, storage services or block service, data protection service, disaster recovery services. Those services are then launched on top of the platform. Some services like data protection services are software only. Some services are software plus hardware. And the hardware on the platform comes along from the primary storage business and we run the control plane for that block service on the GreenLake platform and that's the cloud service. >> So, I just want to clarify. So what we maybe used to know as 3PAR and Nimble and StoreOnce. Those are the products that you're responsible for? >> That is the primary storage part, right? And just to kind of show that, he and I, we do indeed work together. Right. So if you think about the 3PAR, the primary... Sorry, the Primera, the Alletras, the Nimble, right? All that, right? That's the technology that, you know, my team builds. And what Omer does with his magic is that he turns it into HPE GreenLake for storage, right? And to deliver as a service, right? And basically to create a self-service agility for the customer and also to get a very Cloud operational experience for them. >> So if I'm a customer, just so I get this right, if I'm a customer and I want Hybrid, that's what you're delivering as a Cloud service? >> Yes. >> And I don't care where the data is on-premises, in storage, or on Cloud. >> 100%. >> Is that right? >> So the way that would work is, as a customer, you would come along with the partner, because we're 100% partner-led. You'll come to the GreenLake Console. On the GreenLake Console, you will pick one of our services. Could be a data protection service, could be the block storage service. All services are hybrid in nature. Public Cloud is 100% participant in the ecosystem. You'll choose a service. Once you choose a service, you like the rate card for that service. That rate card is just like a hyperscaler rate card. IOPS, Commitment, MINCOMMIT's, whatever. 
Once you procure that at the price that you like with a partner, you buy the subscription. Then you go to console.greenLake.com, activate your subscription. Once the subscription is activated, if it's a service like block storage, which we talked about yesterday, service will be activated, and our supply chain will send you our platform gear, and that will get activated in your site. Two things, network cable, power cable, dial into the cloud, service gets activated, and you have a cloud control plane. The key difference to remember is that it is cloud-consumption model and cloud-operation model built in together. It is not your traditional as a service, which is just like hardware leasing. >> Yeah, yeah, yeah. >> That's a thing of the past. >> But this answers a question that I had, is how do you transfer or transform from a company that is, you know, selling boxes, of course, most of you are engineers are software engineers, I get that, to one that is selling services. And it sounds like the answer is you've organized, I know it's inside baseball here, but you organize so that you still have, you can build best of breed products and then you can package them into services. >> Omer: 100%. 100%. >> It's separate but complementary organization. >> So the simplest way to look at it would be, we have a platform side at the house that builds the persistence layers, the innovation, the file systems, the speeds and feeds, and then building on top of that, really, really resilient storage services. Then how the customer consumes those storage services, we've got tremendous feedback from our customers, is that the cloud-operational model has won. It's just a very, very simple way to operate it, right? So from a customer's perspective, we have completely abstracted away out hardware, which is in the back. It could be at their own data center, it could be at an MSP, or they could be using a public cloud region. But from an operational perspective, the customer gets a single pane of glass through our service console, whether they're operating stuff on-prem, or they're operating stuff in the public cloud. >> So they get storage no matter what? They want it in the cloud, they got it that way, and if they want it as a service, it just gets shipped. >> 100%. >> They plug it in and it auto configures. >> Omer: It's ready to go. >> That's right. And the key thing is simplicity. We want to take the headache away from our customers, we want our customers to focus on their business outcomes, and their projects, and we're simplifying it through analytics and through this unified cloud platform, right? On like how their data is managed, how they're stored, how they're secured, that's all taken care of in this operational model. >> Okay, so I have a question. So just now the edge, like take me through this. Say I'm a customer, okay I got the data saved on-premise action, cloud, love that. Great, sir. That's a value proposition. Come to HPE because we provide this easily. Yeah. But now at the edge, I want to deploy it out to some edge node. Could be a tower with Telecom, 5G or whatever, I want to box this out there, I want storage. What happens there? Just ship it out there and connects up? Does it work the same way? >> 100%. So from our infrastructure team, you'll consume one or two platforms. You'll consume either the Hyperconverged form factor, SimpliVity, or you might convert, the Converged form factor, which is proliant servers powered by Alletras. Alletra 6Ks. Either of those... 
But it's very different the way you would procure it. What you would procure from us is an edge service. That edge service will come configured with certain amount of compute, certain amount of storage, and a certain amount of data protection. Once you buy that on a dollars per gig per month basis, whichever rate card you prefer, storage rate card or a VMware rate card, that's all you buy. From that point on, the platform team automatically configures the back-end hardware from that attribute-based ordering and that is shipped out to your edge. Dial in the network cable, dial in the power cable, GreenLake cloud discovers it, and then you start running the- >> Self-service, configure it, it just shows up, plug it in, done. >> Omer: Self-service but partner-led. >> Yeah. >> Because we have preferred pricing for our partners. Our partners would come in, they will configure the subscriptions, and then we activate those customers, and then send out the hardware. So it's like a hyperscaler on-prem at-scale kind of a model. >> Yeah, I like it a lot. >> So you guys are in the data business. You run the data portion of Hewlett Packard Enterprise. I used to call it storage, even if we still call it storage but really, it's evolving into data. So what's your vision for the data business and your customer's data vision, if you will? How are you supporting that? >> Well, I want to kick it off, and then I'm going to have my friend, Omer, chime in. But the key thing is that what the first step is is that we have to create a unified platform, and in this case we're creating a unified cloud platform, right? Where there's a single pane of glass to manage all that data, right? And also leveraging lots of analytics and telemetry data that actually comes from our infosite, right? We use all that, we make it easy for the customer, and all they have to say, and they're basically given the answers to the test. "Hey, you know, you may want to increase your capacity. You may want to tweak your performance here." And all the customers are like, "Yes. No. Yes, no." Basically it, right? Accept and not accept, right? That's actually the easiest way. And again, as I said earlier, this frees up the bandwidth for the IT teams so then they actually focus more on the business side of the house, rather than figuring out how to actually manage every single step of the way of the data. >> Got it. >> So it's exactly what Sheila described, right? The way this strategy manifests itself across an operational roadmap for us is the ability to change from a storage vendor to a data services vendor, right? >> Sheila: Right. >> And then once we start monetizing these data services to our customers through the GreenLake platform, which gives us cloud consumption model and a cloud operational model, and then certain data services come with the platform layer, certain data services are software only. But all the services, all the data services that we provide are hybrid in nature, where we say, when you provision storage, you could provision it on-prem, or you can provision it in a hyperscaler environment. The challenge that most of our customers have come back and told us, is like, data center control planes are getting fragmented. On-premises, I mean there's no secrecy about it, right? VMware is the predominant hypervisor, and as a result of that, vCenter is the predominant configuration layer. Then there is the public cloud side, which is through either Ajour, or GCP, or AWS, being one of the largest ones out there. 
But when the customer is dealing with data assets, the persistence layer could be anywhere. It could be in an AWS region, it could be your own data center, or it could be your MSP. But what this does is it creates an immense amount of fragmentation in the context in which the customers understand the data. Essentially, John, the customers are just trying to answer three questions: What is it that I store? How much of it do I store? Should I even be storing it in the first place? And surprisingly, those three questions just haven't been answered. And we've gotten more and more fragmented. So what we are trying to produce for our customers is a context-aware data view, which allows the customer to understand structured and unstructured data, and the lineage of how it is stored in the organization. And essentially, the vision is around simplification and context-aware data management. One of the key things that makes that possible is, again, the age-old InfoSight capability that we have continued to hone and develop over time, which is now up to the stage of like 12 trillion data points coming into the system that are now corroborated to give that back. >> And of course cost-optimizing it as well. We're up against the clock, but take us through the announcements. What's new from when we sort of last talked? I guess it was in September. >> Omer: Right. >> Right. What's new that's being announced here and, or, you know, GA? >> Right. So three major announcements that came out, just to keep on establishing the context from when we were with you last time. So last time we announced the GreenLake backup and recovery service. >> John: Right. >> That was VMware backup and recovery as a complete cloud, sort of SaaS control plane. No backup target management, no BDS server management, no catalog management; it's completely a SaaS service. Provide your vCenter address, boom, off you go. We do the backups, agentless, 100% dedup enabled. We have extended that into the public cloud domain. So now we can back up AWS EC2 and EBS instances within the same constructs. So a single catalog, single backup policy, single protection framework that protects you both in the cloud and on-prem, no fragmentation, no multiple solutions to deploy. And the second one is we've extended our Hyperconverged service to now be what we call Hybrid Cloud On-Demand. So basically, you go to the GreenLake Console control plane, and from there you basically just start configuring virtual machines. It supports VMware and AWS at the same time. So you can provision a virtual machine on-prem, or you can provision a virtual machine in the public cloud. >> Got it. >> And it's the same framework, the same catalog, the same inventory management system across the board. And then, lastly, we extended our block storage service to also become hybrid in nature. >> Got it. >> So you can manage on-prem and AWS EBS assets as well. >> And Sheila, do you still make product announcements, or does Antonio not allow that? (Omer laughing) >> Well, we make product announcements, and you're going to see our product announcements actually done through HPE GreenLake for Block Storage. >> Dave: Oh, okay. >> So our announcements will be coming through that, because we do want to make it as a service. Again, we want to take all of that headache of "What configuration should I buy? How do I actually deploy it? How do I...?" We really want to take that headache away. So you're going to see more feature announcements that are going to come through this.
>> So feature acceleration through GreenLake will be exposed? >> Absolutely. >> This is some cool stuff going on behind the scenes. >> Oh, there's a lot good stuff. >> Hardware still matters, you know. >> Hardware still matters. >> Does it still matter? Does hardware matter? >> Hardware still matters, but what matters more is the experience, and that's actually what we want to bring to the customer. (laughing) >> John: That's good. >> Good answer. >> Omer: 100%. (laughing) >> Guys, thanks so much- >> John: Hardware matters. >> For coming on "theCUBE". Good to see you again. >> John: We got it. >> Thanks. >> And hope the experience was good for you Sheila. >> I know, I know. Thank you. >> Omer: Pleasure as always. >> All right, keep it right there. Dave Vellante and John Furrier will be back from HPE Discover 2022. You're watching "theCUBE". (soft music)

Published Date : Jun 29 2022


Joe Nolte, Allegis Group & Torsten Grabs, Snowflake | Snowflake Summit 2022


 

>>Hey everyone. Welcome back to theCUBE. Lisa Martin, with Dave Vellante. We're here in Las Vegas with Snowflake at Snowflake Summit 22. This is the fourth annual; there's close to 10,000 people here. Lots going on. Customers, partners, analysts, media, everyone talking about all of this news. We've got a couple of guests joining us. We're gonna unpack Snowpark. Torsten Grabs, the director of product management at Snowflake, and Joe Nolte, AI and MDM architect at Allegis Group. Guys, welcome to the program. Thank >>You so much for having >>Us. Isn't it great to be back in person? It is. >>Oh, wonderful. Yes, it >>Is. Indeed. Joe, talk to us a little bit about Allegis Group. What do you do? And then tell us a little bit about your role specifically. >>Well, Allegis Group is a collection of OpCos, operating companies that do staffing. We're one of the biggest staffing companies in North America. We have a presence in EMEA and in the APAC region. So we work to find people jobs, and we help get 'em staffed, and we help companies find people, and we help individuals find >>People. Incredibly important these days, excuse me, incredibly important these days. It is >>Very, it very is right >>There. Tell me a little bit about your role. You are the AI and MDM architect. You wear a lot of hats. >>Okay. So I'm an architect and I support both of those verticals within the company. So I have a set of engineers and data scientists that work with me on the AI side, and we build data science models and solutions that help support what the company wants to do, right? So we build it to make business processes faster and more streamlined. And we really see Snowpark and Python helping us to accelerate that and accelerate that delivery. So we're very excited about it. >>Explain Snowpark for people. I mean, I look at it as this wonderful sandbox. You can bring your own developer tools in, but explain in your words what it >>Is. Yeah. So we got interested in Snowpark because increasingly the feedback was that not everybody wants to interact with Snowflake through SQL. There are other languages that they would prefer to use, including Java, Scala and, of course, Python. Right? So that led to our work on Snowpark, where we're building an infrastructure that allows us to host other languages natively on the Snowflake compute platform. And now here, what we just announced is Snowpark for Python in public preview. So now you have the ability to natively run Python code on Snowflake and benefit from the thousands of packages and libraries that the open source community around Python has contributed over the years. And that's a huge benefit for data scientists, ML practitioners and data engineers, because those are the languages and packages that are popular with them. So yeah, we very much look forward to working with the likes of you and other data scientists and data engineers around the Python ecosystem. >>Yeah. And Snowpark helps reduce the architectural footprint, and it makes the data pipelines a little easier and less complex. We had a pipeline that works on DMV data, and we converted that entire pipeline from Python running on a VM to running directly on Snowflake. Right. We were able to eliminate code because you don't have to worry about multithreading, right? Because we can just set the warehouse size through a task. No more multithreading, throw that code away, don't need to do it anymore. Right.
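A minimal sketch of the kind of conversion Joe describes here, moving a Python pipeline off a VM and into a Snowpark stored procedure that a Snowflake task or an ETL tool can call. The connection parameters, table names, stage and warehouse names are hypothetical placeholders, and the actual pipeline logic is not shown.

```python
# Minimal sketch of running pipeline code as a Snowpark Python stored procedure,
# assuming a configured Snowflake account. Connection details, table, stage and
# warehouse names are hypothetical placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.types import StringType

def dmv_pipeline(session: Session) -> str:
    # Placeholder pipeline body: read, transform, write, all pushed down to the
    # warehouse, so there is no client-side multithreading to manage.
    df = session.table("RAW.DMV_RECORDS")
    cleaned = df.filter(df["STATE"].is_not_null()).drop_duplicates()
    cleaned.write.save_as_table("CURATED.DMV_RECORDS", mode="overwrite")
    return "DMV pipeline completed"

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "role": "ETL_ROLE", "warehouse": "ETL_WH",
    "database": "ANALYTICS", "schema": "PUBLIC",
}).create()

# Register the function as a permanent stored procedure the ETL tool can call.
session.sproc.register(
    func=dmv_pipeline,
    name="DMV_PIPELINE",
    return_type=StringType(),
    packages=["snowflake-snowpark-python"],
    is_permanent=True,
    stage_location="@ETL_STAGE",
    replace=True,
)

# Sizing the compute is just a warehouse setting, and scheduling is a one-line task.
session.sql("ALTER WAREHOUSE ETL_WH SET WAREHOUSE_SIZE = 'LARGE'").collect()
session.sql("""
    CREATE OR REPLACE TASK DMV_PIPELINE_TASK
      WAREHOUSE = ETL_WH
      SCHEDULE = 'USING CRON 0 2 * * * UTC'
    AS CALL DMV_PIPELINE()
""").collect()
```

Because the work runs inside Snowflake, scaling is the warehouse-size setting shown above rather than client-side multithreading, which is the simplification Joe goes on to describe.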
We get the same results, but the architecture to run that pipeline gets immensely easier because it's a store procedure that's already there. And implementing that calling to that store procedure is very easy. The architecture that we use today uses six different components just to be able to run that Python code on a VM within our ecosystem to make sure that it runs on time and is scheduled and all of that. Right. But with snowflake, with snowflake and snow park and snowflake Python, it's two components. It's the store procedure and our ETL tool calling it. >>Okay. So you've simplified that, that stack. Yes. And, and eliminated all the other stuff that you had to do that now Snowflake's doing, am I correct? That you're actually taking the application development stack and the analytics stack and bringing them together? Are they merging? >>I don't know. I think in a way I'm not real sure how I would answer that question to be quite honest. I think with stream lit, there's a little bit of application that's gonna be down there. So you could maybe start to say that I'd have to see how that carries out and what we do and what we produce to really give you an answer to that. But yeah, maybe in a >>Little bit. Well, the reason I asked you is because you talk, we always talk about injecting data into apps, injecting machine intelligence and ML and AI into apps, but there are two separate stacks today. Aren't they >>Certainly the two are getting closer >>To Python Python. It gets a little better. Explain that, >>Explain, explain how >>That I just like in the keynote, right? The other day was SRE. When she showed her sample application, you can start to see that cuz you can do some data pipelining and data building and then throw that into a training module within Python, right down inside a snowflake and have it sitting there. Then you can use something like stream lit to, to expose it to your users. Right? We were talking about that the other day, about how do you get an ML and AI, after you have it running in front of people, we have a model right now that is a Mo a predictive and prescriptive model of one of our top KPIs. Right. And right now we can show it to everybody in the company, but it's through a Jupyter notebook. How do I deliver it? How do I get it in the front of people? So they can use it well with what we saw was streamlet, right? It's a perfect match. And then we can compile it. It's right down there on snowflake. And it's completely easier time to delivery to production because since it's already part of snowflake, there's no architectural review, right. As long as the code passes code review, and it's not poorly written code and isn't using a library that's dangerous, right. It's a simple deployment to production. So because it's encapsulated inside of that snowflake environment, we have approval to just use it. However we see fit. >>It's very, so that code delivery, that code review has to occur irrespective of, you know, not always whatever you're running it on. Okay. So I get that. And, and, but you, it's a frictionless environment you're saying, right. What would you have had to do prior to snowflake that you don't have to do now? >>Well, one, it's a longer review process to allow me to push the solution into production, right. Because I have to explain to my InfoSec people, right? My other it's not >>Trusted. >>Well, well don't use that word. No. Right? It got, there are checks and balances in everything that we do, >>It has to be verified. 
And >>That's all, it's, it's part of the, the, what I like to call the good bureaucracy, right? Those processes are in place to help all of us stay protected. >>It's the checklist. Yeah. That you >>Gotta go to. >>That's all it is. It's like fly on a plane. You, >>But that checklist gets smaller. And sometimes it's just one box now with, with Python through snow park, running down on the snowflake platform. And that's, that's the real advantage because we can do things faster. Right? We can do things easier, right? We're doing some mathematical data science right now and we're doing it through SQL, but Python will open that up much easier and allow us to deliver faster and more accurate results and easier not to mention, we're gonna try to bolt on the hybrid tables to that afterwards. >>Oh, we had talk about that. So can you, and I don't, I don't need an exact metric, but when you say faster talking 10% faster, 20% faster, 50% path >>Faster, it really depends on the solution. >>Well, gimme a range of, of the worst case, best case. >>I, I really don't have that. I don't, I wish I did. I wish I had that for you, but I really don't have >>It. I mean, obviously it's meaningful. I mean, if >>It is meaningful, it >>Has a business impact. It'll >>Be FA I think what it will do is it will speed up our work inside of our iterations. So we can then, you know, look at the code sooner. Right. And evaluate it sooner, measure it sooner, measure it faster. >>So is it fair to say that as a result, you can do more. Yeah. That's to, >>We be able do more well, and it will enable more of our people because they're used to working in Python. >>Can you talk a little bit about, from an enablement perspective, let's go up the stack to the folks at Allegis who are on the front lines, helping people get jobs. What are some of the benefits that having snow park for Python under the hood, how does it facilitate them being able to get access to data, to deliver what they need to, to their clients? >>Well, I think what we would use snowflake for a Python for there is when we're building them tools to let them know whether or not a user or a piece of talent is already within our system. Right. Things like that. Right. That's how we would leverage that. But again, it's also new. We're still figuring out what solutions we would move to Python. We are, we have some targeted, like we're, I have developers that are waiting for this and they're, and they're in private preview. Now they're playing around with it. They're ready to start using it. They're ready to start doing some analytical work on it, to get some of our analytical work out of, out of GCP. Right. Because that's where it is right now. Right. But all the data's in snowflake and it just, but we need to move that down now and take the data outta the data wasn't in snowflake before. So there, so the dashboards are up in GCP, but now that we've moved all of that data down in, down in the snowflake, the team that did that, those analytical dashboards, they want to use Python because that's the way it's written right now. So it's an easier transformation, an easier migration off of GCP and get us into snow, doing everything in snowflake, which is what we want. >>So you're saying you're doing the visualization in GCP. Is that righting? >>It's just some dashboarding. That's all, >>Not even visualization. You won't even give for. You won't even give me that. Okay. Okay. But >>Cause it's not visualization. 
It's just some D boardings of numbers and percentages and things like that. It's no graphic >>And it doesn't make sense to run that in snowflake, in GCP, you could just move it into AWS or, or >>No, we, what we'll be able to do now is all that data before was in GCP and all that Python code was running in GCP. We've moved all that data outta GCP, and now it's in snowflake and now we're gonna work on taking those Python scripts that we thought we were gonna have to rewrite differently. Right. Because Python, wasn't available now that Python's available, we have an easier way of getting those dashboards back out to our people. >>Okay. But you're taking it outta GCP, putting it to snowflake where anywhere, >>Well, the, so we'll build the, we'll build those, those, those dashboards. And they'll actually be, they'll be displayed through Tableau, which is our enterprise >>Tool for that. Yeah. Sure. Okay. And then when you operationalize it it'll go. >>But the idea is it's an easier pathway for us to migrate our code, our existing code it's in Python, down into snowflake, have it run against snowflake. Right. And because all the data's there >>Because it's not a, not a going out and coming back in, it's all integrated. >>We want, we, we want our people working on the data in snowflake. We want, that's our data platform. That's where we want our analytics done. Right. We don't want, we don't want, 'em done in other places. We when get all that data down and we've, we've over our data cloud journey, we've worked really hard to move all of that data. We use out of existing systems on prem, and now we're attacking our, the data that's in GCP and making sure it's down. And it's not a lot of data. And we, we fixed it with one data. Pipeline exposes all that data down on, down in snowflake now. And we're just migrating our code down to work against the snowflake platform, which is what we want. >>Why are you excited about hybrid tables? What's what, what, what's the >>Potential hybrid tables I'm excited about? Because we, so some of the data science that we do inside of snowflake produces a set of results and there recommendations, well, we have to get those recommendations back to our people back into our, our talent management system. And there's just some delays. There's about an hour delay of delivering that data back to that team. Well, with hybrid tables, I can just write it to the hybrid table. And that hybrid table can be directly accessed from our talent management system, be for the recruiters and for the hiring managers, to be able to see those recommendations and near real time. And that that's the value. >>Yep. We learned that access to real time. Data it in recent years is no longer a nice to have. It's like a huge competitive differentiator for every industry, including yours guys. Thank you for joining David me on the program, talking about snow park for Python. What that announcement means, how Allegis is leveraging the technology. We look forward to hearing what comes when it's GA >>Yeah. We're looking forward to, to it. Nice >>Guys. Great. All right guys. Thank you for our guests and Dave ante. I'm Lisa Martin. You're watching the cubes coverage of snowflake summit 22 stick around. We'll be right back with our next guest.

Published Date : Jun 15 2022


Sean Scott, PagerDuty | PagerDuty Summit 2022


 

>> Welcome back to theCube's coverage of PagerDuty Summit 22. Lisa Martin with you here on the ground. I've got one of our alumni back with me. Sean Scott joins me, the Chief Product Officer at PagerDuty. It's great to have you here in person. >> Super great to be here in person. >> Isn't it nice? >> Quite a change, quite a change. >> It is a change. We were talking before we went live about it. That's that readjustment to actually being with another human, but it's a good readjustment to have >> Awesome readjustment. I've been traveling more and more in the past few weeks and just speaking the offices, seeing the people the energy we get is the smiles, it's amazing. So it's so much better than just sitting at your home and. >> Oh, I couldn't agree more. For me it's the energy and the CEO of DocuSign talked about that with Jennifer during her fireside chat this morning, but yes, finally, someone like me who doesn't like working from home but as one of the things that you talked about in your keynote this morning was the ways traditionally that we've been working are no longer working. Talk to me about the future of work. What does it look like from PagerDuty's lens? >> Sure. So there's a few things. If we just take a step back and think about, what your day looks like from all the different slacks, chats, emails, you have your dashboards, you have more slacks coming in, you have more emails coming in, more chat and so just when you start the day off, you think you know what you're doing and then it kind of blows up out of the gate and so what we're all about is really trying to revolutionize operations so how do you help make sense of all the chaos that's happening and how do you make it simpler so you can get back to doing the more meaningful work and leave the tedium to the machines and just automate. >> That would be critical. One of the things that such an interesting dynamic two years that we've had obviously here we are in San Francisco with virtual event this year but there's so many problems out there that customer landscape's dealing with the great resignation. The data deluge, there's just data coming in everywhere and we have this expectation when we're on the consumer side, that we're going to be that a business will know us and have enough context to make us that the next best offer that actually makes sense but now what we're seeing is like the great resignation and the data overload is really creating for many organizations, this operational complexity that's now a problem really amorphously across the organization. It's no longer something that the back office has to deal with or just the front office, it's really across. >> Yeah, that's right. So you think about just the customer's experience, their expectations are higher than ever. I think there's been a lot of great consumer products that have taught the world, what good looks like, and I came from a consumer background and we measured the customer experience in milliseconds and so customers talking about minutes or hours of outages, customers are thinking in milliseconds so that's the disconnect and so, you have to be focused at that level and have everybody in your organization focused, thinking about milliseconds of customer experience, not seconds, minutes, hours, if that's where you're at, then you're losing customers. And then you think about, you mentioned the great resignation. Well, what does that mean for a given team or organization? That means lost institutional knowledge. 
So if you have the experts and they leave, now who are the experts? And do you have the processes and the tools and the runbooks to make sure that nothing falls on the ground? Probably not. Most of the people we talk to are trying to figure it out as they go, and they're getting better, but there's a lot of institutional knowledge that goes out the door when people leave. So part of our solution is also around our runbook automation and our process automation, and some of our announcements today really help address that problem: keep the business running, keep the operations running, keep everything moving and the customers happy, and keep your business going where it needs to go. >> That customer experience is critical for organizations in every industry these days because, to your point, we'll tolerate milliseconds, but that's about it. You did this great keynote this morning that I had a chance to watch, where you talked about how PagerDuty is revolutionizing operations, and I want you to break that down for this audience who may not have heard it. What are those four tenets of revolutionizing operations that PagerDuty is delivering to organizations? >> Sure, so it starts with the data. You mentioned the data deluge that's happening to everybody, right? We integrate with over 650 systems to bring all that data in, so if you have an API or webhook, you can integrate with PagerDuty and push that data into PagerDuty. That's where it starts, all these integrations, and it's everything from a developer perspective, your CI/CD pipelines, your code repositories; from IT, those systems are instrumented as well; even marketing tech stacks we can instrument and pull data in from. The next step is, now that we have all this data, how do we make sense of it? We have machine learning algorithms that really help you focus your attention and point you to the really relevant work, and part of that is noise suppression. Our algorithms can suppress noise; about 98% of the noise can just be eliminated, and that helps you really focus where you need to spend your time, because if you think about human time and attention, it's pretty expensive, it's probably one of your company's most precious resources, and you want the humans doing the really meaningful work. The next step is automation: we want the humans doing the special work, so what's the tedium? What's the toil that we can get rid of and push to the machines? Machines are really good at doing easy, repetitive tasks, and there are a lot of them that we do day in, day out.
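To make that ingestion step a little more concrete, here is a minimal sketch of pushing a single signal into PagerDuty over its public Events API v2. The routing key, summary, and source values are placeholders, and a real deployment would typically use one of the packaged integrations Scott mentions rather than hand-written calls like this.

```python
import requests

# Minimal sketch: send one event into PagerDuty's Events API v2.
# The routing key is a placeholder for a service's integration key.
EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def send_event(routing_key: str, summary: str, source: str, severity: str = "warning"):
    payload = {
        "routing_key": routing_key,      # placeholder integration key
        "event_action": "trigger",       # open (or dedupe into) an alert
        "payload": {
            "summary": summary,          # human-readable description of the signal
            "source": source,            # host or service emitting the signal
            "severity": severity,        # critical | error | warning | info
        },
    }
    resp = requests.post(EVENTS_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Example: a CI/CD pipeline or monitoring check pushing a warning.
# send_event("YOUR_INTEGRATION_KEY", "p95 latency above threshold", "checkout-api")
```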
The next step is orchestrating the work and getting everybody in the organization on the same page, and that's why this morning I talked about our customer service operations product. Customer service is on the front lines, and they're often getting signals from actual customers that nobody else in the organization may even be aware of yet. I was running a system before where all our metrics were good, and you get customer feedback saying, "This isn't working for me," and you go look at the metrics and your dashboards and all looks good, and then you go back and talk to the customer some more and they're like, "No, it's still not working," and you go back to your data, back to your dashboards, back to your metrics, and sure enough, we had an instrumentation issue, but the customer was giving us that feedback. So customer service is really on the front lines; they're often the unsung heroes for your customers, but they're actually really helping make sure the right signals are coming to the dev team, to the owners that own it. Even in the case where you think you have everything instrumented, you may be missing something, and that's where they can really help. Our customer service operations product really helps bring everybody onto the same page, and then as the development teams, the IT teams, and the SREs push information back to customer service, they're equipped and empowered to go tell the customer, "Okay, we know about the issue. Thank you. We should have it up in the next 30 minutes," or whatever it is, five minutes, hopefully faster rather than slower. They can inform the customer and help that customer experience, as opposed to the customer saying, "Oh, I'm just going to go shop somewhere else," or "I'm going to go buy somewhere else or do something else." And the last part is really around how we enable our customers with best practices. Across those million users and the 21,000 companies and organizations we're working with, we've learned a lot about what good looks like, and we've embedded that back into our product in terms of our service standards, which really help SREs and developers set quality standards for how services should be implemented at their company. Then they can monitor and track, across all their teams, the quality of their services, compare teams across their organization, and really raise the quality of the overall system. >> So for businesses, and like I mentioned, DocuSign was on this morning, I know some great brand customers that you guys have; I've seen Peloton and Slack on the website, a couple that popped out to me. When you're able to work with a customer to help them revolutionize operations, what are some of the business impacts? Some of the things that jump out to me would be reduction in churn, retention rate, things that are really impactful to the revenue of a business. >> Absolutely. And there are a couple of different parts to it. One is the work PagerDuty is known for: orchestrating the response to a service outage or a website outage. That's actually easy to measure, because you can measure the revenue that's coming in, or the missed revenue, and how much we've shortened that. That's our history and our legacy, but now we've moved into a lot of the cost side as well.
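The revenue math behind that point is easy to sketch. The figures below are invented purely for illustration; the point is simply that shortening time to resolution converts directly into protected revenue.

```python
# Illustrative numbers only: plug in your own revenue rate and MTTR figures.
revenue_per_minute = 5_000      # e.g. an e-commerce checkout flow, in $/min
incidents_per_year = 40
mttr_before_min = 60            # mean time to resolve before automation
mttr_after_min = 45             # e.g. 15 minutes saved per incident

minutes_saved = (mttr_before_min - mttr_after_min) * incidents_per_year
recovered_revenue = minutes_saved * revenue_per_minute
print(f"{minutes_saved} outage-minutes avoided, roughly ${recovered_revenue:,.0f} protected per year")
# -> 600 outage-minutes avoided, roughly $3,000,000 protected per year
```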
So, helping customers really understand, from an outage perspective, where to focus their time as opposed to just orchestrating the work. Now we can say, we have a new feature we launched last year called Probable Origin: when you have an outage, we can narrow in on where we think the problem is and give you a few clues, for example, this looks anomalous, so let's start here. So that's still focused on the top line. Then, from an automation perspective, there's lots and lots of toil and noise that people are dealing with on a day-in, day-out basis; some of it is easy work, some of it is harder work. One of the ones I really like is our automated diagnostics. If you have an incident, one of the first things you have to do is go gather telemetry on what's actually happening on the servers: does the CPU look good? Does the memory look good? Does the disk look good? Does the network look good? That's all perfect work for automation. So we can run our automated diagnostics and have all that data pumped directly into the incident, so when the responder engages, it's all right there waiting for them, and they don't have to do all that basic work of gathering data and cutting and pasting it into the incident, or, if you're using one of those old ticketing systems, cutting and pasting it into a ticketing system. It's all right there waiting for you, and that's on average 15 minutes of time saved during an outage. The nice thing is that it can all be kicked off at time zero: from our event orchestration product, you can call directly into automation actions right when that event first comes in. So you think about a warning for a CPU, and instantly it kicks off the diagnostics, and within seconds or minutes it's in the incident waiting for you to take action. >> One of the things you also shared this morning that I loved was one of the stats around your customer SailPoint: they had 60 different alerts coming in, and PagerDuty was able to reduce that to one alert. A 60x reduction in alerts, getting rid of a lot of noise and allowing them to focus on the key, high-impact escalations that are going to make the biggest difference to their customers and to their business. >> That's right. You think about having a high-severity incident, like the database failure they actually had, and when you're in the heat of the moment and you start getting these alerts, you're trying to figure out: is that one incident? Is it 10 incidents? Is it a hundred incidents that I'm having to deal with? You probably have a good feeling, I know it's probably this thing, but you're not quite sure. With our machine learning, we're able to eliminate a lot of the noise, and in this case that meant going from 60 alerts down to one, just to let you know, this is the actual incident, but then also to focus your attention on what we think may be the cause. And you think about all the different teams that historically have had to be pulled in for a large-scale incident; we can quickly narrow in on the root cause and get just the right people involved, so we don't have those conference bridges of a hundred people that you hear about. When these large outages happen, everyone's on a call across the entire company, and it's not just the dev teams and IT teams: you have PR, you have legal, everybody's involved in these.
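PagerDuty's own grouping is driven by machine learning rather than fixed rules, but the basic idea behind collapsing a burst like SailPoint's 60 alerts into a single incident can be pictured with a deliberately naive sketch: group alerts that share a service and arrive within a short window of each other.

```python
from collections import defaultdict
from datetime import timedelta

# Toy illustration only: real alert grouping uses learned models, not a
# fixed rule. Each alert is a (service, timestamp, message) tuple.
WINDOW = timedelta(minutes=5)

def group_alerts(alerts):
    """Collapse alerts for the same service that arrive within WINDOW of
    the group's first alert into one candidate incident."""
    alerts = sorted(alerts, key=lambda a: a[1])
    groups = defaultdict(list)
    open_group_start = {}
    for service, ts, msg in alerts:
        start = open_group_start.get(service)
        if start is None or ts - start > WINDOW:
            open_group_start[service] = ts
            start = ts
        groups[(service, start)].append(msg)
    return groups

# 60 database alerts arriving in a two-minute burst would land in one group
# here, i.e. one incident for the responder instead of 60 separate pages.
```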
And so the more that we can orchestrate that work and get smarter about using machine learning and some of these other technologies, the more efficient it is for our customers and ultimately the better it is for their customers. >> Right, and hopefully PR, HR, and legal don't have to be some of those incident response leaders that right now we're seeing across organizations. >> Exactly. Exactly. >> So when you're talking with customers, and with some of the things that you announced, you mentioned automated actions and incident workflows, what are you hearing from the voice of the customer as the chief product officer, and what influence did that have on this year's vision for PagerDuty Summit? >> Sure. We listen to our customers all the time. It's one of our leadership principles, really trying to hear their feedback, and it was interesting: I got sent some of the chat threads from during the keynote afterwards, and there's a lot of excitement about the products we announced. The first one is incident workflows. It's a no-code workflow capability based on a recent acquisition of a company called Catalytic, and you can think of it as the next generation of our response plays: you can go in and build a workflow using no-code tooling to say, when this incident or this type of incident happens, here's what that process looks like. And back to your original comment around the great resignation and that lost institutional knowledge, well now you're building all of that into your processes through your incident response. So with incident workflows, if you want to create an incident-specific Slack channel or an incident-specific Zoom bridge, or even just send status updates, all of that is right there for you, and you can use our out-of-the-box orchestrations or define your own. Because, back to our customer list, we have some of the biggest companies in the world as customers, and we have a very opinionated product, so if you're new to the whole DevOps and full-service-ownership model, we help you through that. But a lot of our companies are evolving along that continuum, the operational maturity model continuum, and at the other end we have customers that say, "This is great, but we want to extend it. We want to call this person, or send this, or update this system over here." That's where incident workflows is really powerful: it lets our customers tailor it to their processes and really extend it. >> And that's GA later this year? >> Later this year, yes. We'll start rolling it out over the next few months and then GA later this year. >> Got it. Last question, as we're almost out of time here: as you talk to customers day in and day out, and having seen the chats from this morning's live keynote, the excitement, the trust that PagerDuty is building with its customers, its partners, et cetera, what excites you about the future? >> It's really why I came to PagerDuty. I've been here about a year and a half now, but revolutionizing operations, that's a big statement, and I think we need it. I think Jennifer said in her keynote today that work is broken, and I think our data shows it: we surveyed our customers earlier this year, and 42% of respondents were working more hours in 2021 compared to 2020.
And I don't think anyone goes home saying, "If only I could work more hours," or "If only I could do more of this tedium, more toil, life would be so good." We don't hear that. We don't hear that a lot. What we hear is that there's a lot of noise, and that we have the massive attrition that every company has. That's the type of feedback we get, and that's what gets me excited about the tools we're building, especially seeing the chat even this morning about some of the announcements. It shows we've been listening, and it shows the excitement in our customers when, instead of "I'm going to use this tool and that tool," they say, "I can just use PagerDuty," which is awesome. >> The momentum is clear and it's palpable, and I love being a part of that. Thank you so much, Sean, for joining me on theCube this afternoon, talking about what's new, what's exciting, and how you guys are fixing work that's broken; you validated my sense that work was broken, so thank you. >> Happy to be here, and thanks for having me. >> My pleasure. For Sean Scott, I'm Lisa Martin, and you're watching theCube's coverage of PagerDuty Summit 22 on the ground from San Francisco. (soft music)
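As a closing aside on the automated diagnostics described earlier: the collection step itself is straightforward to picture. The sketch below assumes the cross-platform psutil library and simply gathers the basic host telemetry a responder would otherwise collect by hand; how the result gets attached to the incident (for example via PagerDuty's REST API or an automation runner) is omitted and would follow the vendor's documentation.

```python
import json
import psutil  # assumed dependency for host telemetry

def collect_basic_diagnostics() -> dict:
    """Gather the first-pass telemetry a responder would otherwise collect
    by hand: CPU, memory, disk, and network counters."""
    disk = psutil.disk_usage("/")
    mem = psutil.virtual_memory()
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": mem.percent,
        "disk_percent": disk.percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    # In a real setup this output would be attached to the incident
    # automatically at "time zero", before a responder ever logs in.
    print(json.dumps(collect_basic_diagnostics(), indent=2))
```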

Published Date : Jun 8 2022


Chad Dunn, Dell Technologies & Akanksha Mehrotra, Dell Technologies | Dell Technologies World 2022


 

>> "theCube" presents Dell Technologies World, brought to you by Dell. >> Hey everyone, Welcome back to "theCube's" continuing coverage of Dell Technologies World 2022. Live from the show floor in Las Vegas. We have been here since Monday evening. About seven to 8,000 folks here. It's been a fantastically well-attended event that Dell has done. Lots of talk about announcements, including APEX. Lisa Martin with Dave Vellante are going to unpack more of APEX with our next two Cube alumni who are returning, Akanksha Mehrotra, VP of APEX product marketing joins us, and Chad Dunn, VP of product management APEX. Guys, welcome back. >> Thank you. >> Thank you. >> Thank you for having us. >> It is really great to be back. >> So just in case there's anybody out there that's been under a rock since Monday, APEX has now been what GA for a year, celebrating a momentous year and some big news. Akanksha, walk us through that and then talk about some of the feedback that you've gotten on what you guys announced just two days ago. >> Yeah. So it's been an exciting week like you said. APEX just for sort of background is our portfolio of as-a-service solutions, we introduced it a year ago. We have now 10 plus services in our portfolio. We added our very first full stack managed service for cyber recovery this week. The feedback from customers over the past year and then the conversations we've had, you know, over the course of this week has been phenomenal. If I had to really summarize it, I would say the pain point that we're looking to solve, helping organizations manage data across disparate and fragmented environments across a variety of clouds, you know, on-premises, in a co-lo, on the far edge, at a hyperscale or in the telco edge is resonating. This is a pain point... This is very real pain point for them. And our goal in our vision to create a consistent and a secure experience across all of these different, you know, silos of data, if you will. It's something that they really want more of from us. >> Chad, talk a little bit about the influence of the customer in the last couple of years. Well, in the last year, in terms of releasing the cyber recovery solution on APEX, we have seen the threat landscape massively change. >> It increases every day. >> It increases every day, ransomware is no longer a... Is it going to happen too? It's a matter of when? >> Yes. >> Talk to us about the influence of the customer of this being the first full stack solution on APEX. >> Sure, like I don't think there's a boardroom in the world where this isn't being discussed as just such a high risk environment for cyber techs. It's damaging to lose your data. It's damaging to your reputation, it's financially damaging. So it's incredibly important into our customers. And we're finding that, you know, many of them don't necessarily have all the expertise to be able to defend against it themselves. And so that's where an as-a-service solution, like the one that we're offering really makes sense to them, right? They're much more apt to consume as-a-service when the competency doesn't necessarily already exist in their IT organizations. So we've been doing this for a few years as a solution with managed services. And in fact, we've deployed over 2000 of these, and making that a standardized offering with T-shirt sizing, subscription basis, really seems to be a winner. And every customer I've talked to has been absolutely over the moon with it. 
>> All right, so we have Chad in product management, Akanksha, you're in product marketing. So you knew going into this, that it was going to be different. So I'm interested in kind of what your learnings were, that internal transformation, which is ongoing now, I understand that, but how did it change how you manage, you know, deploy the life cycle of the product and communicate that. >> I'll get us started and I'm sure Chad will add on. So, you know, to your point, when we started this journey internally before we started it externally, we knew this was going to be a multi-year transformation for us. And a multi-year transformation that affects every part of the company, how we build products, how we market products, how we bring them to market, how we sell them, et cetera. And so we made a very conscious effort to kind of secure that buy-in early on. And it starts Michael on down. This is a strategic priority for him as I'm sure both of you know. And each function has kind of established, you know, areas where they know they need to transform and a north star goal for where they want to get to. So I'll speak for marketing as a place that's, you know, close to my heart. One, we know as we get into this space, we're going to be talking to different types of folks and having conversations with different types of personas within an account than we have had before. Using cyber recovery solution as an example, yes, we want to talk to, you know, IT administrators and CIO who we've been talking to. But as Chad said, this is something that CISOs care about. This is something that security teams care about. That those are a different set of personas for us to market to, to communicate with, whose pain points we need to understand better. So that's an example of a change. Another one is moving from a... I mean, events like this are great, and we certainly love to be back in person, but in as-a-service model, you want to have much more frequent communication with your intended audiences. So we've moved to more of an always on-marketing motion leveraging our blog, leveraging other vehicles. And that's that has also been a transformation for us. >> On the marketing side, I'm curious, sorry, Dave. Chad, you brought up one of the big things that is a huge challenge for any organization and any industry with respect to the cybersecurity in that threat landscape is brand reputation. >> Yeah. >> Are you having more conversations at the CMO level? I'm just curious if they're involved in this. We got to make sure that we don't have... We're not the next one on the news because customers will churn like crazy. Is that at all part of the conversation than persona change? >> It is certainly part of it. But, you know, we don't want to be motivated by fear, right? We want to be motivated by preparation and securing the business and growing the business. So, you know, it is a sea level discussion to, you know, understand how we need to protect our critical data. But it's really from a lens of, you know, how do we grow and we grow more quickly? And you know, if you look at APEX overall, yes, we've made a lot of internal changes to get where we are and we're going to continue to make those. And I'll talk through some examples. But this is also a journey for our customers, right? 
The change to, you know, consuming by the drip, consuming APEX, consuming as-a-service, you could take two companies with identical size and an identical vertical, and they're going to have different priorities about how they want to consume this infrastructure and these services. So we're on that journey with them just as we have to transform ourselves internally, from the way that we do accounting, from the way that we do sales compensation, from the way that we actually build product. And in fact, we just changed up the model by which we're, you know, developing product in APEX today. So I'm about 90 days into my role in APEX. I came from the HCI business. And I'm here with my engineering leader who was also in the HCI business. So we were able to be fortunate enough to work in an organization that went from zero to 4 billion in pretty quickly. So, Hey, let's see if we can apply some of that learning to this. But it's an incredible partnership inside of Dell with people like Dell Digital and our transformation office. Because we've done things roughly the same way for about 30 years. And this is all very new to us. So it's pretty amazing journey. >> I'm interested in what's different. You weren't first to market. The public cloud guys might say, "Eh, it's not cloud." >> No. >> Okay, so how are you different than public cloud and how are you different from your traditional on-prem competitors? >> Again, I'll get us started and chime in. I would say... I'll take your first example. I want to go back to kind of what our customers... Where they want help from us and what are they're asking us for. As I said, the debate is over. They have told us pretty definitively, and our data and your data shows it, that they will and the data will continue to grow in all these different fragmented silos. What they want is an experience that orchestrated across all of these different environments, by a vendor that they trust, right? And that's what we are committed to delivering to them. That's our north star, that's where we're going. I would argue that any one of the hyperscalers don't have incentives to kind of make that same experience happen across all those different environments. A vendor like Dell, who has been trusted by many years... You know, for many years from our customer, who doesn't have a single dog in the race, but is looking to partner with folks across the entire ecosystem, is looking to innovate with our software, our services, and our infrastructure is best positioned to help them orchestrate across. >> Yeah. Well, you know, if you're wondering what's different, you really have to look at what the value proposition is for public clouds versus keeping data on-prem or keeping it in a place where it's accessible to multiple clouds. You know, I think if you haven't been under a rock here at the show, you know it is all about multi-cloud, and you know that we're, you know, absolutely embracing it from, you know, Project Alpine where we're putting storage endpoints in public cloud, to what we're doing with APEX and our data storage services and the move of our customers into co-locations where the data can be accessible to multiple clouds. I think that getting the commerce capabilities in place that we've done over the course of the last year is a great first step. But look for us to double down on the day two management and operations, using that platform that we've created for APEX. 
And that's going to allow us to create more velocity and bring more solutions into the fold more quickly, and then provide more day-two management, optimization, and operation of the solutions by our customers. >> Okay, sorry. So definitely agreed on the public cloud; I've got to trust them to do my multi-cloud, or what I call super-cloud. What about your traditional competitors? Is it the normal sort of thing we'd expect, the Dell differentiators of portfolio, supply chain, et cetera, or are there APEX-specific differentiators? >> Yeah, absolutely. There are absolutely the Dell differentiators: the breadth of our sales force, through both our direct sales teams as well as our partners, our secure supply chain, our services team, and the expertise that they've built, which we're obviously bringing to bear in this market and with this offer. Those are the Dell-wide advantages we bring to bear with this. But specifically for APEX against the traditional on-prem competitors, I would say the simplicity with which we are bringing our offers to market is a differentiator for us, and it's one that our customers over the past year have repeated back to us. So, the commerce experience that Chad was just talking about: we have made very conscious efforts to simplify and abstract away that complexity from our customers, so that they are picking very easy-to-understand outcomes that they care about, and not really worrying about the piece parts, whether it's the hardware, the software, or the services that help make that service-level outcome happen. I would argue that some of the traditional competitors we have haven't done that; that complexity is still there. And what we hear from our customers is, I want the simplicity and agility that public cloud provides. That's something the hyperscalers did get right, and we're bringing that experience to our infrastructure. >> Yeah. I think the other way we'll differentiate ourselves is going to be by the breadth of the solutions, right? We've got a tremendous amount of IP in solutions like cyber recovery. This wasn't a new thing for us; this is something we've been doing for a few years, and there are tremendous consulting capabilities, services capabilities, and the underlying products, of course. And there's a pipeline of solutions lined up behind that. So as we move into high performance computing as-a-service and MLOps as-a-service, we can draw on those solutions that we've offered in a very custom way in the past, now delivered in a high-velocity manner in the console. >> Well, high velocity these days is critical. As we've seen the last two years, things have changed so dramatically for customers in every industry that needed to pivot with speed and accelerate their transformation. >> And the transparency, right? Going back to his example, having that price transparency: you can go to our website and look at the pricing, pick one of the two or three very simple options, see it right there, and order it through the console, in a matter of minutes, versus, you know, waiting two weeks to get the quote, then waiting a month to get the hardware, and then waiting for the services team to show up. So what we are hearing... I mean, we have truly been able to take deployments that used to take several months down to a matter of days.
And so that's how the simplicity pays off, not only in that initial deployment, but over the course of the subscription, in the day-two operations that Chad was just talking about, and in the innovation and the work we're doing to simplify their lives in that process and allow them to focus on other areas. >> Oh, absolutely. That time to value, time to market, has never been more critical. And the ability, to your point, Akanksha, to allow folks to focus more on the strategic initiatives that will actually add value, move the business forward, and allow it to be competitive and differentiate itself is critical for everybody in every industry. Chad, I wanted to pivot to multi-cloud for a second. You talked about that, and we had Chuck Whitten on yesterday. He was talking about multi-cloud: a lot of organizations, many, many of them, are in multi-cloud by default, but what Dell wants to do is change that to multi-cloud by design. Is APEX going to be a facilitator of multi-cloud by design? Talk to us about that for customers. >> We absolutely will be. So if you look at what made customers multi-cloud by default, it's them going for the services that exist in the cloud and looking for best-of-breed services. Whether it's machine learning, speech recognition, or database, they're going to those best-of-breed players. And so the value proposition for us is: since you're in those clouds, you want access to your data, and you want it centrally, so you can see it, leverage it, and use it from any of those clouds; but you may have other reasons for keeping your data or even your compute on-prem or in a co-location. It could be data sovereignty, it could be policy compliance, it could be data gravity. So we want to make the concept of having your workloads or your data anywhere very seamless for our customers, right? It's really embracing the concept of multi-cloud and making it easier. >> The cyber recovery solution is really interesting to me. I was talking to one of the partners here and they said, "Dave, this was a really good show for us," and they probably had a quasi-competitive solution, I don't really know, but like a lot of customers they got a lot of leads out of it. So it's the hot topic, and that's what they said: this is cyber, everybody wants cyber. So how did that solution come together? Because I know you've always been security conscious, but I never really fully cracked the security solution, and now here it is in APEX, and it's like, boom, out of the box, or out of the service. How'd that come about? >> It really started back in 2014, specifically. It's funny when you can point to an event where something started like this. There was a fairly high-profile ransomware attack in 2014, and that caused us to look at the assets we had in our data protection portfolio, from a software and storage perspective, and say, "Hey, we can put something together that can really address this, right? Through novel use of existing technology." So we built out reference architectures. We built out the consulting service on how you protect your data. We partnered and built software to be able to secure the data in an air-gapped, immutable vault, and offered the services to be able to manage that, monitor it, and restore the data when needed. We did that in a very custom way for years; in fact, as I said, over 2,000 systems have been deployed this way.
So having a vehicle like APEX that has the as-a-service capability built in, the subscription capability built in, the ease and velocity of purchasing and operating was really a natural fit. So you know, we expect this is going to be a very high volume solution for us. >> Great. Awesome. >> Akanksha, can you talk a little bit about the partner ecosystem involved here in APEX? You know, when I think about ransomware in data protection, I think organizations need to be able to protect apps, users, data platforms. But we think of how data is so spread out, customers want that single pane of glass to be able to manage all that and know that that data is protected. Talk to us about how you're working with partners. I know the partner ecosystem at Dell's huge. How are you working with partners and how can they build upon APEX? >> Yeah So our partners are a very important part of our ecosystem. They help expand our reach. They also help complement our capabilities. You know, for example, in specific verticals. They may have services or expertise in a particular area. For the APEX portfolio, we actually offer a wide variety of ways for partners to engage with us. Starting out, they could refer our solutions and refer, you know, some of our services, if they want to take more of an advisory role in some capacity. They could resell our services with additional services included. In this scenario, for example, they would leverage our console, include some of their services in there and then offer it to their end customers. They could host APEX offers in their own data center or in a co-lo data center and build their practice on top of it. A lot of our partners and customers, we've got kind of joint customer partners that for example, have built a healthcare practice on top of an APEX solution, where they've added their services or built their business on top of it. And then finally, there's of course, technology and ISV partners, right? And that is where we might leverage, you know, some of their technology, built it to be part of a service or a solution that we're doing and join the go-to market. So I think the answer is lots of ways for partners to engage with APEX. And we absolutely are engaging with them in a wide variety of ways. And I think cyber recovery is no different. >> Well, there must be not a dull moment with what you guys have going on with APEX. Thank you for taking some time to talk to us about that. Sounds like the momentous year that you've had is going to continue. And it sounds like you've gotten great feedback from the customers and the partner so far. Thank you for joining "theCube" and telling us what's going on. And we can't wait to hear more next year. I'm sure there will be lots more next year. >> Yes indeed. >> Absolutely. Thank you very much. >> For our guests and Dave Vellante, I'm Lisa Martin, and you're watching "theCube's" coverage of Dell Technologies World 2022, live from Las Vegas. Stick around, we'll be right back with our next guest. (upbeat music)

Published Date : May 5 2022



Gil Shneorson, Dell | Dell Technologies World 2022


 

>> theCube presents Dell Technologies World, brought to you by Dell. >> Welcome to Las Vegas. Lisa Martin with Dave Vellante. theCube is live at Dell Technologies World 2022. Dave, I love to say live, live. <laugh> We are live. We are in person. We are 3D. We are also here on the first day of our coverage with an eight-time, right? Eight-time Cube alum Gil Shneorson joins us, the senior vice president of edge portfolio solutions at Dell Technologies. Welcome back, our friend. >> Thank you. It's great to be here in this forum with live people, you know, and in 3D. >> Isn't it? It's amazing. We're not via a screen. This is actually real. So Gil, a lot of buzz, great attendance at this first event since 2019. A lot's been going on since then, and we're talking a lot about edge. It's not new, but there's a lot changing. What's going on there? >> Well, you know, edge has been around for a while. Actually, since the beginning of time, people were doing compute and running applications in the physical space where the data is. But more and more, data is based on sensors, on cameras, on machine vision, and if you want to make real-time decisions, there are a few reasons why you can't just send everything back to a data center or a cloud. Maybe you don't have the right latency, maybe it's too costly, maybe you don't have the right bandwidth, maybe you have security challenges, maybe you have compliance challenges. So the world is moving more and more resources toward where the data is created in order to make real-time decisions and to generate new business value, and things are changing; they're becoming much more involved than before. So basically that's what's changing: we need to deal with distributed architectures much more than we needed to before. >> I think one of the things we've learned in the last very dynamic two years is that access to real-time data is no longer a nice-to-have; it's table stakes, whether we're talking about retail, healthcare, et cetera. Real-time data access is critical for everybody these days. >> Right. And it could be a real-time decision, or it could even be data collection; either way, you need to place some device, some compute, next to the source. And then, you know, you have a lot of them, and you multiply that by multiple use cases, and you basically have a very complex problem to solve. If you ask me what's new, it's that this complexity is becoming more and more critical to solve. >> Critical. >> Oh, go ahead, please. >> I was just going to say, from a complexity-resolution perspective, talk to me about some of the things Dell is doing to help organizations as they spread out to the edge to meet that consumer demand, while reducing that complexity from an infrastructure standpoint. >> So we focus on simplifying. I think that's what people need right now. There are two things we do. We optimize our products, whether they need ruggedization, different temperature envelopes, or remote management capability, and we create solutions. We develop solutions that look at specific outcomes, we size them, and we create deployment guides. We do everything we can to simplify the edge use cases for our customers. >> You know, you guys have been talking about how it's not new, and I know you do a lot in retail.
I think of the NCR cash register as the original edge, you know, but there are other use cases. Gil, you and I have talked about AI inferencing in real time. There was a question today in the analyst forum, I think it went to Jeff, or nobody wanted to take it, no, maybe it was Michael, about the metaverse. But there's the edge space, there's the industrial IoT edge. So, I mean, the TAM is enormous. How do you think about the use cases? Are there ones that aren't necessarily horizontal for you, that you don't go after, like EVs and cars? Or how are you thinking about it? >> It depends. I agree that the edge business is very verticalized. At the same time, there are themes that emerge across every industry. So we're trying to solve things horizontally; being Dell, we need to solve for repeatability and scale, but we do package vertical solutions on top of them, because that's what people need. So for example, you said NCR being the original edge. If I asked you today to name how many applications are running in a retail store to enable your experience, you'd say, well, there's self-checkout, maybe there's fraud detection. >> Let's say a handful. >> It's not a handful. The fact is it's about 30 different applications that are running. So you have digital labels, and you have curbside delivery, and you have inventory management, and you have crowd management, and you have safety and security. And what happens today is that every one of those solutions is purchased separately, deployed separately, connected to the network separately, and secured separately. Hence you see the problem, right? And so what we do is we create a solution. For example, we look at the infrastructure and ask, what can we consolidate onto an infrastructure that can scale over time? And then we look at it in the context of a solution. So the solution we announced last week does just that: on the left side, it looks at a consolidated infrastructure based on VxRail and the VMware stack, so you can run multiple applications; on the right side, we're working with a company called Deep North for in-store analytics. And people at the show can actually go and see this in action in our mock retail store back at the edge booth. But the point is, those elements of siloed applications and the need to consolidate are true for every industry, and that's what we're trying to solve for. >> I was just wondering, you said they're true for every industry; every industry is facing the same challenges. What makes retail so prime for transformation right now? >> That's a great question. So, using my example from before, you're faced with a shopper that buys online and is now coming back to the stores, and they want the same experience. They want the stuff they searched for, and they want it available to them. In fact, our research shows that 80% of people say that if they have a bad experience, they will not come back to a retail store. So you've got all of those use cases you need to support, you've got this savvy shopper coming in, you've got heightened labor costs, you've got a supply chain problem in most of those markets... >> Labor shortages as well. >> It's a perfect storm. And you want to give an experience, right?
So CIOs are looking at this and they go, how do I do all of that? And as I said before, the key problem is management of all of those things; it's why they can't innovate faster. So retail is in this perfect storm where they need to innovate and they want to innovate, and now they're looking for options, and we're here to help them. >> You know, a lot of times when we talk about industrial IoT, we talk about the IT and OT schism. Is there a similar sort of dissonance between IT, your peeps, Dell's traditional market, and what's happening at the near edge, the retail infrastructure, with its somewhat different requirements? How are you thinking about that and managing it? >> About 50% of edge projects today somehow involve IT. Usually every project will involve IT for networking and security, so they have to manage it either way. And today there's a lot of what we used to call shadow IT when we talked about cloud; this happens at the edge as well. Now, this happened for a good reason, because the expertise sits with the OT people; they are the experts on the specific use case. It's true for manufacturing, and it's also true for retail. Our traditional audience is the IT audience, and we will never be able to merge the two worlds unless IT is better able to service the OT buyers. Even at the show, I've had multiple conversations today with people about the divide and how to bring it together. It will come together when IT can deliver a better service to the OT constituents, and that's definitely a job for Dell, right? This is what we do. If we enable our IT buyer to do a better job of servicing the OT crowd, or their business crowd in retail, more innovation will happen across those different dimensions. So I'm happy you asked that, because that's actually part of the mission we're taking on. >> One of the things I think about when you talk about that consumer experience is that we're very demanding as consumers. As you described, we want to have the same experience regardless of where we are, and if that doesn't happen, you mentioned that number, 80% of people surveyed said, if I have a bad experience with a merchant, I'm out, I'm going somewhere else. So where is the rest of the C-suite in the conversation? I can think of a COO, or the chief marketing officer from a brand value, brand reputation perspective. Are you talking with those folks as well to help make that connection a reality? >> I don't know that we're having those conversations with those business owners. We're a systems and infrastructure company, so we get involved once they understand what they want to do. We just look at it and say, if you solve it one way, it's going to be one outcome; maybe there is a better way to look at it, maybe there's an architecture, maybe there's a more thoughtful way to think about the problems before they happen. But the fact that they're all looking shows you that the business owners are very, very concerned with this reality. They're... >> Key stakeholders. >> Can we come back to your announcement? Can we unpack that a little bit for those who might not be familiar with it? What is it called again? Give us a peel of the onion a little bit, Gil. >> Yeah. So we call it a Dell Technologies Validated Design.
It is essentially a reference architecture. We take a use case and we size it, so we save customers the effort of testing and sizing, we document the deployment step by step, and we just make it simpler. And as I said before, we look for consolidation, so we took VxRail, which is our leading HCI product based on VMware technology, with a VMware application management stack with Tanzu, and we look at that as the infrastructure. Then we test it with a company called Deep North, and Deep North does in-store analytics. Through machine vision, they can tell you where people are queuing up, if there is somebody in the store who needs help and nobody's approaching, if there is a water spill and somebody might slip and hurt themselves, if a fridge is open and something may get spoiled.
Um, there, there is a lot more coming if, if I may say it myself, but, but it's, it's a little too early to, uh, to talk about it. >>So for those folks that are here at the show that get to see it and play with it and touch it and feel it, what would you say some of the biggest impacts are that this technology can deliver tomorrow? >>Well, first of all, it's enabling to do what they want. See, we don't have to go and, and tell people, oh, you probably really need to move things through the edge. They know they need to do it. Our job is to tell them how to do it in a secure way, in a simplified way. So that's, that's a nice thing about this, this market it's happening, whether we want it or not. Um, people in this show can go see some things in action. They can see the solution in action. They can see the manufacturing solution in action and even more so. And I forgot to say part of our announcement was a set of solution centers in Limerick island and in Singapore, that was just open. And soon enough in Austin, Texas saw that, and we will have people come in and have the full experience of IOT OT and edge device devices in action. So AR and VR, I T IEN technology and scanning technology. So they could be, um, thinking about the art of the possible, right? Thinking about this immersive experience that will help them invent with us. And so we're expecting a lot of innovation to come out of those conversations for us and for them. >>So doing a lot of testing before deployment and really gleaning that testing >>Before deployment solution architecture, just ideation, if they're not there yet. So, and I've just been to Singapore in one of those, um, they asked me to, um, pretend I was a, um, retail ski enter in a distribution center and I didn't do so well, but I was still impressed with the technology. So, >>Well, eight time Q alumni. Now you have a career to fall back on if you need to. Exactly. >><laugh> >>GA it's been great to have you. Thank you so much for coming back, talking to us about what's new on day one of Dell technologies world 22. Thank >>You for having me again, >>Our pleasure for Dave Volante. I'm Lisa Martin, coming to you live from the Venetian in Las Vegas at Dell technologies world 2022. This is day one of our coverage stick around Dave and I will be right back with our next guest.

Published Date : May 3 2022



Ami Badani, NVIDIA & Mike Capuano, Pluribus Networks


 

(upbeat music) >> Let's kick things off. We're here at Mike Capuano the CMO of Pluribus Networks, and Ami Badani VP of Networking, Marketing, and Developer of Ecosystem at NVIDIA. Great to have you welcome folks. >> Thank you. >> Thanks. >> So let's get into the the problem situation with cloud unified networking. What problems are out there? What challenges do cloud operators have Mike? Let's get into it. >> The challenges that we're looking at are for non hyperscalers that's enterprises, governments Tier 2 service providers, cloud service providers. And the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies in seconds. They need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth cyber attacks. It's not slowing down. It's only getting worse and solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >> With that goal in mind, what's the Pluribus vision how does this tie together? >> So basically what we see is that this demands a new architecture and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there's sort of like discreet bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds have different networks, that needs to be unified. If we want these folks to be able to be agile they need to be able to issue a single command or instantiate a security policy across all of those locations with one command and not have to go to each one. The second is, like I mentioned distributed security. Distributed security without compromise, extended out to the host is absolutely critical. So micro segmentation and distributed firewalls. But it doesn't stop there. They also need pervasive visibility. It's sort of like with security you really can't see you can't protect you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure, that really needs to be built in to this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction. Abstract the complexity of all these discreet networks whatever's down there in the physical layer. I don't want to see it. I want to abstract it. I want to define things in software but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet is SDN automation. >> Mike, we've been talking on theCUBE a lot about this architectural shift and customers are looking at this. This is a big part of everyone who's looking at cloud operations, NextGen. How do we get there? How do customer customers get this vision realized? >> That's a great question. And I appreciate the tee up. We're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision. And that is a vision of where Pluribus is headed with our partners like NVIDIA long term. 
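The four tenets come together in the idea of defining a policy once in software and having the fabric enforce it everywhere. The sketch below is purely illustrative: the class and function names are hypothetical and are not Pluribus's actual API, but it shows the shape of the model, one declarative policy object rendered to every enforcement point, whether that is a switch or a DPU-equipped host.

```python
from dataclasses import dataclass

# Hypothetical model of a unified fabric: one policy, many enforcement points.
# None of these names come from the Pluribus or NVIDIA products; this is a sketch.

@dataclass(frozen=True)
class SegmentationPolicy:
    name: str
    allow: set  # set of (source segment, destination segment, port) tuples

@dataclass(frozen=True)
class EnforcementPoint:
    site: str        # e.g. "private-dc-1", "edge-pop-7", "public-cloud-a"
    kind: str        # "switch" or "dpu"
    hostname: str

def render(policy: SegmentationPolicy, point: EnforcementPoint) -> list:
    """Turn the abstract intent into whatever rule format the device speaks."""
    rules = [f"permit {src} -> {dst} port {port}" for src, dst, port in sorted(policy.allow)]
    rules.append("deny any -> any")          # default-deny everywhere
    return [f"[{point.kind}@{point.site}:{point.hostname}] {r}" for r in rules]

def apply_everywhere(policy: SegmentationPolicy, fabric: list) -> None:
    # The "single command": the operator expresses intent once,
    # and the SDN control plane fans it out to every location.
    for point in fabric:
        for rule in render(policy, point):
            print(rule)   # a real controller would push this via its own southbound API

if __name__ == "__main__":
    web_to_db = SegmentationPolicy("web-to-db", {("web", "db", 5432)})
    fabric = [
        EnforcementPoint("private-dc-1", "switch", "leaf-01"),
        EnforcementPoint("private-dc-1", "dpu", "server-17"),
        EnforcementPoint("edge-pop-7", "dpu", "server-03"),
    ]
    apply_everywhere(web_to_db, fabric)
```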
And that is about deploying a common operating model SDN enabled, SDN automated, hardware accelerated across all clouds. And whether that's underlay and overlay switch or server, any hypervisor infrastructure containers, any workload doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric. And this is the next generation of our adaptive cloud fabric. And what's nice about this is we're not starting from scratch. We have an award-winning adaptive cloud fabric product that is deployed globally. And in particular, we're very proud of the fact that it's deployed in over 100 Tier 1 mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is we're extending from the switch to this NVIDIA BlueField-2 DPU. We know there's. >> Hold that up real quick. That's a good prop. That's the BlueField NVIDIA card. >> It's the NVIDIA BlueField-2 DPU, data processing unit. What we're doing fundamentally is extending our SDN automated fabric, the unified cloud fabric, out to the host. But it does take processing power. So we knew that we didn't want to do we didn't want to implement that running on the CPUs which is what some other companies do. Because it consumes revenue generating CPUs from the application. So a DPU is a perfect way to implement this. And we knew that NVIDIA was the leader with this BlueField-2. And so that is the first, that's the first step into getting, into realizing this vision. >> NVIDIA has always been powering some great workloads of GPUs, now you got DPUs. Networking and NVIDIA as here. What is the relationship with Pluribus? How did that come together? Tell us the story. >> We've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition. And what Pluribus is trying to build and what NVIDIA has. So we have, this concept of a blue field data processing unit, which, if you think about it, conceptually does really three things, offload, accelerate, and isolate. So offload your workloads from your CPU to your data processing unit, infrastructure workloads that is. Accelerate, so there's a bunch of acceleration engines. You can run infrastructure workloads much faster than you would otherwise. And then isolation, So you have this nice security isolation between the data processing unit and your other CPU environment. And so you can run completely isolated workloads directly on the data processing unit. So we introduced this, a couple years ago. And with Pluribus we've been talking to the Pluribus team for quite some months now. And I think really the combination of what Pluribus is trying to build, and what they've developed around this unified cloud fabric fits really nicely with the DPU and running that on the DPU and extending it really from your physical switch all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment. So every server we believe over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. 
And so what Pluribus is really trying to do is extending the network fabric from the host from the switch to the host and really have that single pane of glass for network operators to be able to configure, provision, manage all of the complexity of the network environment. So that's really how the partnership truly started. And so it started really with extending the network fabric and now we're also working with them on security. If you sort of take that concept of isolation and security isolation, what Pluribus has within their fabric is the concept of micro segmentation. And so now you can take that extend it to the data processing unit and really have isolated micro segmentation workloads whether it's bare metal, cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud, hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on the DPU. >> You know what I love about this conversation is it reminds me of when you have these changing markets. The product gets pulled out of the market and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you how do you guys differentiate what sets this apart for customers? What's in it for the customer? >> So I mentioned three things in terms of the value of what the BlueField brings. There's offloading, accelerating and isolating. And that's sort of the key core tenets of BlueField. So that, if you sort of think about what BlueField what we've done, in terms of the differentiation. We're really a robust platform for innovation. So we introduced BlueField-2 last year. We're introducing BlueField-3 which is our next generation of blue field. It'll have 5X the ARM compute capacity. It will have 400 gig line rate acceleration, 4X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, chips to our portfolio every 18 months to two years. So that's sort of one of the key areas of differentiation. The other is that if you look at NVIDIA, what we're sort of known for is really known for our AI, our artificial intelligence and our artificial intelligence software, as well as our GPU. So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates faster, more efficient secure AI systems from, the core of your data center, all the way out to the edge. And so with NVIDIA we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of that area. And I would say the third area is really around our developer environment. One of the key, one of our key motivations at NVIDIA is really to have our partner ecosystem embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU we've created an SDK, which is an open SDK called DOCA. And it's an open SDK for our partners to really build and develop solutions using BlueField and using all these accelerated libraries that we expose through DOCA. And so part of our differentiation is really building this open ecosystem for our partners to take advantage and build solutions around our technology. >> What's exciting is when I hear you talk it's like you realize that there's no one general purpose network anymore. 
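The offload, accelerate, and isolate argument is easy to make concrete with rough arithmetic. The sketch below is not a benchmark, and every number in it is an assumption chosen for illustration; it simply shows why moving infrastructure services (virtual switching, firewalling, telemetry) from host cores onto a DPU hands revenue-generating CPU capacity back to applications.

```python
# Back-of-the-envelope view of "offload, accelerate, isolate".
# All inputs are illustrative assumptions, not measured NVIDIA or Pluribus figures.

HOST_CORES = 64                      # cores in one server
INFRA_SERVICES = {                   # hypothetical per-service cost when run on the host CPU
    "virtual switching": 6,
    "distributed firewall": 4,
    "telemetry / visibility": 2,
}

def cores_reclaimed_per_server(services: dict) -> int:
    # With a DPU, these services run on the card's ARM cores and accelerators,
    # isolated from the host, so the host cores they used to burn come back.
    return sum(services.values())

def fleet_summary(servers: int) -> None:
    per_server = cores_reclaimed_per_server(INFRA_SERVICES)
    share = per_server / HOST_CORES
    print(f"Cores reclaimed per server: {per_server} of {HOST_CORES} ({share:.0%})")
    print(f"Across {servers} servers:   {per_server * servers} cores returned to applications")

if __name__ == "__main__":
    fleet_summary(servers=500)
```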
Everyone has their own super environment, super cloud or these new capabilities. They can really craft their own I'd say custom environment at scale with easy tools. And it's all kind of that again this is the new architecture Mike, you were talking about. How does customers run this effectively, cost effectively? And how do people migrate? >> I think that is the key question. So we've got this beautiful architecture. Amazon Nitro is a good example of a SmartNIC architecture that has been successfully deployed but, enterprises and Tier 2 service providers and Tier 1 service providers and governments are not Amazon. So they need to migrate there and they need this architecture to be cost of effective. And that's super key. I mean, the reality is DPU are moving fast but they're not going to be deployed everywhere on day one. Some servers will have have DPUs right away. Some servers will have DPUs in a year or two. And then there are devices that may never have DPUs. IOT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU. And by leveraging the NVIDIA BlueField DPU what we really like about it is, it's open and that drives cost efficiencies. And then, with this our architectural approach effectively you get a unified solution across switch and DPU, workload independent. It doesn't matter what hypervisor it is. Integrated visibility, integrated security and that can create tremendous cost efficiencies and really extract a lot of the expense from a capital perspective out of the network as well as from an operational perspective because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service, or to deploy a security policy and is deployed everywhere automatically saving the network operations team and the security operations team time. >> So let me rewind that 'cause that's super important. Got the unified cloud architecture. I'm the customer, it's implemented. What's the value again, take me through the value to me. I have a unified environment. What's the value? >> I mean the value is effectively, there's a few pieces of value. The first piece of value is I'm creating this clean demark. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU there's some conflict between the DevOps team who own the server, and the NetOps team who own the network because they're installing software on the CPU stealing cycles from what should be revenue generating CPUs. So now by terminating the networking on the DPU we create this real clean demark. So the DevOps folks are happy because they don't necessarily have the skills to manage network and they don't necessarily want to spend the time managing networking. They've got their network counterparts who are also happy the NetOps team because they want to control the networking. And now we've got this clean demark where the DevOps folks get the services they need and the NetOps folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. This is essential I mentioned it earlier, pushing out micro segmentation and distributed firewall basically at the application level, where I create these small segments on an application by application basis. So if a bad actor does penetrate the perimeter firewall they're contained once they get inside. 
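That containment point, that an attacker who gets past the perimeter cannot move laterally, comes down to default-deny evaluation per application segment at every compute node. The toy evaluator below uses assumed segment names and is not the Pluribus policy engine; it is only an illustration of why a compromised web workload in one application cannot reach the database of another.

```python
# Toy micro-segmentation check: zero trust means deny unless an explicit rule allows it.

ALLOWED_FLOWS = {
    # (source segment, destination segment, destination port)
    ("shop/web", "shop/db", 5432),
    ("crm/web", "crm/db", 3306),
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

def check(src: str, dst: str, port: int) -> None:
    verdict = "ALLOW" if is_allowed(src, dst, port) else "DENY (default)"
    print(f"{src} -> {dst}:{port:<5} {verdict}")

if __name__ == "__main__":
    check("shop/web", "shop/db", 5432)   # legitimate east-west traffic
    check("shop/web", "crm/db", 3306)    # lateral movement attempt: contained
    check("shop/web", "shop/db", 22)     # even inside one app, only what is declared
```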
'Cause the worst thing is a bad actor penetrates perimeter firewall and can go wherever they want in wreak havoc. And so that's why this is so essential. And the next benefit obviously is this unified networking operating model. Having an operating model across switch and server, underlay and overlay, workload agnostic, making the life of the NetOps teams much easier so they can focus their time on really strategy instead of spending an afternoon deploying a single VLAN for example. >> Awesome, and I think also for my stand point I mean perimeter security is pretty much, that out there, I guess the firewall still out there exists but pretty much they're being breached all the time the perimeter. You have to have this new security model. And I think the other thing that you mentioned the separation between DevOps is cool because the infrastructure is code is about making the developers be agile and build security in from day one. So this policy aspect is huge new control plan. I think you guys have a new architecture that enables the security to be handled more flexible. That seems to be the killer feature here. >> If you look at the data processing unit, I think one of the great things about sort of this new architecture it's really the foundation for zero trust. So like you talked about the perimeter is getting breached. And so now each and every compute node has to be protected. And I think that's sort of what you see with the partnership between Pluribus and NVIDIA is the DPU is really the foundation of zero trust and Pluribus is really building on that vision with allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >> This is super exciting. This is illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I got to ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with NVIDIA? >> We're super excited about the partnership. Obviously we're here together. We think we've got a really good solution for the market so we're jointly marketing it. Obviously we appreciate that NVIDIA's open that's sort of in our DNA, we're about a open networking. They've got other ISVs who are going to run on BlueField-2. We're probably going to run on other DPUs in the future. But right now we feel like we're partnered with the number one provider of DPUs in the world and super excited about making a splash with it. >> Oh man NVIDIA got the hot product. >> So BlueField-2 as I mentioned was GA last year, we're introducing, well we now also have the converged accelerator. So I talked about artificial intelligence our artificial intelligence software with the BlueField DPU, all of that put together on a converged accelerator. The nice thing there is you can either run those workloads, so if you have an artificial intelligence workload and an infrastructure workload, you can work on them separately on the same platform or you can actually use you can actually run artificial intelligence applications on the BlueField itself. So that's what the converged accelerator really brings to the table. So that's available now. Then we have BlueField-3 which will be available late this year. And I talked about sort of, how much better that next generation of BlueField is in comparison to BlueField-2. So we'll see BlueField-3 shipping later on this year. 
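One way to read the converged accelerator is as a placement decision: AI processing lands on the GPU, infrastructure services land on the DPU, and the application keeps the host's revenue-generating cores. The sketch below is a hypothetical illustration of that split, not NVIDIA's actual scheduling software.

```python
# Hypothetical placement of workloads on a converged accelerator (GPU + DPU in one card).

WORKLOADS = [
    {"name": "recommendation-inference", "kind": "ai"},
    {"name": "video-analytics", "kind": "ai"},
    {"name": "overlay-networking", "kind": "infrastructure"},
    {"name": "distributed-firewall", "kind": "infrastructure"},
    {"name": "order-service", "kind": "application"},
]

def place(workload: dict) -> str:
    # AI work goes to the GPU, infrastructure work to the DPU's ARM cores and
    # accelerators, and the application itself keeps the host x86 CPUs.
    return {"ai": "GPU", "infrastructure": "DPU", "application": "host x86"}[workload["kind"]]

if __name__ == "__main__":
    for w in WORKLOADS:
        print(f"{w['name']:26s} -> {place(w)}")
```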
And then our software stack which I talked about, which is called DOCA. We're on our second version, our DOCA 1.2 we're releasing DOCA 1.3 in about two months from now. And so that's really our open ecosystem framework. So allow you to program the BlueField. So we have all of our acceleration libraries, security libraries, that's all packed into this SDK called DOCA. And it really gives that simplicity to our partners to be able to develop on top of BlueField. So as we add new generations of BlueField, next year we'll have another version and so on and so forth. DOCA is really that unified layer that allows BlueField to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once. And then it automatically works with future generations of BlueField. So that's sort of the nice thing around DOCA. And then in terms of our go to market model we're working with every major OEM. Later on this year you'll see, major server manufacturers releasing BlueField enabled servers, so more to come. >> Awesome, save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. >> And one thing I'll add is we are, we have a number of customers as you'll hear in the next segment that are already signed up and will be working with us for our early field trial starting late April early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft. If you're interested in signing up for being part of our field trial and providing feedback on the product >> Awesome innovation and networking. Thanks so much for sharing the news. Really appreciate, thanks so much. In a moment we'll be back to look deeper in the product the integration, security, zero trust use cases. You're watching theCUBE, the leader in enterprise tech coverage. (upbeat music)
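The write-once promise of an SDK that stays forwards and backwards compatible can be pictured as a thin abstraction layer. To be clear, the code below does not use the real DOCA libraries or their function names; it is a generic, hypothetical illustration of how a stable interface lets the same partner code run unchanged on successive hardware backends.

```python
# Hypothetical illustration of forward/backward compatibility behind one SDK interface.
# These classes are NOT the DOCA API; they only show the pattern being described.

from abc import ABC, abstractmethod

class FlowAccelerator(ABC):
    """The stable interface a partner writes against once."""
    @abstractmethod
    def offload_flow(self, match: str, action: str) -> str: ...

class Gen2Backend(FlowAccelerator):
    def offload_flow(self, match: str, action: str) -> str:
        return f"[gen2] offloaded {match} -> {action}"

class Gen3Backend(FlowAccelerator):
    # Newer hardware: same interface, better numbers underneath.
    def offload_flow(self, match: str, action: str) -> str:
        return f"[gen3] offloaded {match} -> {action} (line-rate path)"

def partner_application(accel: FlowAccelerator) -> None:
    # Partner code never changes when the hardware generation does.
    print(accel.offload_flow("tcp dst 5432", "forward to db-segment"))

if __name__ == "__main__":
    for backend in (Gen2Backend(), Gen3Backend()):
        partner_application(backend)
```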

Published Date : Mar 16 2022

SUMMARY :

Mike Capuano, CMO of Pluribus Networks, and Ami Badani, VP of Networking, Marketing, and Developer Ecosystem at NVIDIA, introduce a unified cloud networking vision and the unified cloud fabric: an SDN-automated, hardware-accelerated fabric that extends from the switch to the NVIDIA BlueField-2 DPU, bringing distributed security, micro-segmentation, and a single operating model across clouds. Early field trials begin in late April, with registration at www.pluribusnetworks.com/eft.

Changing the Game for Cloud Networking | Pluribus Networks


 

>>Everyone wants a cloud operating model. Since the introduction of the modern cloud last decade, the entire technology landscape has changed. We've learned a lot from the hyperscalers, especially from AWS. Now, one thing is certain in the technology business: it's so competitive that if a faster, better, cheaper idea comes along, the industry will move quickly to adopt it. They'll add their unique value and then they'll bring solutions to the market. And that's precisely what's happening throughout the technology industry because of cloud. One of the best examples is Amazon's Nitro, AWS's custom-built hypervisor that delivers on the promise of more efficiently using resources and expanding things like processor optionality for customers. It's a secret weapon for Amazon. As we wrote last year, every infrastructure company needs something like Nitro to compete. Why do we say this? Well, Wikibon, our research arm, estimates that nearly 30% of CPU cores in the data center are wasted. They're doing work that they weren't designed to do well, specifically offloading networking, storage, and security tasks. So if you can eliminate that waste, you can recapture dollars that drop right to the bottom line. That's why every company needs a Nitro-like solution. As a result of these developments, customers are rethinking networks and how they utilize precious compute resources. They can't, or won't, put everything into the public cloud, for many reasons. That's one of the tailwinds for Tier 2 cloud service providers and why they're growing so fast. They give options to customers that don't want to keep investing in building out their own data centers and don't want to migrate all their workloads to the public cloud. So these providers and on-prem customers want to be more like hyperscalers. They want to be more agile, and to do that they're distributing networking and security functions and pushing them closer to the applications. Now, at the same time, they're unifying their view of the network so it can be less fragmented, managed more efficiently, with more automation and better visibility. How are they doing this? Well, that's what we're going to talk about today. Welcome to Changing the Game for Cloud Networking, made possible by Pluribus Networks. My name is Dave Vellante, and today on this special CUBE presentation, John Furrier and I are going to explore these issues in detail. We'll dig into new solutions being created by Pluribus and NVIDIA to specifically address offloading wasted resources, accelerating performance, isolating data, and making networks more secure, all while unifying the network experience. We're going to start on the west coast in our Palo Alto studios, where John will talk to Mike Capuano of Pluribus and Ami Badani of NVIDIA. Then we'll bring on Alessandro Barbieri of Pluribus and Pete Lumbis from NVIDIA to take a deeper dive into the technology. And then we're going to bring it back here to our east coast studio and get the independent analyst perspective from Bob Laliberte of the Enterprise Strategy Group. We hope you enjoy the program. Okay, let's do this. Over to John. >>Okay, let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of Networking, Marketing, and Developer Ecosystem at NVIDIA. Great to have you, welcome folks. >>Thank you. >>Thanks. >>So let's get into the problem situation with cloud unified networking. What problems are out there?
What challenges do cloud operators have Mike let's get into it. >>Yeah, it really, you know, the challenges we're looking at are for non hyperscalers that's enterprises, governments, um, tier two service providers, cloud service providers, and the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies. And second, they need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Um, really ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. Um, we're seeing a growth in cyber attacks. Um, it's, it's not slowing down. It's only getting worse and, you know, solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >>Okay. With that goal in mind, what's the pluribus vision. How does this tie together? >>Yeah. So, um, basically what we see is, uh, that this demands a new architecture and that new architecture has four tenants. The first tenant is unified and simplified cloud networks. If you look at cloud networks today, there's, there's sort of like discreet bespoke cloud networks, you know, per hypervisor, per private cloud edge cloud public cloud. Each of the public clouds have different networks that needs to be unified. You know, if we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all those locations with one command and not have to go to each one. The second is like I mentioned, distributed security, um, distributed security without compromise, extended out to the host is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. >>You know, it's, it's, it's sort of like with security, you really can't see you can't protect what you can't see. So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN enabled. So this is related to my comment about abstraction abstract, the complexity of all of these discreet networks, physic whatever's down there in the physical layer. Yeah. I don't want to see it. I want to abstract it. I wanted to find things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenant is SDN automation. >>Mike, we've been talking on the cube a lot about this architectural shift and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen, how do we get there? How do customers get this vision realized? >>That's a great question. And I appreciate the tee up. I mean, we're, we're here today for that reason. We're introducing two things today. Um, the first is a unified cloud networking vision, and that is a vision of where pluribus is headed with our partners like Nvidia longterm. Um, and that is about, uh, deploying a common operating model, SDN enabled SDN, automated hardware, accelerated across all clouds. 
Um, and whether that's underlying overlay switch or server, um, hype, any hypervisor infrastructure containers, any workload doesn't matter. So that's ultimately where we want to get. And that's what we talked about earlier. Um, the first step in that vision is what we call the unified cloud fabric. And this is the next generation of our adaptive cloud fabric. Um, and what's nice about this is we're not starting from scratch. We have a, a, an award-winning adaptive cloud fabric product that is deployed globally. Um, and in particular, uh, we're very proud of the fact that it's deployed in over a hundred tier one mobile operators as the network fabric for their 4g and 5g virtualized cores. We know how to build carrier grade, uh, networking infrastructure, what we're doing now, um, to realize this next generation unified cloud fabric is we're extending from the switch to this Nvidia Bluefield to DPU. We know there's a, >>Hold that up real quick. That's a good, that's a good prop. That's the blue field and video. >>It's the Nvidia Bluefield two DPU data processing unit. And, um, uh, you know, what we're doing, uh, fundamentally is extending our SDN automated fabric, the unified cloud fabric out to the host, but it does take processing power. So we knew that we didn't want to do, we didn't want to implement that running on the CPU, which is what some other companies do because it consumes revenue generating CPU's from the application. So a DPU is a perfect way to implement this. And we knew that Nvidia was the leader with this blue field too. And so that is the first that's, that's the first step in the getting into realizing this vision. >>I mean, Nvidia has always been powering some great workloads of GPU. Now you've got DPU networking and then video is here. What is the relationship with clothes? How did that come together? Tell us the story. >>Yeah. So, you know, we've been working with pluribus for quite some time. I think the last several months was really when it came to fruition and, uh, what pluribus is trying to build and what Nvidia has. So we have, you know, this concept of a Bluefield data processing unit, which if you think about it, conceptually does really three things, offload, accelerate an isolate. So offload your workloads from your CPU to your data processing unit infrastructure workloads that is, uh, accelerate. So there's a bunch of acceleration engines. So you can run infrastructure workloads much faster than you would otherwise, and then isolation. So you have this nice security isolation between the data processing unit and your other CPU environment. And so you can run completely isolated workloads directly on the data processing unit. So we introduced this, you know, a couple of years ago, and with pluribus, you know, we've been talking to the pluribus team for quite some months now. >>And I think really the combination of what pluribus is trying to build and what they've developed around this unified cloud fabric, uh, is fits really nicely with the DPU and running that on the DPU and extending it really from your physical switch, all the way to your host environment, specifically on the data processing unit. So if you think about what's happening as you add data processing units to your environment. So every server we believe over time is going to have data processing units. So now you'll have to manage that complexity from the physical network layer to the host layer. 
And so what pluribus is really trying to do is extending the network fabric from the host, from the switch to the host, and really have that single pane of glass for network operators to be able to configure provision, manage all of the complexity of the network environment. >>So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. So, you know, if you sort of take that concept of isolation and security isolation, what pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that extended to the data processing unit and really have, um, isolated micro-segmentation workloads, whether it's bare metal cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on, on the DPU. >>You know, what I love about this conversation is it reminds me of when you have these changing markets, the product gets pulled out of the market and, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate what sets this apart for customers with what's in it for the customer? >>Yeah. So I mentioned, you know, three things in terms of the value of what the Bluefield brings, right? There's offloading, accelerating, isolating, that's sort of the key core tenants of Bluefield. Um, so that, you know, if you sort of think about what, um, what Bluefields, what we've done, you know, in terms of the differentiation, we're really a robust platform for innovation. So we introduced Bluefield to, uh, last year, we're introducing Bluefield three, which is our next generation of Bluefields, you know, we'll have five X, the arm compute capacity. It will have 400 gig line rate acceleration, four X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, uh, chips to our portfolio every, every 18 months to two years. Um, so that's sort of one of the key areas of differentiation. The other is the, if you look at Nvidia and, and you know, what we're sort of known for is really known for our AI artificial intelligence and our artificial intelligence software, as well as our GPU. >>So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates the, you know, faster, more efficient, secure AI systems from the core of your data center, all the way out to the edge. And so with Nvidia, we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of that area. And I would say the third area is really around our developer environment. So, you know, one of the key, one of our key motivations at Nvidia is really to have our partner ecosystem, embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, with credit and an SDK, which is an open SDK called Doka, and it's an open SDK for our partners to really build and develop solutions using Bluefield and using all these accelerated libraries that we expose through Doka. 
And so part of our differentiation is really building this open ecosystem for our partners to take advantage and build solutions around our technology. >>You know, what's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment Supercloud or these new capabilities. They can really craft their own, I'd say, custom environment at scale with easy tools. Right. And it's all kind of, again, this is the new architecture Mike, you were talking about, how does customers run this effectively? Cost-effectively and how do people migrate? >>Yeah, I, I think that is the key question, right? So we've got this beautiful architecture. You, you know, Amazon nitro is a, is a good example of, of a smart NIC architecture that has been successfully deployed, but enterprises and serve tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there and they need this architecture to be cost-effective. And, and that's, that's super key. I mean, the reality is deep user moving fast, but they're not going to be, um, deployed everywhere on day one. Some servers will have DPS right away, some servers will have use and a year or two. And then there are devices that may never have DPS, right. IOT gateways, or legacy servers, even mainframes. Um, so that's the beauty of a solution that creates a fabric across both the switch and the DPU, right. >>Um, and by leveraging the Nvidia Bluefield DPU, what we really like about it is it's open. Um, and that drives, uh, cost efficiencies. And then, um, uh, you know, with this, with this, our architectural approach effectively, you get a unified solution across switch and DPU workload independent doesn't matter what hypervisor it is, integrated visibility, integrated security, and that can, uh, create tremendous cost efficiencies and, and really extract a lot of the expense from, from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or to create or deploy our security policy and is deployed everywhere, automatically saving the oppor, the network operations team and the security operations team time. >>All right. So let me rewind that because that's super important. Get the unified cloud architecture, I'm the customer guy, but it's implemented, what's the value again, take, take me through the value to me. I have a unified environment. What's the value. >>Yeah. So I mean, the value is effectively, um, that, so there's a few pieces of value. The first piece of value is, um, I'm creating this clean D mark. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the dev ops team who owned the server and the NetApps team who own the network because they're installing software on the, on the CPU stealing cycles from what should be revenue generating. Uh CPU's. So now by, by terminating the networking on the DPU, we click create this real clean DMARC. So the dev ops folks are happy because they don't necessarily have the skills to manage network and they don't necessarily want to spend the time managing networking. They've got their network counterparts who are also happy the NetApps team, because they want to control the networking. 
>>And now we've got this clean DMARC where the DevOps folks get the services they need and the NetApp folks get the control and agility they need. So that's a huge value. Um, the next piece of value is distributed security. This is essential. I mentioned earlier, you know, put pushing out micro-segmentation and distributed firewall, basically at the application level, right, where I create these small, small segments on an by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Cause the worst thing is a bad actor, penetrates a perimeter firewall and can go wherever they want and wreak havoc. Right? And so that's why this, this is so essential. Um, and the next benefit obviously is this unified networking operating model, right? Having, uh, uh, uh, an operating model across switch and server underlay and overlay, workload agnostic, making the life of the NetApps teams much easier so they can focus their time on really strategy instead of spending an afternoon, deploying a single villain, for example. >>Awesome. And I think also from my standpoint, I mean, perimeter security is pretty much, I mean, they're out there, it gets the firewall still out there exists, but pretty much they're being breached all the time, the perimeter. So you have to have this new security model. And I think the other thing that you mentioned, the separation between dev ops is cool because the infrastructure is code is about making the developers be agile and build security in from day one. So this policy aspect is, is huge. Um, new control points. I think you guys have a new architecture that enables the security to be handled more flexible. >>Right. >>That seems to be the killer feature here, >>Right? Yeah. If you look at the data processing unit, I think one of the great things about sort of this new architecture, it's really the foundation for zero trust it's. So like you talked about the perimeter is getting breached. And so now each and every compute node has to be protected. And I think that's sort of what you see with the partnership between pluribus and Nvidia is the DPU is really the foundation of zero trust. And pluribus is really building on that vision with, uh, allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >>This is super exciting. This is an illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I gotta ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with an Nvidia? >>Sure. Um, I mean, we're, you know, we're super excited about the partnership, obviously we're here together. Um, we think we've got a really good solution for the market, so we're jointly marketing it. Um, uh, you know, obviously we appreciate that Nvidia is open. Um, that's, that's sort of in our DNA, we're about open networking. They've got other ISV who are gonna run on Bluefield too. We're probably going to run on other DPS in the, in the future, but right now, um, we're, we feel like we're partnered with the number one, uh, provider of DPS in the world and, uh, super excited about, uh, making a splash with it. >>I'm in get the hot product. >>Yeah. So Bluefield too, as I mentioned was GA last year, we're introducing, uh, well, we now also have the converged accelerator. 
So I talked about artificial intelligence or artificial intelligence with the Bluefield DPU, all of that put together on a converged accelerator. The nice thing there is you can either run those workloads. So if you have an artificial intelligence workload and an infrastructure workload, you can warn them separately on the same platform or you can actually use, uh, you can actually run artificial intelligence applications on the Bluefield itself. So that's what the converged accelerator really brings to the table. Uh, so that's available now. Then we have Bluefield three, which will be available late this year. And I talked about sort of, you know, uh, how much better that next generation of Bluefield is in comparison to Bluefield two. So we will see Bluefield three shipping later on this year, and then our software stack, which I talked about, which is called Doka we're on our second version are Doka one dot two. >>We're releasing Doka one dot three, uh, in about two months from now. And so that's really our open ecosystem framework. So allow you to program the Bluefields. So we have all of our acceleration libraries, um, security libraries, that's all packed into this STK called Doka. And it really gives that simplicity to our partners to be able to develop on top of Bluefield. So as we add new generations of Bluefield, you know, next, next year, we'll have, you know, another version and so on and so forth Doka is really that unified unified layer that allows, um, Bluefield to be both forwards compatible and backwards compatible. So partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefields. So that's sort of the nice thing around, um, around Doka. And then in terms of our go to market model, we're working with every, every major OEM. So, uh, later on this year, you'll see, you know, major server manufacturers, uh, releasing Bluefield enabled servers. So, um, more to come >>Awesome, save money, make it easier, more capabilities, more workload power. This is the future of, of cloud operations. >>Yeah. And, and, and, uh, one thing I'll add is, um, we are, um, we have a number of customers as you'll hear in the next segment, um, that are already signed up and we'll be working with us for our, uh, early field trial starting late April early may. Um, we are accepting registrations. You can go to www.pluribusnetworks.com/e F T a. If you're interested in signing up for, um, uh, being part of our field trial and providing feedback on the product, >>Awesome innovation and network. Thanks so much for sharing the news. Really appreciate it. Thanks so much. Okay. In a moment, we'll be back to look deeper in the product, the integration security zero trust use cases. You're watching the cube, the leader in enterprise tech coverage, >>Cloud networking is complex and fragmented slowing down your business. How can you simplify and unify your cloud networks to increase agility and business velocity? >>Pluribus unified cloud networking provides a unified simplify and agile network fabric across all clouds. It brings the simplicity of a public cloud operation model to private clouds, dramatically reducing complexity and improving agility, availability, and security. Now enterprises and service providers can increase their business philosophy and delight customers in the distributed multi-cloud era. We achieve this with a new approach to cloud networking, pluribus unified cloud fabric. 
This open, vendor-independent network fabric unifies networking and security across distributed clouds. The first step is extending the fabric to servers equipped with data processing units, unifying the fabric across switches and servers, and it doesn't stop there. The fabric is unified across underlay and overlay networks and across all workloads and virtualization environments. The unified cloud fabric is optimized for seamless migration to this new distributed architecture, leveraging the power of the DPU for application-level micro-segmentation, distributed firewall, and encryption, while still supporting those servers and devices that are not equipped with a DPU. Ultimately the unified cloud fabric extends seamlessly across distributed clouds, including central, regional, and edge private clouds and public clouds. The unified cloud fabric is a comprehensive network solution that includes everything you need for cloud networking: built-in SDN automation, distributed security without compromises, pervasive wire-speed visibility and application insight, available on your choice of open networking switches and DPUs, all at the lowest total cost of ownership. The end result is a dramatically simplified unified cloud networking architecture that unifies your distributed clouds and frees your business to move at cloud speed. >>To learn more, visit www.pluribusnetworks.com. >>Okay, we're back. I'm John Furrier with theCUBE, and we're going to go deeper into the unified cloud networking solution from Pluribus and NVIDIA. We'll examine some of the use cases with Alessandro Barbieri, VP of Product Management at Pluribus Networks, and Pete Lumbis, Director of Technical Marketing at NVIDIA, joining remotely. Guys, thanks for coming on. Appreciate it. >>Yeah. >>So deep dive: let's get into the what and how. Alessandro, we heard earlier about the Pluribus and NVIDIA partnership and the solution you're working on together. What is it? >>Yeah. First let's talk about the what. What are we really integrating with the NVIDIA BlueField DPU technology? Pluribus has been shipping, in volume, in multiple mission-critical networks, the Netvisor ONE network operating system. It runs today on merchant silicon switches, and effectively it's a standard, open network operating system for the data center. And the novelty about this system is that it integrates a distributed control plane that automates, effectively, an SDN overlay. This automation is completely open, interoperable, and extensible to other types of clouds; it is not a closed system. And this is actually what we're now porting to the NVIDIA DPU. >>Awesome. So how does it integrate into NVIDIA hardware, and specifically how is Pluribus integrating its software with the NVIDIA hardware? >>Yeah, I think we leverage some of the interesting properties of the BlueField DPU hardware, which allow us to integrate our software, our network operating system, in a manner which is completely isolated and independent from the guest operating system. So the first byproduct of this approach is that whatever we do at the network level on the DPU card is completely agnostic to the hypervisor layer or OS layer running on the host. Even more, we can also independently manage this network node, the switch-on-a-NIC, effectively, completely independently from the host.
You don't have to go through the network operating system running on x86 to control this network node. So you get, effectively, the experience of a top-of-rack switch for virtual machines, or a top-of-rack for Kubernetes pods, where, if you allow me the analogy, instead of connecting a server NIC directly to a switch port, you're now connecting a VM virtual interface to a virtual interface on the switch-on-a-NIC. >>And also, as part of this integration, we put a lot of effort, a lot of emphasis, into accelerating the entire data plane for networking and security. So we are taking advantage of DOCA, the NVIDIA DOCA API, to program the accelerators, and this accomplishes two things. Number one, you have much greater performance, much better performance, than running the same network services on an x86 CPU. And second, this gives you the ability to free up, I would say, around 20 to 25% of the server capacity, to be devoted either to additional workloads to run your cloud applications, or perhaps you can actually shrink the power footprint and compute footprint of your data center by 20% if you want to run the same number of compute workloads. So great efficiencies in the overall approach. >>And this is completely independent of the server CPU, right? >>Absolutely. There is zero code running on the x86, and this is what we think enables a very clean demarcation between compute and network. >>So Pete, I gotta get you in here. We heard that the DPU enables a cleaner separation of DevOps and NetOps. Can you explain why that's important? Because everyone's talking DevSecOps right now; you've got NetOps, NetSecOps, this separation. Why is this clean separation important? >>Yeah, I think it's a pragmatic solution, in my opinion. You know, we wish the world was all kind of rainbows and unicorns, but it's a little messier than that. And I think a lot of the DevOps stuff, that mentality and philosophy, there's a natural fit there, right? You have applications running on servers, so you're talking about developers with those applications integrating with the operators of those servers. Well, the network has always been this other thing, and the network operators have always had a very different approach to things than compute operators. And, you know, I think that we in the networking industry have gotten closer together, but there's still a gap, there's still some distance, and I think that distance isn't going to be closed. And so, again, it comes down to pragmatism, and one of my favorite phrases is: good fences make good neighbors. And that's what this is. >>Yeah, that's a great point, because DevOps has become kind of the calling card for cloud, right? But DevOps is simply infrastructure as code, and infrastructure is networking, right? So if infrastructure is code, you're talking about that part of the stack under the covers, under the hood, if you will. This is a super important distinction, and this is where the innovation is. Can you elaborate on how you see that? Because this is really where the action is right now. >>Yeah, exactly. And I think that's where, one, from the policy, the security, the zero-trust aspect of this, right? If you get it wrong on that network side, all of a sudden you can totally open up those capabilities. And so security is part of that.
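The 20 to 25 percent figure above turns into a simple consolidation estimate. The rack size and reclaim fraction in the sketch below are assumptions used only to show the arithmetic, not vendor-measured results: the same workload either fits on fewer servers, or the existing servers take on proportionally more application work.

```python
# Rough consolidation math for "free up ~20-25% of server capacity by offloading to the DPU".
# Inputs are illustrative assumptions, not measurements.

import math

SERVERS = 48            # one rack
RECLAIM = 0.20          # fraction of host capacity returned when the data plane moves to the DPU

def servers_needed_for_same_work(servers: int, reclaim: float) -> int:
    # Each server now delivers (1 + reclaim) of its former useful capacity,
    # so the same total workload fits on fewer machines.
    return math.ceil(servers / (1 + reclaim))

def extra_workload_headroom(servers: int, reclaim: float) -> float:
    return servers * reclaim   # in "server-equivalents" of new application capacity

if __name__ == "__main__":
    print(f"Same workload, fewer servers: {servers_needed_for_same_work(SERVERS, RECLAIM)} of {SERVERS}")
    print(f"Or keep {SERVERS} servers and gain ~{extra_workload_headroom(SERVERS, RECLAIM):.1f} "
          f"server-equivalents of application headroom")
```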
But the other part is thinking about this at scale, right? So we're taking one top-of-rack switch and adding, you know, up to 48 servers per rack. And so that ability to automate, orchestrate, and manage at scale becomes absolutely critical. >>Alessandro, this is really the why we're talking about here, and this is scale. And again, getting it right: if you don't get it right, you're going to be in real trouble. So this is a huge deal. Networking matters, security matters, automation matters, DevOps, NetOps, all coming together, clean separation. Help us understand how this joint solution with NVIDIA fits into the Pluribus unified cloud networking vision, because this is what people are talking about and working on right now. >>Yeah, absolutely. So I think with this solution we're attacking two major problems in cloud networking. One is the operation of cloud networking, and the second is distributing security services in the cloud infrastructure. First, let me talk about the first one: what are we really unifying? If we're unifying something, something must be at least fragmented or disjointed, and what is disjointed is actually the network in the cloud. If you look holistically at how networking is deployed in the cloud, you have your physical fabric infrastructure, right? Your switches and routers. You build your IP Clos fabric, leaf-and-spine topologies. This is actually a well-understood problem, I would say. There are multiple vendors with, let's say, similar technologies, very well standardized, well understood, and almost a commodity, I would say, building an IP fabric these days. But this is not the place where you deploy most of your services in the cloud, particularly from a security standpoint. Those services are actually now moved into the compute layer, where cloud builders actually have to instrument a separate network virtualization layer, where they deploy segmentation and security closer to the workloads.
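The scale point raised a moment ago, one top-of-rack switch plus up to 48 servers per rack, each eventually carrying a DPU, can be made concrete with a quick count. The workload and rule figures below are assumptions for illustration only; they show how fast the number of enforcement points and policy entries grows, which is exactly why a single automated control plane matters more than any individual device.

```python
# Why "automate, orchestrate and manage at scale" is the hard part.
# Every figure here is an assumption for illustration only.

RACKS = 20
SERVERS_PER_RACK = 48          # the figure cited in the conversation
WORKLOADS_PER_SERVER = 30      # VMs/containers, assumed
RULES_PER_WORKLOAD = 8         # assumed micro-segmentation entries

def fabric_scale() -> None:
    tor_switches = RACKS                          # one top-of-rack switch per rack
    dpus = RACKS * SERVERS_PER_RACK               # one DPU per server over time
    enforcement_points = tor_switches + dpus
    policy_entries = dpus * WORKLOADS_PER_SERVER * RULES_PER_WORKLOAD
    print(f"Enforcement points to manage: {enforcement_points}")
    print(f"Policy entries across the fabric: {policy_entries:,}")
    print("Hand-configuring that per device does not scale; declaring intent once does.")

if __name__ == "__main__":
    fabric_scale()
```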
That's probably the number one. >>You know, it's interesting, man, I hear you talking, I hear one network, different operating models. It reminds me of the old serverless days. You know, there's still servers, but they call it serverless. Is there going to be a term "networkless"? Because at the end of the day, it should be one network, not multiple operating models. This is a problem that you guys are working on, is that right? I mean, I'm just joking, serverless and networkless, but the idea is it should be one thing. >>Yeah, effectively. What we're trying to do is recompose this fragmentation in terms of network operation across physical networking and server networking. Server networking is where the majority of the problems are, because as much as you have standardized the ways of building physical networks and cloud fabrics with IP protocols and the internet, you don't have that kind of operational efficiency at the server layer. And this is what we're trying to attack first with this technology. The second aspect we're trying to attack is how we distribute the security services throughout the infrastructure more efficiently, whether it's micro-segmentation, stateful firewall services, or even encryption. Those are all capabilities enabled by the Bluefield technology, and we can actually integrate those capabilities directly into the network fabric, limiting dramatically, at least for east-west traffic, the sprawl of security appliances, whether virtual or physical, that is typically the way people today segment and secure the traffic in the cloud. >>Awesome. Pete, all kidding aside about networkless and serverless, kind of a fun play on words there, the network is one thing, it's basically distributed computing, right? So I'd love to get your thoughts about this distributed security with zero trust as the driver for this architecture you guys are doing. Can you share in more detail why a DPU-based approach is better than the alternatives? >>Yeah, I think what's beautiful, and kind of what the DPU brings that's new to this model, is a completely isolated compute environment inside. So, you know, it's the "yo dawg, I heard you like a server, so I put a server inside your server." And so we provide Arm CPUs, memory and network accelerators inside, and that is completely isolated from the host. The actual x86 host just thinks it has a regular NIC in there, but you actually have this full control plane thing. It's just like taking your top-of-rack switch and shoving it inside of your compute node. And so you have not only the separation within the data plane, but you have this complete control plane separation. So you have this element that the network team can now control and manage, and we're taking all of the functions we used to do at the top-of-rack switch and shoving them in there now. And, you know, as time has gone on, we've struggled to put more and more into that network edge. And the reality is the network edge is the compute layer, not the top-of-rack switch layer. And so that provides this phenomenal enforcement point for security and policy. And I think outside of today's solutions around virtual firewalls, the other option is centralized appliances.
And even if you can get one that can scale large enough, the question is, can you afford it? And so what we end up doing is we kind of hope that a VLAN is good enough, or we hope that the VXLAN tunnel is good enough, and we can't actually apply more advanced techniques there, because we can't physically or financially afford that appliance to see all of the traffic. And now that we have a distributed model with this accelerator, we can do it. >>So what's in it for the customer? Real quick, because I think this is an interesting point. You mentioned policy; everyone in networking knows policy is just a great thing, and you hear it being talked about up the stack as well, when you start getting into orchestrating microservices, containers and modern applications. What's the benefit to the customers with this approach? Because what I heard was more scale, more edge deployment, flexibility relative to security policies and application enablement. What does the customer get out of this architecture? What's the enablement? >>It comes down to taking, again, the capabilities that were in that top-of-rack switch and pushing them down. So that means simplicity, smaller blast radiuses for failure, smaller failure domains; maintenance on the networks and the systems becomes easier. Your ability to integrate across workloads becomes infinitely easier. And again, you know, we always want to kind of separate each one of those layers. So just as in, say, a VXLAN network, my leaf and spine don't have to be tightly coupled together, I can now do this at a different layer. And so you can run a DPU with any networking in the core there. So you get this extreme flexibility. You can start small, you can scale large. You know, to me, the possibilities are endless. >>Yes, it's a great security control plane. Really, flexibility is key, and also being situationally aware of any kind of threats or new vectors or whatever's happening in the network. Alessandra, this is huge upside, right? You've already identified some successes with some customers in your early field trials. What are they doing and why are they attracted to the solution? >>Yeah, I think the response from customers has been the most encouraging and exciting thing for us, to continue to work on and develop this product, and we have actually learned a lot in the process. We talked to tier-two and tier-three cloud providers, we talked to service providers, telco types of networks, as well as large enterprise customers. Let me call out a couple of examples here, just to give you a flavor. There is a service provider, a cloud provider in Asia who is actually managing a cloud where they are offering services based on multiple hypervisors. They have native services based on Xen, but they also on-ramp into the cloud workloads based on ESXi and KVM, depending on what the customer picks from the menu. >>And they now have the problem of orchestrating, through their orchestrator, integrating with Xen Center, with vSphere, with OpenStack to coordinate these multiple environments, and in the process, to provide security, they actually deploy virtual appliances everywhere, which has a lot of cost, complication, and eats up into the server CPU.
What they saw in this technology, and they actually call it game-changing, is the ability to remove all this complexity with a single network and distribute the micro-segmentation service directly into the fabric. And overall, they're hoping to get out of it a tremendous OpEx benefit and overall operational simplification for the cloud infrastructure. That's one potential use case. Another large, global enterprise customer is running both ESXi and Hyper-V in their environment, and they don't have a solution to do micro-segmentation consistently across hypervisors. >>So again, micro-segmentation is a huge driver; security looks like a recurring theme talking to most of these customers. And in the telco space, we're working with a few telco customers on the EFT program, where the main goal is actually to harmonize network operations. They typically handle all the VNFs with their own homegrown DPDK stack. This is overly complex; it is frankly also slow and inefficient. And then they have a physical network to manage. The idea of having, again, one network to coordinate the provisioning of cloud services between the telco VNFs and the rest of the infrastructure is extremely powerful, on top of the offloading capability of the Bluefield DPUs. Those are just some examples. >>That was a great use case, and a lot more potential. I see that with the unified cloud networking. Great stuff. Pete, shout out to you guys at Nvidia; we've been following your success for a long time, and you keep innovating as cloud scales, and Pluribus here with the unified networking, kind of bringing it to the next level. Great stuff. Great to have you guys on. And again, software keeps driving the innovation; networking is just a part of it, and it's the key solution. So I've got to ask both of you to wrap this up. How can cloud operators who are interested in this new architecture and solution learn more? Because this is an architectural shift. People are working on this problem, they're trying to think about multiple clouds, trying to think about unification around the network and giving more security, more flexibility to their teams. How can people learn more? >>Yeah, so Alessandra and I have a talk at the upcoming Nvidia GTC conference. That's the week of March 21st through 24th. You can go and register for free at nvidia.com/gtc, and you can also watch the recorded sessions on YouTube a little bit after the fact. We're going to dive a little bit more into the specifics and the details of what we're providing in the solution. >>Alessandra, how can people learn more? >>Yeah, absolutely. People can go to the Pluribus website, www.pluribusnetworks.com/eft, and fill out the form, and they will be contacted by Pluribus to either learn more or to actually sign up for the early field trial program, which starts at the end of April. >>Okay. Well, we'll leave it there. Thanks to you both for joining. Appreciate it. Up next, you're going to hear an independent analyst perspective and review some of the research from the Enterprise Strategy Group, ESG. I'm John Furrier with theCUBE. Thanks for watching. >>Okay. We've heard from the folks at Pluribus Networks and Nvidia about their effort to transform cloud networking and unify bespoke infrastructure.
Now let's get the perspective from an independent analyst, and to do so we welcome in ESG senior analyst Bob Laliberte. Bob, good to see you. Thanks for coming into our east coast studios. >>Oh, thanks for having me. It's great to be >>Here. Yeah. So this idea of a unified cloud networking approach, how serious is it? What's driving it? >>Yeah, there are certainly a lot of drivers behind it, but probably first and foremost is the fact that application environments are becoming a lot more distributed, right? The IT pendulum tends to swing back and forth, and we're definitely on one that's swinging from consolidated to distributed. And so applications are being deployed in multiple private data centers, multiple public cloud locations, edge locations. And as a result of that, what you're seeing is a lot of complexity. So organizations are having to deal with this highly disparate environment. They have to secure it, they have to ensure connectivity to it, and all of that's driving up complexity. In fact, when we asked in one of our last surveys last year about network complexity, more than half, 54%, came out and said, hey, our network environment is now either more or significantly more complex than it used to be. >>And as a result of that, what you're seeing is it's really impacting agility. So everyone's moving to these modern application environments, distributing them across areas so they can improve agility, yet it's creating more complexity. So it's a little bit counter to the intent and, you know, really counter to their overarching digital transformation initiatives. From what we've seen, nine out of 10 organizations today are either beginning, in process, or have a mature digital transformation initiative, but their top goals, when you look at them, probably shouldn't be a surprise: the number one goal is driving operational efficiency. So it makes sense. I've distributed my environment to create agility, but I've created a lot of complexity, so now I need these tools that are going to help me drive operational efficiency and drive better experience. >>I love how you bring in the data; ESG does a great job with that. The question is, is it about just unifying existing networks, or is there a need to rethink, kind of do over, how networks are built? >>Yeah, that's a really good point, because certainly unifying networks helps, right? Driving any kind of operational efficiency helps. But in this particular case, because we've made the transition to new application architectures, and the impact that's having as well, it's really about changing and bringing in new frameworks and new network architectures to accommodate those new application architectures. And by that, what I'm talking about is the fact that these new modern application architectures, microservices, containers, are driving a lot more east-west traffic. So in the old days, it used to be easier: north-south coming out of the server, one application per server, things like that. Now you've got hundreds, if not thousands, of microservices communicating with each other and users communicating to them. So there's a lot more traffic, and a lot of it's taking place within the servers themselves.
The other issue that you're starting to see as well, from that security perspective: when we were all consolidated, we had those perimeter-based legacy, you know, castle-and-moat security architectures, but that doesn't work anymore when the applications aren't in the castle, right? >>When everything's spread out, that no longer happens. So we're absolutely seeing organizations trying to make a shift. And I think, much like the shift that we're seeing with all the remote workers and the SASE framework to enable a secure framework there, it's almost the same thing. We're seeing this distributed services framework come up to support the applications better within the data centers, within the cloud data centers, so that you can drive that security closer to those applications and make sure they're fully protected. And that's really driving a lot of the zero trust stuff you hear, right? Never trust, always verify, making sure that everything is really secure. Micro-segmentation is another big area, so ensuring that these applications, when they're connected to each other, are fully segmented out. And that's again because if someone does get a breach, if they are in your data center, you want to limit the blast radius, you want to limit the amount of damage that's done. By doing that, it really makes it a lot harder for them to see everything that's in there. >>You know, you mentioned zero trust. It used to be a buzzword, and now it's become a mandate. And I love the moat analogy. You know, you build a moat to protect the queen and the castle; the queen's left the castle, it's all distributed. So how should we think about this Pluribus and Nvidia solution? There's a spectrum. Help us understand that: you've got appliances, you've got pure software solutions, you've got what Pluribus is doing with Nvidia. Help us understand that. >>Yeah, absolutely. I think as organizations recognize the need to distribute their services closer to the applications, they're trying different models. So from a legacy approach, from a security perspective, they've got these centralized firewalls that they're deploying within their data centers. The hard part of that is if you want all this traffic to be secured, you're actually sending it out of the server, up through the rack, usually to a different location in the data center and back. So with the need for agility, with the need for performance, that adds a lot of latency. Plus, when you start needing to scale, that means adding more and more network connections, more and more appliances. So it can get very costly as well as impacting the performance. The other way that organizations are seeking to solve this problem is by taking the software itself and deploying it on the servers. Okay, so that's a great approach, right? It brings it really close to the applications, but there are a couple of things you start running into there. One is that you start seeing the DevOps teams take on that networking and security responsibility, which they >>Don't want to >>Do, they don't want to do, right. And the operations team loses a little bit of visibility into that. Plus, when you load the software onto the server, you're taking up precious CPU cycles. So if you really want your applications to perform at an optimized state, having additional software on there isn't going to do it.
So, you know, when we think about all those types of things, certainly one side effect of that is the impact on performance, but there's also a cost. If you have to buy more servers because your CPUs are being utilized, and you have hundreds or thousands of servers, those costs are going to add up. So what Nvidia and Pluribus have done by working together is to take some of those services and deploy them onto a SmartNIC, right? >>To be able to deploy the DPU-based SmartNIC into the servers themselves. And then Pluribus has come in and said, we're going to create that unified fabric across the networking space, into those networking services, all the way down to the server. So the benefits of having that are pretty clear in that you're offloading that capability from the server, so your CPUs are optimized and you're saving a lot of money. You're not having to go outside of the server and go to a different rack somewhere else in the data center, so your performance is going to be optimized as well; you're not going to incur a latency hit for every round trip to the firewall and back. So I think all those things are really important. Plus, from an organizational aspect, we talked about the dev ops and net ops teams: the network operations teams can now work with the security teams to establish the security policies and the networking policies, so the dev ops teams don't have to worry about that. Essentially they just create the guardrails and let the dev ops team run, because that's what they want; they want that agility and speed. >>Yeah. Your point about CPU cycles is key. I mean, it's estimated that 25 to 30% of CPU cycles in the data center are wasted; the cores are wasted doing storage offload or networking or security offload. And, you know, I've said many times everybody needs a Nitro, like Amazon's got, but you can't; you can only get Amazon Nitro if you go into AWS, right? Everybody needs a Nitro. So is that how we should think about this? >>Yeah, that's a great analogy to think about this, and I think I would take it a step further, because it's almost the opposite end of the spectrum, in that Pluribus and Nvidia are doing this in a very open way. Pluribus has always been a proponent of open networking, and what they're trying to do is extend that now to these distributed services. So leveraging the work with Nvidia, who's also open as well, they're able to bring that to bear so that organizations can take advantage not only of these distributed services, but also of that unified networking fabric, that unified cloud fabric, across that environment from the server across the switches. The other key piece of what Pluribus is doing, because they've been doing this for a while now, and they've been doing it with the older application environments and the older server environments, is that they're able to provide that unified networking experience across a host of different types of servers and platforms. So you can have not only the modern applications supported, but also the legacy environments, you know, bare metal. You can go any type of virtualization, you can run containers, et cetera. So a wide gamut of different technologies hosting those applications, supported by a unified cloud fabric from Pluribus. >>So what does that mean for the customer? I don't have to rip and replace my whole infrastructure, right? >>Yeah.
Well, think what it does, again, from that operational efficiency standpoint: when you're going from a legacy environment to that modern environment, it helps with the migration, it helps you accelerate that migration, because you're not switching between different management systems to accomplish it. You've got the same unified networking fabric that you've been working with to enable you to run your legacy environment as well as transfer over to those modern applications. Okay. >>So your people are comfortable with the skill sets, et cetera. All right, I'll give you the last word. Give us the bottom line here. >>So, yeah, I think obviously with all the modern applications that are coming out, the distributed application environments, it's really posing a lot of risk for these organizations to get not only security but also visibility into those environments. And so organizations have to find solutions. As I said at the beginning, they're looking to drive operational efficiency. So getting operational efficiency from a unified cloud networking solution that goes from the server, across the network, to multiple different cloud environments is certainly going to help organizations drive that operational efficiency. It's going to help them save money, gain visibility and security, and even get open networking. So it's a great opportunity for organizations, especially large enterprises and cloud providers who are trying to build that hyperscaler-like environment. You mentioned the Nitro card, right? This is a great way to do it with an open solution. >>Bob, thanks so much for coming in and sharing your insights. Appreciate it. >>You're welcome. Thanks. >>Thanks for watching the program today. Remember, all these videos are available on demand at thecube.net. You can check out all the news from today at siliconangle.com, and of course pluribusnetworks.com. Many thanks to Pluribus for making this program possible and sponsoring theCUBE. This is Dave Vellante. Thanks for watching. Be well, and we'll see you next time.
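One thread running through this segment is the claim that offloading networking and security gives back a meaningful slice of host CPU: earlier the figure of 20 to 25% of server capacity comes up, and Dave cites estimates of 25 to 30% of data center CPU cycles lost to storage, networking and security work. A rough back-of-the-envelope model, with an illustrative rack size and per-server core count rather than measured numbers, looks like this:

```python
# Rough model of the capacity reclaimed when networking and security services
# move from host CPUs to DPUs. The offload fraction echoes the figures quoted
# in the segment; the rack size and core count are illustrative assumptions.

RACK_SERVERS = 48          # servers per rack, as mentioned in the discussion
CORES_PER_SERVER = 64      # assumed core count per host
OFFLOADED_FRACTION = 0.25  # roughly 20-25% of host capacity spent on infrastructure services

def reclaimed_capacity(servers: int, cores: int, fraction: float) -> dict:
    """Return the host cores freed per rack and the equivalent number of servers."""
    freed_cores = servers * cores * fraction
    return {
        "freed_cores_per_rack": freed_cores,
        "equivalent_servers": freed_cores / cores,
    }

if __name__ == "__main__":
    result = reclaimed_capacity(RACK_SERVERS, CORES_PER_SERVER, OFFLOADED_FRACTION)
    # With these assumptions: 768 cores, i.e. roughly 12 servers' worth of
    # compute per rack that can go back to revenue-generating workloads.
    print(result)
```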

Published Date : Mar 16 2022


Mike Capuano and Ami Badani


 

>>Okay, let's kick things off. We're here with Mike Capuano, the CMO of Pluribus Networks, and Ami Badani, VP of networking marketing and developer ecosystem at Nvidia. Great to have you. Welcome, folks. Thank you. Thanks. So let's get into the problem situation with the cloud unified network. What problems are out there? What challenges do cloud operators have? Mike, let's get into it. >>Yeah, really, you know, the challenges we're looking at are for non-hyperscalers: that's enterprises, governments, tier-two service providers, cloud service providers, and the first mandate for them is to become as agile as a hyperscaler. So they need to be able to deploy services and security policies in seconds. They need to be able to abstract the complexity of the network and define things in software while it's accelerated in hardware. Really, ultimately they need a single operating model everywhere. And then the second thing is they need to distribute networking and security services out to the edge of the host. We're seeing a growth in cyber attacks. It's not slowing down, it's only getting worse, and, you know, solving for this security problem across clouds is absolutely critical. And the way to do it is to move security out to the host. >>Okay. With that goal in mind, what's the Pluribus vision? How does this tie together? >>Yeah, so basically what we see is that this demands a new architecture, and that new architecture has four tenets. The first tenet is unified and simplified cloud networks. If you look at cloud networks today, there are sort of discrete, bespoke cloud networks per hypervisor, per private cloud, edge cloud, public cloud. Each of the public clouds has a different network, and that needs to be unified. You know, if we want these folks to be able to be agile, they need to be able to issue a single command or instantiate a security policy across all of those locations with one command, and not have to go to each one. The second is, like I mentioned, distributed security. Distributed security without compromise, extended out to the host, is absolutely critical. So micro-segmentation and distributed firewalls, but it doesn't stop there. They also need pervasive visibility. It's sort of like with security: you can't protect what you can't see. >>So you need visibility everywhere. The problem is visibility to date has been very expensive. Folks have had to basically build a separate overlay network of taps, packet brokers, tap aggregation infrastructure, and that really needs to be built into this unified network I'm talking about. And the last thing is automation. All of this needs to be SDN-enabled. So this is related to my comment about abstraction: abstract the complexity of all these discrete networks, whatever's down there in the physical layer. I don't want to see it, I want to abstract it, I want to define things in software, but I do want to leverage the power of hardware to accelerate that. So that's the fourth tenet: SDN automation. >>Mike, we've been talking on theCUBE a lot about this architectural shift, and customers are looking at this. This is a big part of everyone who's looking at cloud operations next gen. How do we get there? How do customers get this vision realized? >>That's a great question, and I appreciate the tee-up. I mean, we're here today for that reason. We're introducing two things today. The first is a unified cloud networking vision.
And that is a vision of where Pluribus is headed with our partners like Nvidia long term. And that is about deploying a common operating model, SDN-enabled, SDN-automated, hardware-accelerated, across all clouds, whether that's underlay or overlay, switch or server, any hypervisor infrastructure, containers, any workload; it doesn't matter. So that's ultimately where we want to get, and that's what we talked about earlier. The first step in that vision is what we call the unified cloud fabric, and this is the next generation of our adaptive cloud fabric. What's nice about this is we're not starting from scratch. We have an award-winning adaptive cloud fabric product that is deployed globally, and in particular we're very proud of the fact that it's deployed in over a hundred tier-one mobile operators as the network fabric for their 4G and 5G virtualized cores. We know how to build carrier-grade networking infrastructure. What we're doing now to realize this next generation unified cloud fabric is extending from the switch to this Nvidia Bluefield-2 DPU. We, >>Hold that up real quick. That's a good prop. That's the Bluefield, Nvidia. >>It's the Nvidia Bluefield-2 DPU, data processing unit. And, you know, what we're doing fundamentally is extending our SDN-automated fabric, the unified cloud fabric, out to the host. But it does take processing power, so we knew we didn't want to implement that running on the CPU, which is what some other companies do, because it consumes revenue-generating CPUs from the application. So a DPU is the perfect way to implement this, and we knew that Nvidia was the leader with this Bluefield-2. And so that is the first step in getting to realizing this vision. >>And Nvidia has always been powering some great workloads with GPUs; now you've got DPU networking, and Nvidia is here. What is the relationship with Pluribus? How did that come together? Tell us the story. >>Yeah. So, you know, we've been working with Pluribus for quite some time. I think the last several months was really when it came to fruition, between what Pluribus is trying to build and what Nvidia has. So we have this concept of a Bluefield data processing unit, which, if you think about it, conceptually does really three things: offload, accelerate and isolate. Offload your workloads, infrastructure workloads that is, from your CPU to your data processing unit; accelerate, so there's a bunch of acceleration engines, which means you can run infrastructure workloads much faster than you would otherwise; and then isolation, so you have this nice security isolation between the data processing unit and your other CPU environment, and you can run completely isolated workloads directly on the data processing unit. So we introduced this a couple of years ago, and we've been talking to the Pluribus team for quite some months now. >>And I think really the combination of what Pluribus is trying to build and what they've developed around this unified cloud fabric fits really nicely with the DPU, running that on the DPU and extending it really from your physical switch all the way to your host environment, specifically on the data processing unit. So think about what's happening as you add data processing units to your environment. Every server, we believe, over time is going to have data processing units.
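The "single command, every location" tenet from a moment ago comes down to a controller fanning one policy intent out to every enforcement point it manages, switches and DPUs alike. A toy sketch of that control-plane pattern, with invented class and method names rather than any actual Pluribus or Nvidia API, might look like this:

```python
# Conceptual sketch of one policy intent fanned out by a fabric controller to
# every enforcement point: leaf switches and DPUs alike.
# All names here are illustrative, not a real product API.

class EnforcementPoint:
    def __init__(self, name, kind):
        self.name = name          # e.g. "leaf-1" or "server-17-dpu"
        self.kind = kind          # "switch" or "dpu"
        self.policies = []

    def apply(self, policy):
        # A real device would program TCAM or eSwitch rules here.
        self.policies.append(policy)

class FabricController:
    def __init__(self, points):
        self.points = points

    def apply_everywhere(self, policy):
        """Single operator action; the controller handles the fan-out."""
        for point in self.points:
            point.apply(policy)

fabric = FabricController([
    EnforcementPoint("leaf-1", "switch"),
    EnforcementPoint("leaf-2", "switch"),
    EnforcementPoint("server-17-dpu", "dpu"),
    EnforcementPoint("server-18-dpu", "dpu"),
])

# One "command" from the operator...
fabric.apply_everywhere({"segment": "pci-workloads", "default": "deny"})

# ...lands on every switch and every DPU in the fabric.
assert all(point.policies for point in fabric.points)
```

The operator issues the intent once; keeping every switch and DPU in sync is the controller's job, which is the single-pane-of-glass point developed next.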
So now you'll have to manage that complexity from the physical network layer to the host layer. And so what pluribus is really trying to do is extending the network fabric from the host, from the switch to the host, and really have that single pane of glass for network operators to be able to configure provision, manage all of the complexity of the network environment. >>So that's really how the partnership truly started. And so it started really with extending the network fabric, and now we're also working with them on security. So, you know, if you sort of take that concept of isolation and security isolation, what pluribus has within their fabric is the concept of micro-segmentation. And so now you can take that extended to the data processing unit and really have isolated micro-segmentation workloads, whether it's bare metal cloud native environments, whether it's virtualized environments, whether it's public cloud, private cloud hybrid cloud. So it really is a magical partnership between the two companies with their unified cloud fabric running on, on the DPU. >>I love about this conversation is it reminds me of when you have these changing markets, the product gets pulled out of the market and, and you guys step up and create these new solutions. And I think this is a great example. So I have to ask you, how do you guys differentiate what sets this apart for customers with what's in it for the, >>Yeah. So I mentioned, you know, three things in terms of the value of what the Bluefield brings, right? There's offloading, accelerating, isolating, that's sort of the key core tenants of Bluefield, so that, you know, if you sort of think about what, what Bluefield, what we've done, you know, in terms of the differentiation, we're really a robust platform for innovation. So we introduced Bluefield to last year, we're introducing Bluefield three, which is our next generation of Bluefields, you know, we'll have five X, the arm compute capacity. It will have 400 gig line rate acceleration for X better crypto acceleration. So it will be remarkably better than the previous generation. And we'll continue to innovate and add, you know, chips to our portfolio every, every 18 months to two years. So that's sort of one of the key areas of differentiation. The other is the, if you look at Nvidia and, and you know, what we're sort of known for is really known for our AI artificial intelligence and our artificial intelligence software, as well as our GPU. >>So you look at artificial intelligence and the combination of artificial intelligence plus data processing. This really creates the faster, more efficient, secure AI systems from, you know, the core of your data center all the way out to the edge. And so with Nvidia, we really have these converged accelerators where we've combined the GPU, which does all your AI processing with your data processing with the DPU. So we have this convergence really nice convergence of, of that area. And I would say the third area is really around our developer environment. So, you know, one of the key, one of our key motivations at Nvidia is really to have our partner ecosystem, embrace our technology and build solutions around our technology. So if you look at what we've done with the DPU, with credit and an SDK, which is an open SDK called Doka, and it's an open SDK for our partners to really build and develop solutions using Bluefield and using all these accelerated libraries that we expose through Doka. 
And so part of our differentiation is really building this open ecosystem for our partners to take advantage and build solutions around our technology. >>You know, it's exciting is when I hear you talk, it's like you realize that there's no one general purpose network anymore. Everyone has their own super environment Supercloud or these new capabilities. They can really craft their own, I'd say custom environment at scale with easy tools. Right. And it's all kind of, again, this is the new architecture Mike, you were talking about, how does customers run this effectively? Cost-effectively and how do people migrate? >>Yeah, I think that is the key question, right? So we've got this beautiful architecture. You, you know, Amazon nitro is a, is a good example of, of a smart NIC architecture that has been successfully deployed, but enterprises and serve tier two service providers and tier one service providers and governments are not Amazon, right? So they need to migrate there and they need this architecture to be cost-effective. And, and that's, that's super key. I mean, the reality is deep user moving fast, but they're not going to be deployed everywhere on day one. Some servers will ha have DPS right away. Some servers will have deep use in a year or two. And then there are devices that may never have DPS, right? IOT gateways, or legacy servers, even mainframes. So that's the beauty of a solution that creates a fabric across both the switch and the DPU, right? >>And by leveraging the Nvidia Bluefield DPU, what we really like about it is it's open and that drives cost efficiencies. And then, you know, with this, with this, our architectural approach effectively, you get a unified solution across switch and DPU workload independent doesn't matter what hypervisor it is, integrated visibility, integrated security, and that can create tremendous cost efficiencies and really extract a lot of the expense from, from a capital perspective out of the network, as well as from an operational perspective, because now I have an SDN automated solution where I'm literally issuing a command to deploy a network service or to create or deploy our security policy and is deployed everywhere, automatically saving the oper, the network operations team and the security operations team time. >>All right. So let me rewind that because that's super important. Get the unified cloud architecture, I'm the customer, but it's implemented, what's the value again, take, take me through the value to me. I have a unified environment. What's the value. >>Yeah. So I mean, the value is effectively. So there's a few pieces of value. The first piece of value is I'm creating this clean D mark. I'm taking networking to the host. And like I mentioned, we're not running it on the CPU. So in implementations that run networking on the CPU, there's some conflict between the dev ops team who owned the server and the NetApps team who own the network because they're installing software on the, on the CPU stealing cycles from what should be revenue generating CPU's. So now by, by terminating the networking on the DPU, we click create this real clean DMARC. So the dev ops folks are happy because they don't necessarily have the skills to manage network and they don't necessarily want to spend the time managing networking. They've got their network counterparts who are also happy the NetApps team, because they want to control the networking. 
>>And now we've got this clean DMARC where the dev ops folks get the services they need and the NetApp folks get the control and agility they need. So that's a huge value. The next piece of value is distributed security. This is essential. I mentioned earlier, you know, put pushing out micro-segmentation and distributed firewall, basically at the application level, right, where I create these small, small segments on an application by application basis. So if a bad actor does penetrate the perimeter firewall, they're contained once they get inside. Cause the worst thing is a bad actor penetrates at perimeter firewall, and it can go wherever they want and wreak havoc, right? And so that's why this, this is so essential. And the next benefit obviously is this unified networking operating model, right? Having an operating model, switch and server underlay and overlay, workload agnostic, making the life of the NetApps teams much easier so they can focus their time on really strategy instead of spending an afternoon, deploying a single V LAN for example. >>Awesome. And I think also from my standpoint, I mean, perimeter security is pretty much, I mean, they're out there, it gets the firewall still out there exists, but pretty much they're being breached all the time, the perimeter. So you have to have this new security model. And I think the other thing that you mentioned, the separation between dev ops is cool because the infrastructure is code is about making the developers be agile and build security in from day one. So this policy aspect is, is huge new control points. I think you guys have a new architecture that enables the security to be handled more flexible. Right. That seems to be the killer feature, >>Right? Yeah. If you look at the data processing unit, I think one of the great things about sort of this new architecture, it's really the foundation for zero trust it's. So like you talked about the perimeter is getting breached. And so now each and every compute node has to be protected. And I think that's sort of what you see with the partnership between pluribus and Nvidia is the DPU is really the foundation of zero trust. And pluribus is really building on that vision with allowing sort of micro-segmentation and being able to protect each and every compute node as well as the underlying network. >>And this is an illustration of how the market's evolving architectures are being reshaped and refactored for cloud scale and all this new goodness with data. So I got to ask how you guys go into market together. Michael, start with you. What's the relationship look like in the go to market with an Nvidia? >>Sure. I mean, we're, you know, we're super excited about the partnership. Obviously we're here together. We think we've got a really good solution for the market, so we're jointly marketing it. You know, obviously we appreciate that Nvidia is open that's, that's sort of in our DNA, we're about open networking. They've got other ISV who are gonna run on Bluefield too. We're probably going to run on other DPS in the future, but right now we're we feel like we're partnered with the number one provider of DPS in the world and super excited about making a splash with it >>In video, get the hot product. >>Yeah. So Bluefield too, as I mentioned was GA last year, we're introducing well, we now also have the converged accelerator. So I talked about artificial intelligence or artificial intelligence software with the Bluefield DPU, all of that put together on a converged accelerator. 
The nice thing there is you can run those workloads either way. So if you have an artificial intelligence workload and an infrastructure workload, you can run them separately on the same platform, or you can actually run artificial intelligence applications on the Bluefield itself. That's what the converged accelerator really brings to the table, and that's available now. Then we have Bluefield-3, which will be available late this year, and I talked about how much better that next generation of Bluefield is in comparison to Bluefield-2. So we will see Bluefield-3 shipping later on this year. And then our software stack, which I talked about, which is called Doka: we're on our second version, Doka 1.2, and we're releasing Doka 1.3 in about two months from now. That's really our open ecosystem framework that allows you to program the Bluefields. We have all of our acceleration libraries and security libraries packed into this SDK called Doka, and it really gives that simplicity to our partners to be able to develop on top of Bluefield. So as we add new generations of Bluefield, you know, next year we'll have another version, and so on and so forth. Doka is really that unified layer that allows Bluefield to be both forwards compatible and backwards compatible. Partners only really have to think about writing to that SDK once, and then it automatically works with future generations of Bluefields. So that's the nice thing around Doka. And then in terms of our go-to-market model, we're working with every major OEM, so later on this year you'll see major server manufacturers releasing Bluefield-enabled servers. So more to come. >>Save money, make it easier, more capabilities, more workload power. This is the future of cloud operations. Yeah. >>And one thing I'll add is we have a number of customers, as you'll hear in the next segment, that are already signed up and will be working with us for our early field trial starting late April, early May. We are accepting registrations. You can go to www.pluribusnetworks.com/eft if you're interested in signing up to be part of our field trial and providing feedback on the product. >>Awesome innovation in networking. Thanks so much for sharing the news. Really appreciate it. Thanks so much. Okay. In a moment, we'll be back to go deeper into the product, the integration, security, and zero trust use cases. You're watching theCUBE, the leader in enterprise tech coverage.
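Ami's "write once to the SDK" point follows a familiar pattern: application code targets a stable interface, and each new Bluefield generation plugs in as a backend. The sketch below shows only that general pattern; it is not the actual Doka SDK API, and the class and method names are invented:

```python
# Sketch of "write once to the SDK": the application programs a stable
# interface, and successive hardware generations slot in as new backends.
# This is only the general pattern; the names are invented, not the real SDK.

from abc import ABC, abstractmethod

class FlowAccelerator(ABC):
    """Stable interface an application programs against."""
    @abstractmethod
    def offload_flow(self, match: dict, action: str) -> None: ...

class Bluefield2Backend(FlowAccelerator):
    def offload_flow(self, match: dict, action: str) -> None:
        print(f"[BF2] offload {match} -> {action}")

class Bluefield3Backend(FlowAccelerator):
    # Newer silicon, same interface: the application code does not change.
    def offload_flow(self, match: dict, action: str) -> None:
        print(f"[BF3] offload {match} -> {action}")

def application_logic(accel: FlowAccelerator) -> None:
    # Written once; runs unmodified on either backend.
    accel.offload_flow({"dst_port": 443}, "encrypt")

application_logic(Bluefield2Backend())
application_logic(Bluefield3Backend())
```

That stable-interface layer is what lets partner code carry forward as the hardware underneath it changes.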

Published Date : Mar 4 2022


Patrick Jean, OutSystems | AWS re:Invent 2021


 

>>Welcome to theCUBE's continuing coverage of AWS re:Invent 2021. I'm Lisa Martin, and we are running one of the industry's most important and largest hybrid tech events with AWS and its ecosystem partners. This year, we have two live sets, two remote sites, and over 100 guests talking about the next decade in cloud innovation. And we're excited to be joined by Patrick Jean, the CTO of OutSystems. Patrick, welcome to the program. >>Thank you. I appreciate being one of those 100 guests. >>One of the 100, one of the elite 100, we'll say it like that, right? So OutSystems has some revolutionary news. You guys are saying, you know what, the developer experience needs to change. Tell us more. >>It does. I mean, it needs to change. I've been in the industry developing applications for too many years to mention, basically since I was 12 years old writing software, and looking back over that time and thinking about it, going the traditional software development route, so many applications took too long, were costly to build, had so much risk involved, and eventually didn't meet all the requirements. And if you look at the investment we make in software, which is important, I mean, software is a unique differentiator for businesses, that investment has such a high risk and a high cost, and that needs to change, and it needs to change because of the complexity that is inherent in that process. That is what we are doing: OutSystems is tackling that problem. And from a business standpoint, it must change. >>It must change, those are strong words there. So talk to me about what you're announcing. What were the gaps in the market, the customer feedback? Or were there any catalysts from the pandemic, going, we've got to change this developer experience, and this is the time? >>For sure. I mean, if you think about the pandemic: we were on a journey of digital transformation, we've been on this journey for a number of years, and it really accelerated it. The experiences that we have with each other, with you and me, we're not in the same studio today. There are reasons that we have made this experience remote: we have technology that can do it, and the pandemic accelerated that. And so much of the experiences we have are digital experiences. If you think about it, there's a device in between us. There's going to be a device in between all the people viewing what we're looking at, and the experience that they will have with us will basically be surfaced through an application on that device. The pandemic has really accelerated that. And that's the area that we play in, obviously, with what's considered low-code application development. >>And if you just think about application development in general, that's what powers all of these experiences. And going back to that statement that it needs to change: if we need these experiences to be diverse, if we need these experiences to be meaningful, if we need to make sure that when people engage with that device it's something that brings delight and pleasure to them, we need developers across the board investing in that. Today, there is a very constrained market for professional developers because of the inherent complexity in software development. And if you think about it, you're almost limiting access to the people who can create those experiences, and that's not a good situation. There are about 25 million developers in the world that would consider themselves developers today, and 7, 8, 9, 10 billion devices out there. Think of that disparity between those two numbers.
There are about 25 million developers in the world who would consider themselves developers today, and seven, eight, nine, ten billion devices out there. Think of the disparity between those two numbers. >> And so we need a larger number of people to actually develop applications, so that experience can be much more diverse. We need to expose development to many more people. That is the problem with software development today: it is complex, it is too specialized, and it is too prone to failure when you put it all together. So either you shy away from development as an organization or as an individual, or you go on these very long development cycles to actually create these applications. What we do is take the approach of making it very simple to get into. Some terms for it are citizen developer and low-code; basically all they're saying is let's reduce the risk of development. Let's go to a process where we make it accessible to more and more people. You can develop applications with lower risk, you can build change into that process, and you can get value to end users as rapidly as possible. That is the value proposition, and that is what needs to change. >> Strong value proposition, well said, Patrick, talking about reducing the complexity and the risk as well. So go ahead and crack open what you guys are actually announcing today. >> Yeah, for sure. We've been doing this for many years. We have 14 million plus end users using applications that have been developed with the OutSystems platform. What we're announcing is taking some of the great benefits we have in what you'd consider the first part of that low-code process, where you have a developer with an idea and a canvas in front of you. You're an artist, right? That's what you are as a developer. And so you go in and you create that application. We've been doing this for many years and it works really well. The thing we're improving upon now is the ability to do that and scale it out to millions, tens of millions of end users. So if you think about the inherent speed of developing an application using a platform like OutSystems, we're taking that same concept and rolling it into an internet-scale application hosting architecture. So any developer that uses OutSystems gets something comparable to a traditional development team with application architects, cloud architects, security engineers, database engineers, a whole team of very smart individuals that generally only the biggest technology companies in the world can put together. Most companies can't do that; you don't have access to that type of skill set. So we're providing that with Project Neo, which is what we're announcing today at our user and customer conference. It's this brand new platform that allows you to build these applications at scale, and it is initially built on AWS using all the great AWS technologies. If you look at what AWS has done and provided to developers today, it's amazing, absolutely amazing, the amount of technology you can leverage. It's also daunting, because as a traditional developer you have to go in and choose: what do you do?
There's just massive cognitive load up front when you go to design an application: what type of messaging, what data store, how do I host my application, what kind of network security do I use? We're taking all that undifferentiated heavy lifting off of the developers and putting it into the Project Neo platform, allowing a single developer or a small group of developers to leverage that best-in-class architecture on AWS today. >> So when you're talking to developers, what are some of the things you describe as the unique differentiators of Project Neo? It sounds like this was really an apt time for change, but when you're talking to those folks, what do you say? One, two, three, these are the things that make Project Neo unique. >> Yeah. So the first is: don't worry about the application architecture. Like I mentioned, when you go in with the idea, the concept of that application and what it means to deliver some value, whether it's to a business or a hobby or whatever, however you're developing the application, you're doing it for a reason. You want that value to come out as quickly as possible; you want that experience. So the first thing is you don't have to worry about the architecture anymore. In the past, if it was a very large application with millions and millions of end users, you'd have to think about how to structure it and how to put it together. That concern is removed from you in this process. The other thing is we solve the problem of software disintegration. With traditional development, when you develop an application and get it into the hands of end users, it immediately starts to disintegrate. There will be bugs that appear, security flaws that come up, and services you use will become deprecated. Cloud providers, whether AWS or Azure or Google, will swap out cloud services with different services behind the scenes, and there will be new versions of them. That is software disintegration. As soon as you develop software today, with all of these beautiful cloud services and components you use, something will become outdated almost by the time you release it. A lot of times with traditional software development projects, you literally start with some version of a component, and before you can get the application out, something has become outdated. We solve that issue, what I like to call software disintegration. We, as OutSystems, invest in that platform, so when we need to change out those components, those services, those versions, fix security flaws, fix bugs, we do that, and it's seamless. Your application does not have to be rewritten. You do not have to go through that process as a developer on OutSystems the way you would as a traditional developer; we solve that software disintegration issue. It's very empowering for developers not to have to worry about that. Look at the numbers today on how much is invested in innovation versus maintenance: a lot of companies start out at 70% innovation and 30% maintenance, and over time that flips, and you get to 30% of your time spent on innovation and development and 70% on maintenance. We remove that burden.
>> Those are some really powerful statements, Patrick, and I really liked the way you described software disintegration; I've actually never heard that term before. It reminded me of buying a brand new car: you drive it off the lot and the value goes down right away. And on the consumer side, we know that as soon as we buy the newest iPhone, the next one's going to be out, or some part of it is going to be outdated. In terms of technical debt, I was reading a stat that technical debt is expected to cost businesses 5 trillion US dollars over the next 10 years. How does OutSystems help customers address the challenges of technical debt and even reduce it? >> Yeah. If you think about the truest sense of technical debt, it's a decision you make in the development process to load up the future with work you don't want to do right now. We're solving that issue so that you don't even have to make that decision. It goes back to that concept of removing the cognitive load of: do I get the software out right now, or do I get it out the right way? That's really what technical debt is saying: I need to get it out now, and there are things that would be better done now, but I'm going to push them out into the future. You don't have to do that today with us. So what happens with OutSystems? We invest in that platform, and this is hard; it is not an easy thing to do. This is why we have some of the best and brightest engineers focusing on this process. At the heart of it, not to get too technical, is what we call the true change engine within our platform. We go through and look at all of the changes you need to make. So if you think of that concept of technical debt, of "I want to get this into the hands of end users, but I don't want to invest the time to do it right," with the OutSystems platform it's always done right. We look at the intent of your change; it's a process where you tell us the intent. When you as an application developer are designing an application, you tell us the intent of the application: the look and feel, maybe some business processes, maybe some integrations. We determine the best way to do that, and then, from a software disintegration standpoint, we continue to invest in all the right ways to do it the best way possible. We have customers that wrote applications 10, 15 years ago, and they're still using our platform with those same applications. They've added to them, but they have not rewritten them. If you think about the normal traditional development process, the technical debt incurred over that kind of lifetime would be enormous. With us there's no technical debt; they're still using the same application, they have simply added capabilities to it. We invest in that platform so they don't have to. >> So big business outcomes there, obviously from a developer productivity perspective, but also from a company-wide perspective the ability to eliminate technical debt, some significant opportunities there. Talk to me about the existing OutSystems customers. When are they going to be able to take advantage of this?
What is the migration or upgrade path that they can take? >> Yeah. It was very important to me and the team at OutSystems to be able to innovate for customers without disrupting customers. We've probably all been down the path of: the great new technology is awesome, but actually utilizing that technology when you're a current customer creates pain. So we've invested heavily in making sure the process is pain-free. You can use Project Neo now, as we are announcing it in public preview, and we will release it to GA in the first quarter of next year. Over this timeframe you'll be able to get in and try it out while you continue to use your current version, which is OutSystems 11, what we affectionately call O-11. You can continue to use that today, side by side and in coexistence with Project Neo. And Project Neo is a code name; we will have an official product name at launch, but Neo is kind of an unofficial mascot, so we call it Project Neo as a bit of a fun thing. You can use it side by side, and in the future you'll be able to migrate applications over, or you can just continue to coexist. We see a very long lifetime for OutSystems 11. Project Neo is a different platform with different technology behind the scenes: Kubernetes-based, Linux containers. We went in and said, re-architect, re-imagine: how would you do this if you had the best and brightest engineers and architects, which we do, very smart people? And we did that, and we did it for our customers. So Neo is there, and OutSystems 11 is still a great choice. If you have applications on it, you can keep using it. We anticipate that customers will develop on both side by side, which is what some customers in preview do today: they develop on 11, they develop on Neo, and they will continue to do that. We are dedicated to making sure there's no disruption and no pain in that process. And when customers are ready to migrate over, if that's what they choose, we'll help them migrate. >> You make it sound easy. And I was wondering if Project Neo had anything to do with the new Matrix movie; I just saw the trailer for it the other day. >> It was a happy coincidence. It is not easy, let me be clear. It is something we have been working on for three years, and this last year it really kicked into high gear. There's a lot of behind-the-scenes work for us, but once again, that's our value proposition: we do the hard work so developers and customers don't have to. But no relation, although I do love the Matrix movies, so it's a nice coincidence. >> It is a nice coincidence. Last question, Patrick, for you: as we wrap up calendar year 2021 and head into 2022, I think we're all very hopeful that 2022 will be a better year than the last two. What are some of the things you see as absolutely critical for enterprises? What are they most concerned about right now? >> Yeah, look, it has obviously been a crazy couple of years.
If you think about what enterprises want, they want to provide great experiences for their customers and great experiences for their employees. Once again, digital transformation, and we don't even really talk about digital transformation anymore because we're in it. I think customers need to make sure that the digital experiences they provide are the best possible experiences, because these are differentiators: differentiators for employees and differentiators for customers. I believe software is one of the big differentiators for businesses today and going forward, and that will continue to be so. Where businesses may have invested in supply chains or in certain types of technologies, they will continue to invest in software, because software is that differentiator. And if you look at where we fit: you can go buy some great software as a service off the shelf, but in the end you're just like every other business, you bought the same thing everybody else bought. You can go the traditional development route, where you invest a bunch of money, it's high risk, it takes a long time, and once again you may not get what you want. We believe what is most important to businesses is getting that unique software that fits like a glove, that is great for employees and great for their customers, and that is a unique differentiator for them. I really see that being big in 2022 and going forward. Those are the legs for that type of investment, and the return companies get on it is huge. >> I agree with you on software as a differentiator. We're seeing every company become a software company in every industry these days, first to survive the last 20 months and now to be competitive; it's really kind of a must-have. So Patrick, thank you for joining me on the program and talking about Project Neo, GA in the first quarter of calendar year 2022, exciting stuff. We appreciate your feedback and your insights, and congratulations on Project Neo. >> Thanks, Lisa. Appreciate it. >> For Patrick Jean, I'm Lisa Martin, and you're watching theCUBE's continuous coverage of re:Invent 2021.

Published Date : Nov 30 2021

SUMMARY :

Lisa Martin and we are running one of the industry's most important and largest hybrid tech events with I appreciate being one of those 100 guests, you know, what developer experience needs to change? So many applications that take too long was, you know, So talk to me about what you're that we have with each other, with you and me, we're not the same studio today. And going back to that, you know, statement about that, it needs to change if we need these experiences And so either you shy away from that as an organization or as an individual to So, so go ahead and crack crack open what you guys are actually announcing today. And so you go in and you create The amount of technologies that you can leverage. So when you're talking to developers, what are some of the things that you described as the unique differentiators And so that first thing is you don't have to worry about the architecture anymore. it literally is you start with some version or some component before you can get that out You do not have to go through that process as a tradition, as a developer on our systems like you And it kind of reminded me of, you know, when you buy a brand new car, it's a decision that you make in the development process to basically, So if you think of that concept of technical debt of like, oh, I want to get this into the hands of man And so if you think about the normal traditional development process, the technical debt incurred When are they going to be able to take is pain-free so you can use project Niamh. as far as engineers, architects, um, you know, we have, which we do, you know, very smart in those people. And so Neo is that how systems 11 And I was wondering if project Neo had anything to do with the new matrix movie, And, um, you know, a lot of behind the scenes work, obviously for us, but once again, What are some of the things that you see as absolutely critical And I think that customers need to make sure that the experiences they provide And I really see that in 2022, that's going to be big and, I agree with you on that in terms of software as a differentiator.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Patrick | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Patrick Jean | PERSON | 0.99+
OutSystems | ORGANIZATION | 0.99+
Patrick Jeanne | PERSON | 0.99+
30% | QUANTITY | 0.99+
2022 | DATE | 0.99+
millions | QUANTITY | 0.99+
three years | QUANTITY | 0.99+
Lisa | PERSON | 0.99+
9 | QUANTITY | 0.99+
This year | DATE | 0.99+
2021 | DATE | 0.99+
70% | QUANTITY | 0.99+
7 | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
8 | QUANTITY | 0.99+
today | DATE | 0.99+
two live sets | QUANTITY | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
one | QUANTITY | 0.99+
100 | QUANTITY | 0.99+
One | QUANTITY | 0.98+
first | QUANTITY | 0.98+
100 guests | QUANTITY | 0.98+
two numbers | QUANTITY | 0.98+
both | QUANTITY | 0.98+
Neos | ORGANIZATION | 0.98+
last year | DATE | 0.98+
Lennox | ORGANIZATION | 0.98+
next decade | DATE | 0.97+
first quarter of next year | DATE | 0.97+
pandemic | EVENT | 0.96+
two remote sites | QUANTITY | 0.96+
about 25 million developers | QUANTITY | 0.96+
Kubernetes | ORGANIZATION | 0.95+
5 trillion us dollars | QUANTITY | 0.94+
millions of end-users | QUANTITY | 0.92+
10 millions of end-users | QUANTITY | 0.92+
12 years old | QUANTITY | 0.92+
first thing | QUANTITY | 0.91+
over 100 guests | QUANTITY | 0.9+
single developer | QUANTITY | 0.89+
last 20 months | DATE | 0.86+
10, 15 years ago | DATE | 0.86+
Azure | ORGANIZATION | 0.85+
calendar year 22 | DATE | 0.84+
14 million plus | QUANTITY | 0.83+
CTO | PERSON | 0.82+
20 | DATE | 0.81+
11 | TITLE | 0.81+
22 | DATE | 0.8+
next 10 years | DATE | 0.79+
10 billion devices | QUANTITY | 0.79+

AWS re:Invent 2021, OutSystems, Patrick Jean


 

(Upbeat intro music) >> Welcome to theCUBE's continuing coverage of AWS re:Invent 2021. I'm Lisa Martin and we are running one of the industry's most important and largest hybrid tech events with AWS in this ecosystem partners this year. We have two live sets, two remote sets over 100 guests talking about the next decade in cloud innovation. And we're excited to be joined by Patrick Jean the CTO of OutSystems, Patrick welcome to the program. >> Thank you, I appreciate being one of those 100 guests. >> One of the 100, one of the elite 100, we'll say it like that, right? >> Yes. >> So OutSystems has some revolutionary news. You guys are saying, you know what, developer experience needs to change, tell us more. >> It does I mean, it needs to change. And I've been in the industry developing applications for too many years to mention, basically since I was 12 years old writing software and going over that time and thinking about it, doing the traditional software development route. So many applications that take too long was costly to build, so much risk involved in it. Eventually it didn't meet all the requirements. And if you look at the investment we make in software, which is important, I mean, software is a unique differentiator for businesses. That investment has such a high-risk and a high cost and that needs to change. And it needs to change just because of the complexity that is in that process inherent in it. That's and that is what we are doing in OutSystems is tackling that problem. And from a business standpoint, it must change. >> It must change that is strong words there. So talk to me about what you're announcing what were the gaps in the market, customer feedback, were there any catalysts from the pandemic going we've got to change this developer experience and this is the time. >> For sure. I mean, if you think about from the pandemic and I mean, we were on a journey for digital transformation. We've been on this journey for a number of years the pandemic really accelerated that the experiences that we have with each other, you and me are not in the same studio today. I mean, there reasons that we use this experience remotely. We have a technology that can do it. The pandemic accelerated that. And so, so much of the experiences we have are digital experiences. And if you think about it, there's a device in between us. There's going to be a device in between all the people viewing what we're looking at. That experience that they will have with us will be basically surfaced through an application on that device. And the pandemic has really accelerated that. And that's an area that we play in, obviously for what's considered low-code application development. And if you just think about application development in general, that's what powers all of these experiences. And going back to that statement about that it needs to change. If we need these experiences to be diverse, if we need these experiences to be meaningful, if we need them to make sure that when people engage as far as what that device is something that brings, delight and pleasure to them. We need developers across the board investing in that. Today there is a very constrained market for professional developers because of the inherent complexity in software development. And so if you think about how that's almost, almost here limiting access to the people who can create those experiences, that's not a good situation. 
There's about 25 million developers in the world that would consider themselves developers today, and seven, eight, nine, 10 billion devices out there. Think of that disparity between those two numbers. And so we need a larger number of people to actually develop applications so that experience can be much more diverse. We need to expose development to many more people. That is the problem today with software development: it is complex, it is too specialized, and it's too prone to failure when you put it together. And so either you shy away from that as an organization or as an individual, or you go on these very long development cycles to actually create these applications. What we do is we take the approach of let's make it very simple to get into. Sometimes we call it citizen developer, low-code; basically all they're saying is let's reduce the risk of development. Let's go into a process where we make it accessible to more and more people. You can go through and develop applications with lower risk. You can build change into that process. You can get value into end users as rapidly as possible. So that is the value proposition, that is what needs to change. >> Strong value proposition well said, Patrick. Talking about reducing the complexity, the risk as well. So go ahead and crack open what you guys are actually announcing today. >> Yeah, for sure. So we've been doing this for many years. We have software development, we have 14 million plus end-users using applications that have been developed with the OutSystems platform. What we're announcing is taking some of the great benefits that we have to what you'd consider as the first part of that low-code process, where you have a developer that has an idea, and there's a canvas in front of you. You're an artist, right, with a canvas, that's what you are as a developer. And so you go in and you create that application. We've been doing this for many years and it works really well. The thing that we're improving upon now is the ability to do that and scale that out to millions of end-users, 10 millions of end-users. So if you think about that inherent speed of developing an application using a platform like OutSystems, we're taking that same concept and rolling that into an internet-scale application hosting architecture. So any developer that uses OutSystems basically gets what would be comparable to a traditional development team that has application architects, cloud architects, security engineers, database engineers, a whole team of very smart individuals that generally the biggest technology companies in the world can put together. Most companies can't do that, you don't have access to that type of skillset. And so we're providing that with Project Neo, which is what we're announcing today at our user conference and customer conference. It's this brand new platform that allows you to build these applications at scale. And this is initially built on AWS using all the great AWS technologies. If you look at what AWS has done and provided to developers today, it's amazing. It is absolutely amazing. The amount of technologies that you can leverage. It's also daunting because as a traditional developer, you have to go in and choose, what do you do? It's like, there's just massive cognitive load upfront when you go in to design an application. What type of messaging, what's the data store, well, how do I host my application?
What type of network as far as security do I use? We're taking all that heavy lifting, all that undifferentiated heavy lifting off of the developers, putting it into the Project Neo platform. Allowing a single developer or a small group of developers to actually leverage that best in class architecture on AWS today. >> So when you're talking to developers, what are some of the things that you describe as the unique differentiators of Project Neo? It sounds like this was really apt and apt time for change. But when you're talking to those folks, what do you say you know, one, two three, these are the things that make Project Neo unique. >> Yeah, so the first is don't worry about the application architecture. Like I mentioned when you go in, the idea, the concept of that application and what it means to deliver some value, whether it's into a business or a hobby or whatever. I mean, however you're developing application, you're doing it for a reason. You want that value to come out as quick as possible. You want that experience. And so that first thing is, you don't have to worry about the architecture anymore. So in the past you'd have to think about if it's a very large application, it's millions and millions of end-users. How do you structure that? How do you put it together? That concern is removed from you in that process. The other thing is we solve the problem of software disintegration. So with traditional development, when you develop an application and you get it into the hands of end users it immediately starts to disintegrate. So there will be bugs that will appear. There will be as far as security flaws that will come up services that you use will become deprecated. We'll swap out cloud services by AWS or Azure or Google. swap out cloud services with different services behind the scenes. Version, there'll be new versions of those that is software disintegration. As soon as you develop software today and all of these beautiful cloud services that you use and components. Something will become outdated almost by the time you release it. A lot of times with software development projects, it literally is you start with some version or some component before you can get that out in a traditional mode, something becomes outdated. We solved that issue. What I like to call software disintegration. We, as far as OutSystems, ensure we invest in that platform. And so when we may need to change out those components, those services, those versions fix is for security flaws, fixed bugs, we do that and it's seamless. And so your application, you do not have to rewrite your application. You do not have to go through that process as a tradition, as a developer on OutSystems like you would, as your traditional developer. We solve that software disintegration issue. So it's very empowering to developers to not have to worry about that. There are many, you look at the numbers today about how much is invested in innovation versus maintenance. A lot of companies start out at 70% innovation, 30% as far as maintenance, and then overtime that flips. And you'll get to 30% of your time spent on innovations development, 70% maintenance, that burden, we remove that burden. >> Those were some really powerful statements Patrick that you made and I really liked the way that you described software disintegration. I've actually never heard that term before. And it kind of reminded me of when you buy a brand new car, you drive it off the lot, the value goes down right away then before you even get things out. 
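To make the undifferentiated heavy lifting Patrick describes above a bit more concrete, here is a minimal, hypothetical sketch of the kind of explicit messaging, data-store, and hosting decisions a traditional developer takes on before writing any application logic, using the AWS SDK for Python (boto3). The region, queue name, and table name are illustrative assumptions rather than anything from OutSystems or this interview; a managed platform of the kind discussed here would make and maintain these choices on the developer's behalf.

```python
# Illustrative only: the kind of messaging, data-store, and hosting choices a
# traditional developer must make up front (names and region are assumptions).
import boto3

REGION = "us-east-1"  # hosting decision: which region?

# Messaging decision: which queueing service, what visibility timeout?
sqs = boto3.client("sqs", region_name=REGION)
queue = sqs.create_queue(
    QueueName="orders-queue",
    Attributes={"VisibilityTimeout": "60"},
)

# Data-store decision: which database, what key schema, what capacity mode?
dynamodb = boto3.client("dynamodb", region_name=REGION)
table = dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

print(queue["QueueUrl"])
print(table["TableDescription"]["TableArn"])
```

Every one of these parameters is a decision the developer then owns and maintains, which is the burden the interview argues a managed, low-code platform should absorb.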
And on the consumer side, we know that as soon as we buy the newest iPhone, the next one's going to be out, or there's some part of it, that's going to be outdated. In terms of technical debt, I was reading a stat that technical debt is expected to reach in costs of businesses, 5 trillion, US dollars over the next 10 years. How does OutSystems help customers address the challenges with technical debt and even reduce it? >> Yeah, I mean if you think about in the kind of the truest sense of technical debt, it's a decision that you make in the development process to basically load up the future with some work that you don't want to do right now. And so we're solving that issue where not only, you don't even have to make that decision. So you can go back to that concept of removing that cognitive load of, do I get the software out right now or do I get it out in the right way? And that's really what technical debt, technical debt is saying I need to get it out now. And there are some things I want to do that it'd be better if I did them now, but I'm going to go ahead and push that out into the future. You don't have to do that today with us. And so what happens with OutSystems is we invest in that platform. And this is hard. I mean, this is not an easy thing to do. This is why we have some of the best and brightest engineers focusing on this process at the heart of this, not to get too technical, but the heart of this is what we call the true change engine within our platform. We go through and we look at all of the changes that you need to make. So you think of that concept of technical debt of like, ah, I want to get this in the hands of end users, but I don't want to invest in the time to do something right. It's always done right, as far as with the OutSystems platform. So we take that, we look at the intent of your change. So it's like a process where you tell us the intent. When you as a application developer, you're designing an application, you tell us the intent of the application is to look and feel. It could be some business processes this could be some integrations. We determine what's the best way to do that and then once again, from a software disintegration standpoint, we continue to invest in all the right ways to do that the best way possible. And so, I mean, we have customers that have written applications that's 10, 15 years ago. They're still using our platform with those same applications they've added to them, but they have not rewritten those applications. And so if you think about the normal traditional development process, the technical debt incurred over that type of lifetime would be enormous. With us there's no technical debt. They're still using the same application they've simply added capabilities to it. We invest in that platform so they don't have to. >> So big business outcomes down, obviously from a developer productivity perspective, but from the company wide perspective, the ability to eliminate technical debt, some significant opportunities there. Talk to me about the existing OutSystems customers. When are they going to be able to take advantage of this? What is the migration or upgrade path that they can take and when? >> Yeah and so it is very important to me and the team as far as OutSystems to be able to integrate, to innovate as far as for customers, without disrupting customers. And we've probably all been through this path of great new technology is awesome. 
But then to actually utilize that technology when you're a current customer, it creates pain. And so we've invested heavily in making sure that the process is pain-free. So you can use Project Neo; we are announcing it in public preview as of now, and then we will release it to GA in the first quarter of next year. Over this timeframe you'll be able to get in and try it out, and continue to use your current version, which is OutSystems 11, what we affectionately call O-11. The OutSystems 11 version you can continue to use today, side by side and in coexistence with Project Neo. And Project Neo is a code name. So we will have an official product name at launch, but Neo is kind of an unofficial mascot, so we call it Project Neo as a little bit of a fun name. You can use it side by side, and then in the future you'll be able to migrate applications over, or you can just continue to co-exist. I mean, we see a very long lifetime for OutSystems 11. Project Neo is a different platform, different technology behind the scenes: Kubernetes-based Linux containers. Once again, we went in and looked at it and said, re-architect, re-imagine, how would you do this if you had the best and brightest engineers and architects, which we do, very smart people? And we did that, and we did that for our customers. And so Neo is there, and OutSystems 11 is still a great choice. If you have applications on it, you can use it. And we anticipate that customers will actually develop on both side by side, which we have some customers in preview doing today. That's the process that they have: they will develop on 11, they will develop on Neo, and they will continue to do that. We are dedicated to making sure that there's no disruption and no pain in that process. And then when customers are ready to migrate over, if that's what they choose, we'll help them migrate over. >> You make it sound easy. And I was wondering if Project Neo had anything to do with the new Matrix movie; I just saw the trailer for it the other day, I wonder if this is related. >> It was a happy coincidence. It is not easy, let me be clear. It is something we have been working on for three years, and really this last year it kicked into high gear. There's a lot of behind-the-scenes work, obviously, for us, but once again, that's our value proposition: we do the hard work so developers and customers don't have to do that hard work. But no relation to Neo, although I do love the Matrix movies, so it's a nice coincidence. (Lisa laughs) >> It is a nice coincidence. Last question, Patrick, for you: as we wrap up calendar year 2021 and head into 2022, I think we're all very hopeful that 2022 will be a better year than the last two. What are some of the things that you see as absolutely critical for enterprises? What are they most concerned about right now? >> Yeah, look, it has obviously been a crazy couple of years. And if you think about what enterprises want, I mean, they want to provide a great experience for their customers and a great experience for their employees. Once again, digital transformation, where you don't even really talk about digital transformation anymore because we're in it.
And I think that customers need to make sure that the experiences they provide these digital experiences are the best possible experiences. And these are differentiators. These are differentiators for employees. These are differentiators for customers. I believe that software is one of the big differentiators for businesses today and going forward. And that will continue to be so where businesses may be invested in supply chains, invested in certain types of technologies. Business will continue to invest in software because software is that differentiator. And if you look at where we fit, you can go, you can go buy, some great set of software, my software as a service off the shelf. In the end, you're just like every other business you bought the same thing that everybody else had bought. You can go the traditional development route, where you invest a bunch of money, it's a high risk, takes a long time. And once again, you may not get what you want. We believe what is most important to businesses. Get that unique software that fits like a glove that is great for employees, it's great for their customers. And it is a unique differentiator for them. And I really see that in 2022, that's going to be big and going forward. They're the legs for that type of investment that companies make and they return on that is huge. >> I agree with you on that in terms of software as a differentiator. Now we're seeing every company become a software company in every industry these days to be, first to survive in the last 20 months and now to be competitive, it's really kind of a must have. So, Patrick thank you for joining me on the program, talking about Project Neo, GA in the first quarter of calendar year 22. Exciting stuff we appreciate your feedback and your insights and congratulations on Project Neo. >> Thanks, Lisa, appreciate it. >> For Patrick Jean, I'm Lisa Martin, and you're watching theCUBEs continuous coverage of re:Invent 2021. (Outro music)
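Patrick notes above that Project Neo runs on Kubernetes-based Linux containers. As a rough illustration of the kind of deployment primitive such a platform would automate for its users, the sketch below uses the official Kubernetes Python client to declare a small containerized web app. The app name, image, and replica count are hypothetical and are not drawn from OutSystems documentation; the point is only to show what "scale-out on Kubernetes" looks like when a developer has to express it by hand.

```python
# Hypothetical sketch: declaring a small containerized app on Kubernetes,
# the kind of step a managed low-code platform would handle automatically.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig is present

app_name = "neo-style-demo-app"  # illustrative name, not a real OutSystems artifact

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name=app_name),
    spec=client.V1DeploymentSpec(
        replicas=2,  # the scale-out knob a platform would tune for you
        selector=client.V1LabelSelector(match_labels={"app": app_name}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": app_name}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.21",  # stand-in image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)
```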

Published Date : Nov 16 2021

SUMMARY :

the CTO of OutSystems, Patrick being one of those 100 guests. You guys are saying, you know what, and a high cost and that needs to change. So talk to me about what you're announcing So that is the value proposition, what you guys are as platform that allows you as the unique differentiators almost by the time you release it. the next one's going to be out, it's a decision that you make the ability to eliminate technical debt, And that's the process that they have. Neo had anything to do with And a lot of behind the that you see as absolutely And if you think about I agree with you on that and you're watching

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Patrick | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Patrick Jean | PERSON | 0.99+
millions | QUANTITY | 0.99+
30% | QUANTITY | 0.99+
OutSystems | ORGANIZATION | 0.99+
14 million | QUANTITY | 0.99+
Lisa | PERSON | 0.99+
70% | QUANTITY | 0.99+
Project Neo | TITLE | 0.99+
three years | QUANTITY | 0.99+
2022 | DATE | 0.99+
two live sets | QUANTITY | 0.99+
5 trillion | QUANTITY | 0.99+
100 guests | QUANTITY | 0.99+
nine | QUANTITY | 0.99+
eight | QUANTITY | 0.99+
two remote sets | QUANTITY | 0.99+
one | QUANTITY | 0.99+
One | QUANTITY | 0.99+
today | DATE | 0.99+
100 | QUANTITY | 0.99+
Today | DATE | 0.99+
two numbers | QUANTITY | 0.99+
seven | QUANTITY | 0.99+
first | QUANTITY | 0.98+
this year | DATE | 0.98+
Google | ORGANIZATION | 0.98+
Linux | TITLE | 0.97+
about 25 million developers | QUANTITY | 0.97+
iPhone | COMMERCIAL_ITEM | 0.97+
last year | DATE | 0.97+
first part | QUANTITY | 0.97+
10 | DATE | 0.97+
next decade | DATE | 0.97+
both | QUANTITY | 0.97+
first quarter of calendar year 22 | DATE | 0.95+
two | QUANTITY | 0.95+
Neo | TITLE | 0.94+
pandemic | EVENT | 0.94+
11 | TITLE | 0.92+
10 billion devices | QUANTITY | 0.91+
first quarter of next year | DATE | 0.9+
single developer | QUANTITY | 0.9+
first thing | QUANTITY | 0.9+
theCUBE | ORGANIZATION | 0.9+
over 100 guests | QUANTITY | 0.88+
next 10 years | DATE | 0.88+

2021 095 VMworld Matthew Morgan and Steven Jones


 

>> Welcome to theCUBE's coverage of VMworld 2021. I'm Lisa Martin, with two guests joining me next. Matt Morgan is here, Vice President, Cloud Infrastructure Business Group at VMware, and Steven Jones joins us as well, Director of Services at AWS. Gentlemen, it's great to have you on the program. >> Thank you, Lisa. >> Glad to see everyone's doing well. Here we are, virtual. So we are just around the four-year anniversary of VMware Cloud on AWS; can't believe it's been four years since 2017. Matt, talk to us about the VMware-AWS partnership and how it's progressed over that time. >> The partnership has been fantastic and it's evolved. We announced VMware Cloud on AWS general availability all the way back at VMworld 2017, and we've been releasing new features and capabilities every other week, with 16 major platform releases and 300 features as customers have requested them. So it's been an incredible co-engineering relationship with AWS. We've also expanded our go-to-market by announcing a resale program in which AWS can resell VMware Cloud on AWS; we did that back in 2019. And in 2020 we announced that AWS is VMware's preferred public cloud partner for vSphere-based workloads, and VMware is AWS's preferred service for vSphere-based workloads. >> So as you said, Matt, a tremendous amount of evolution in just a short four-year timeframe. Steven, talk to me about the partnership through AWS's lens. >> Yeah, you bet. Look, I agree with Matt that the partnership has been fantastic, and it's just amazing to see how fast four years has gone. I really think AWS and VMware are a really good example of how two technology companies can work together for the benefit of our mutual customers. As Matt indicated, VMware is our preferred service for vSphere-based workloads, and we're broadly working together as a single team across both engineering and go-to-market functions to help customers drive business value from the investments they've made over the years, and also as they work to transform their businesses into the future with cloud technology. >> Let's talk about digital transformation. That's a term we've been talking about for many years on this program and at every event we've all been at, but what we've seen in the last year and a half is a massive acceleration. Talk to me about how VMware and AWS are helping customers facilitate that digital transformation. >> Our customers see modern IT infrastructure as the core pillar of a digital transformation strategy, and public cloud has been a digital transformation enabler for organizations. That's because there are so many benefits when they embrace the public cloud, including the ability to elastically consume infrastructure as required, the ability to employ a pay-as-you-go financial model, and the ability to reduce operational overhead, which saves monetary costs but also provides more flexibility.
But the big driver now is the ability to embrace innovative cloud services, and those services help accelerate application development, deployment, and management. VMware Cloud on AWS is a prime example of such an offering, which not only provides these benefits but enhances them with operational consistency, working the same way their IT architecture works today. That gives them the familiarity and enterprise robustness that VMware technologies are known for, while being able to maximize the power of the global AWS cloud. >> And every year, from a customer adoption perspective, that's doubling. Steven, walk us through a couple of customer examples that really highlight the value of VMC on AWS. >> Yeah, I've got a couple here. I think Kiko Milano is a good one. They're an Italian company; they sell cosmetics and beauty products through about 900 retail stores in 27 different markets, so quite large. But they found that their on-premises data center and outsourcing partner were just too inflexible for the changing needs of their company. Within four months, Kiko migrated all of their core workloads to Amazon EC2, and they were particularly surprised how easy it was to migrate over 300 servers to the VMware Cloud on AWS offering. And this is key: because they were leveraging the same platform they were used to, which was VMware, the Kiko team didn't have to perform any testing or modify any existing applications. They also didn't have to retrain their teams, because they were already up-skilled on the VMware technology. So again, we think it's the best of both worlds. Customers like Kiko can come and use VMware Cloud on AWS, consolidate their server footprint, and also take advantage of a hyperscale platform. That's pretty cool. Another customer is S&P Global Ratings. That company provides high-quality market intelligence in the form of credit ratings, research, and thought leadership to help market participants make better financial decisions, and who doesn't want to make a better financial decision, right? In order to accelerate their business growth and globalization and really meet new business capabilities, they knew they needed to move a hundred percent to the cloud, and they wanted to know how they were actually going to do that. They also had an aging data center with system outages that were becoming more frequent, which raised concerns that they might face penalties from the SEC in the future. They didn't want that. So over a period of about eight months, think about this, eight months, they moved 150 financial apps to AWS leveraging VMware Cloud on AWS. Pretty impressive. They reduced technical debt from legacy systems that were hosted on Sun Solaris, Oracle Exadata, and AIX, and they're now able to meet the demands of their business. The fun part here is that they've been meeting their uptime needs a hundred percent of the time since they moved these workloads to VMware Cloud on AWS. So it's pretty exciting to see customers on this kind of journey. >> Absolutely impressive journeys, and short time periods to do a massive change. It sounds like the familiarity with VMware and the console is a huge facilitator of the speed of migration and of folks being able to get up and running. Steven, talk to me about some of the trends you're seeing in organizations like the customers you just mentioned. >> Yeah.
There are some emerging trends, for sure. A lot of customers want to leverage the same cloud operating models, but also in their own data centers, so they can take advantage of the agility and innovation of cloud while also meeting requirements that sometimes keep them from adopting cloud. You can think of workloads that have low-latency requirements, or that need to process large volumes of data locally. Other times customers tell us they really need the flexibility to run workloads in a particular area that has data sovereignty or residency requirements. So when we talk to customers, they tell us that not only do they want to minimize their need to manage and operate infrastructure and focus on business innovation, they sometimes need to do this in a data center that's close to them, if that makes sense. So they're looking, again, for the best of both worlds. >> Got it, the best of both worlds. And Matt, you have some breaking news to share. What is it? >> Today we're announcing the general availability of VMware Cloud on AWS Outposts. >> Awesome, congratulations. Tell me about that, let's dig into it. >> For customers looking to extend their AWS-centric model to an on-premises location, a data center or edge location, VMware Cloud on AWS Outposts delivers the agility and innovation of the AWS cloud, but on premises. VMware Cloud on AWS Outposts is based on VMware Cloud, a jointly engineered service, so together we're delivering this service on premises as a service. This gives us the capability to integrate VMware's enterprise-class architecture and platform with next-generation, dedicated, Amazon Nitro-based EC2 bare metal instances. It provides a deeply integrated hybrid cloud operating environment that extends from a customer's data center to these services running on premises in the data center or at the edge, or to the public cloud, with a unified control plane across all of it. >> A unified control plane is absolutely critical. >> We have a detailed plan to offer integrated AWS services, and that capability really enhances the innovation angle for customers as they embrace the modernization of their applications. >> Another great example of how deep the partnership is. Steven, AWS Outposts was announced at re:Invent, I think in 2019, which was the last time I was at an event in person, so we're coming up on a couple of years here. Now that this is GA, talk to me about some of the key use cases you're seeing where it really excels. >> Yeah, so Matt highlighted a number of these, and you're right, it was 2019; we were all together back then, and hopefully we can do that again very soon. Overall, since we're talking about Outposts, and VMware Cloud on AWS Outposts as well, the thing here, and Matt highlighted this, is that with Outposts we've leveraged literally the same hardware and control plane technology that we leverage in our own data centers. So what customers have come to know, love, and expect about the AWS platform and VMC on AWS is exactly what they'll be able to get with the Outposts technology. I'll give you a couple of customer examples; I think that actually speaks to the use cases best. You remember I talked a little bit about data locality and residency requirements. First Abu Dhabi Bank is the largest bank in the United Arab Emirates, right?
They offer corporate, investment, and personal banking services, and they wanted to deliver a digital banking service, including email and mobile payments, but they had to follow specific residency and data retention requirements, and they had to do it in the UAE. So what they've done is leverage multiple AWS Outposts in the UAE, which allows them to provide business continuity while also leveraging the same APIs they had come to know and love about the AWS services in region. Philips Healthcare is another really good example. You can imagine that what they do every day is very important, things like predictive analytics for preventative treatments. With Outposts, Philips has taken the cloud applications they developed, deployed on the same infrastructure they were used to in region, and now they can run them in clinics and at hospitals, managing them with the same tools and providing the same end-to-end view to their own providers and administrators. They estimate they have over 70,000 servers now distributed across 12,000, or rather 1,200 locations, excuse me. So those are two use cases that really broaden the reach and the flexibility customers have to run workloads in the cloud, but in an on-premises fashion. Does that make sense? >> Yes, it does. And you mentioned two great stories there, one in financial services, the other in healthcare, two industries that have had to massively pivot in the last 18 months, among many others. But let's talk a little bit more, Steven, about some of the things you're hearing from some of the early customers of VMC on Outposts. What are some of the near-term opportunities you're uncovering? >> Yeah, I've got to say here, too, that our VMware customers have been asking us for this for quite some time; I'm sure Matt would agree. So look, going back to some of the use cases we've discussed, low-latency compute requirements: one of our higher education customers today, who has migrated workloads to VMware Cloud on AWS, is looking at extending the same capability to an on-premises experience, specifically for school applications that require low-latency integration from a local data processing perspective. Again, one of our VMware Cloud on AWS customers, a top biopharmaceutical company here in the US, is planning to use VMware Cloud on AWS Outposts for health management applications with patient records that need to be retained locally at hospital sites. And then finally, going back to the story around data residency, we have a large telco provider in Europe that is planning to use this offering for applications that need to remain on premises to meet regulatory requirements. So again, we're just super pleased with the amount of interest, not only in VMware Cloud on AWS but also in this new offering we're announcing today, and we're really excited to be able to support the VMware Cloud experience on the AWS Outposts platform for all of these use cases. >> One of the things we've talked about for many years with both VMware and AWS is the dedication to listening to the voice of the customer, and obviously this is a great example; Steven, as you said, VMware customers have been asking for this for a while.
So while customers have a ton of choice, I want you guys to unpack the differentiators of this service. And Matt, if we can start with you to bring you back into the conversation, we'd love to get your input on those differentiators. >> Yeah, absolutely. People have to look at this as the service that's delivered, and on the VMware side of the equation we're delivering the full VMware Cloud infrastructure capability, delivered as a cloud service on premises. So why is this valuable? Well, it relieves the IT burden of infrastructure management and fully maximizes the value of a fully managed cloud service, giving an organization the capability to unlock those budgets and start to truly invest in innovation. This is all about continuous lifecycle management, ongoing service monitoring, and automated processes to ensure the health and security of the infrastructure. And of course this is backed by expert VMware site recovery and reliability engineers to ensure that everything works perfectly. We also enable organizations to leverage the best-in-class, enterprise-grade capabilities we've talked about in our compute, storage, and networking, for best-in-class resiliency, auto-scaling, and intrinsic availability. There are no long procurement cycles to set up these environments, and that means it's developer-ready right out of the box. We're also deeply integrated with what customers do today. End-to-end hybrid cloud usually requires end-to-end hybrid processes, and with this, integration into those processes is instant: no reconfiguration, no conversion, no refactoring, no re-architecture of existing applications. Using VMware HCX or vMotion, organizations can move applications to leverage this cloud service instantly. It allows you to use established on-premises governance, security, and operational policies, and it ensures that the workload portability I mentioned goes both ways; it's bi-directional, as customers need portability to meet their business requirements. As we mentioned earlier, there's a unified hybrid control plane with a single pane of glass to manage resources across the end-to-end hybrid cloud environment. And we're giving direct access to 200-plus native AWS services, which enables an organization to truly modernize their applications starting where they are today. So that gives you the real capability to deliver a unique service, one that gives an organization the ability to migrate without any downtime, with fast, cost-effective capabilities and low risk to their hybrid cloud strategy. >> Excellent. That's a pretty jam-packed list of differentiators, but one of the things that really stands out from what you said is how much work has gone on to make the transition smooth for customers and give them the flexibility and portability they need. Those are marketing terms you and I know are used very frequently, but it really seems like the work you've done here speaks directly to that. I want to ask you, Steven, that same question from AWS's perspective: what really differentiates the solution? >> It is a good question. I'll agree that there has been a ton of work that has gone into actually making this happen, to all the points that Matt made. And I would just add, again, that AWS Outposts is built on the same AWS Nitro System and infrastructure that customers have already come to love in the cloud.
And so gone really are the days where customers have to worry about procuring and racking and stacking their own gear. Layer on all the benefits that Matt outlined from a VMware perspective, and again, we really believe the customers are getting the best of both worlds here. Um, specifically with the compute that comes in the Outposts rack, um, customers actually get kind of built-in redundancy and resiliency, hardware security, all those things that customers know they need to pay attention to, but also want some help with. And so we've put a lot of thought and effort into this. Um, but could I just, uh, explain a little bit about the customer experience, um, when a customer orders an AWS Outposts rack, right? AWS actually signs up, uh, to do a fully managed experience here. Like, we'll bring people in to actually do site assessments. Um, we'll manage the hardware setup, the installation and the maintenance of that gear over time. Well, VMware also manages the software defined data center construct, as well as, um, the single point for, uh, support questions. And so together, we really thought through how customers can get an end to end experience from hardware all the way up through application modernization. It's pretty exciting. >> Very deep partnership there. And we're out of time, but I do want to ask you guys, where can customers go who are interested in learning more about this new service? >> So at VMworld, there are a collection of VMware Cloud on AWS sessions, including sessions dedicated to VMware Cloud on AWS Outposts. We encourage everyone who's attending VMworld to look up those sessions, and you'll learn all about the hardware, the service, the capabilities, the procurement, and how to get started. In addition, on vmware.com, we have a web portal for you to gain additional knowledge through digital consumption. That's vmware.com/vmc-outposts. >> Awesome. Matt, thank you. I'm sure folks will be just drinking up all of this information at the sessions at VMworld 2021. And I hope to see you in person at next year's VMworld. I'm crossing my fingers. Great to see you guys, Matt Morgan and Steven Jones. I'm Lisa Martin, and you're watching theCUBE's coverage of VMworld 2021.

Published Date : Sep 27 2021

SUMMARY :

That's great to have you on the program. Matt talked to us about VMware AWS partnership and how it's progressed over that time. expanded our go to market by announcing a resale program in which AWS Stephen talked to me about the partnership through AWS, this lens. to see how fast four years has gone. Now talk to me about how VMware and AWS are helping customers facilitate that But the big driver now is the ability to embrace innovative cloud services examples that really highlight the value of VMC on AWS. Uh, the Kiko team actually didn't have to perform any testing or modify any other existing So in order to accelerate their business growth months, they moved to 150 financial apps to AWS leveraging VMware on AWS. the speed of migration and folks being able to get up and running. the flexibility to run data workloads, um, in a particular area that has The best of both worlds and Matt, you have some breaking news to share. Let's dig into it. services running on premises in the data center, the edge, or to the public cloud Uh, Stephen eight, and that capability really enhances the innovation angle for customers as they embraced Another great example of how deep the partnership is Steven AWS outpost I think that that actually speaks to the use cases best. the reach and the flexibility of customers to run workloads in the cloud, And you mentioned two great stories there. We have a large telco provider in Europe that is planning to use this particular offering for their applications And Matt, if we can start with you to bring you back into the conversation, we'd love to get your, your input on those the capability to unlock the renovation, budgets, and start to invest truly an innovation. And that enables an organization to truly modernize their applications, gone on to make the transition smooth for customers, The customers have already come to love in the cloud. The customers know they need to pay attention to, but also want some help with. And we're out of time, but I do want to ask you guys, where can customers go, the service, the capabilities, the procurement, and how to get started. And I hope to see you in person at next year's VM.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Stephen | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Matt | PERSON | 0.99+
UAE | LOCATION | 0.99+
Matt Morgan | PERSON | 0.99+
Steve Jones | PERSON | 0.99+
Europe | LOCATION | 0.99+
2019 | DATE | 0.99+
Lisa | PERSON | 0.99+
Phillips | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
Steven | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Steven Jones | PERSON | 0.99+
VMware | ORGANIZATION | 0.99+
1200 locations | QUANTITY | 0.99+
United Arab Emirates | LOCATION | 0.99+
Apple | ORGANIZATION | 0.99+
19 administrators | QUANTITY | 0.99+
300 features | QUANTITY | 0.99+
150 financial apps | QUANTITY | 0.99+
two guests | QUANTITY | 0.99+
12,000 locations | QUANTITY | 0.99+
BMC | ORGANIZATION | 0.99+
Stephen eight | PERSON | 0.99+
over 70,000 servers | QUANTITY | 0.99+
one | QUANTITY | 0.99+
27 different markets | QUANTITY | 0.99+
both | QUANTITY | 0.99+
eight months | QUANTITY | 0.99+
2017 | DATE | 0.99+
apple | ORGANIZATION | 0.99+
both worlds | QUANTITY | 0.98+
over 300 servers | QUANTITY | 0.98+
four months | QUANTITY | 0.98+

Saveen Pakala and Tanu Sood, Nutanix | .NEXTConf 2021


 

(cheerful music) >> Hello. Welcome to this special nutanix.next coverage on theCUBE. We are in our remote studios in Napa today, with two great guests talking about hybrid multicloud and what that's all about: Tanu Sood, who's the senior director of product marketing at Nutanix, great to see you, and Saveen Pakala, VP of product management for platform at Nutanix. Great to have you on. A lot of cool things happening with hybrid cloud architectures. People want to have more cloud and want it more invisible, they want it faster, they want it on multiple clouds, AWS, Azure, GCP, and others. So welcome to theCUBE coverage. >> Thanks so much, John. Thanks for having us here. >> Tanu, if I start with you first: what is driving the hybrid multicloud architecture? Is it just the fact there's clouds out there, or are there specific things that you're seeing that customers really want, that's a need for their business? >> So you're right, John, over the past few years, we've seen cloud investments really taking off. In fact, last year in the midst of the pandemic, when the economy was showing a downturn, cloud spending was up by 30%. So organizations are looking to cloud for speed, for scale, for elasticity and for app modernization. However, the same organizations would also tell you that there are some workloads that will continue to stay on-prem, either in the near term or permanently. So what they're really talking about is this hot notion of hybrid cloud, which is interoperability between their on-prem investments, their existing investments, and their public cloud investments. In fact, Gartner in 2020 did a survey, and 75% of organizations actually talked about hybrid cloud being the preferred IT operating model, and an overwhelming majority of those, 80%, who had public cloud in their infrastructure had two or more public cloud providers in their space. So that's the multicloud aspect of it. So whether it's happening as happenstance or as a deliberate IT strategy, what we are seeing organizations take on is this hybrid multicloud infrastructure. >> It's interesting. It's so funny to see the dynamics of the evolution, cause it's like, oh yeah, I want more cloud. I want more cloud. I want more cloud. Wait a minute, I don't want to give up the on premises piece. We've got Amazon, we've got... okay, we've got multiple things happening. How do you pull it all together? So Saveen, I got to ask you about the blockers. What is holding it back? Because I mean, it's kind of like it happened, right? People replatforming with cloud, they're not giving up their data centers and/or the on-premise component. >> Yeah. What's the blocker? Is it inertia? >> Yeah. Is it time? Is it evolution? Is it skills? What's holding everyone back? >> Yeah, you know, John, over the last several months and quarters, what we have seen is that there are four typical issues that sort of come up when customers start to look at their hybrid multicloud journey. The first one is, how do I move my on-prem applications and workloads to public cloud? Do I need to refactor the applications as I do that? How do I move that application from one cloud to the other and potentially move it back to the data center? And because the underlying platforms are disparate between these different destinations, it's usually very challenging. The next question then you come up with is, hey, after I move the application to the public cloud, what about management? Right? It is going to be different.
It has its own island of infrastructure. The management tooling is different, the skillsets required are different, processes are different. So that becomes another challenge. Then comes the service levels. I'm still responsible for all the service levels, from a backup perspective, DR, security, performance, that, you know, I was responsible for on-prem. I'm still responsible for those in the public cloud. And then lastly, I would say, you know, customers want to know that their investment is protected, right? As they move the workloads all around, they want to know that the licenses would follow them, that they can actually take advantage of the licenses they've already procured and not have to procure something new just to run the same workload. So those are some of the challenges that we've seen come up. >> I mean, it's always good to chat with you guys, because I remember covering Nutanix back in 2010, when it was kind of a new thing and everyone got on the same bandwagon and copied the hyperconvergence. And it was very similar on a whole nother level. It seems now there's another inflection point. I want to get your reaction. Tanu, you can weigh in, that'd be great too, and get your reactions. Well, this whole shift from design thinking, which has been great for a decade or so, to where there's a whole other kind of conversation around systems thinking, and systems thinking is about platforms and it's about outcomes. But now with what you guys are discussing and launching this year at .NEXT, it's a systems concept. It's distributed computing. This is kind of a new kind of mindset. How do you guys see that evolving in the customer base? And how do you talk about that? Because this is something that is coming up, kind of like that design thinking mantra. It's like systems thinking: think about the impacts. Can you guys weigh in on your reactions to that? >> Oh yeah. So, you know, when you look at the systems level problem, right, it's really that of having the same platform, you know, be available at multiple locations, wherever you want to run your workloads. That's really the underpinning or the foundation, if you will, of your system. And we've done just that with Clusters: we have basically taken our hyper-converged infrastructure stack, plus the hypervisor, plus the management stack, you know, that was running on-prem, and we have essentially made it available on the public cloud, right. So that's really the special thing about it, that it's a single infrastructure and single management plane across your private cloud and a public cloud, which really helps organizations to accelerate the hybrid cloud journey. >> And what's the impact for customers going to be next, if you have that single layer, it's unified? >> Yeah. I mean, you know, the problems and the challenges that I mentioned earlier, customers will be able to address all of them by leveraging something like Clusters. Customers will be able to deploy their workloads on private cloud or public cloud without having to, you know, have disparate management models, right. They can have a single, simple and consistent management model between private cloud and public cloud. They will be able to meet the same service levels that they've been able to on-premise, whether it's DR, whether it's backup, whether it's security, whether it's performance, all along knowing that their investment is protected with Nutanix.
As you know, we have license portability of our software licenses between private and public cloud. So these are all benefits that are very real, and, you know, customers really value that when they think about the overall problem statement they have at hand. >> And what's your reaction to the systems mindset, systems thinking, in terms of customers and analysts, if you're looking at hybrid cloud? >> Yeah. So John, as you talked about, right, we started from a place of making infrastructure invisible, just taking away that complexity of infrastructure. And now we have evolved it to the next level, where we are really making clouds invisible. This whole idea of you could be sitting on any cloud, it could be private cloud, it could be any public cloud, multiple public clouds. You don't have to worry about the complexity. There's the software layer that's sitting on top of that. That's really making that underlying layer invisible to you so that you can just get about doing your job. It's all about business outcome at the end of the day. >> By the way, I love the invisible mindset because that's also like, that's what DevOps, infrastructure as code, was supposed to be: make things invisible, make them programmable. And we got to see serverless and functions coming out. People are really getting excited by the ease of just being able to provision resources. This is a major wave that's going to have a major impact on enterprises. How is this specifically impacting this hybrid cloud architecture? What do you guys do to make that invisible? Because customers are all like, no one's denying it's happening. They know like, okay, we know what's happening, but they don't know what to do. They're like, how do I start? Who do I hire? What do I change? What do I automate? These are questions. How do you guys see that? >> Yeah, look, I think customers repeatedly tell us that, hey, ultimately I invest a lot in really making my enterprise IT repeatable, reliable and predictable, right? So they've invested in the process, the tooling, the people, and they want to be able to leverage that regardless of, you know, where the IT direction takes them. When it comes to public cloud, they want to be able to take the same investments that they've made and be able to leverage them and capitalize on that on the public cloud. And that's really, you know, the problem statement that we're really focused on. Just making sure that, to your point, making the infrastructure invisible has to do with, you know, having a platform that hides all the complexity underneath and provides a simple, consistent, you know, framework, if you will, for the applications and the management, the people and the tooling. >> Saveen, tell me about Nutanix Clusters. What's that about? What's the value? What's the pitch there? What's it all about? >> Yeah. In a nutshell, you know, Clusters is simply the Nutanix software stack delivered on public cloud. Really, it includes our HCI, the AOS hyperconverged infrastructure, the AHV hypervisor, and the Prism management plane. And it's the same stack that we actually introduced 10 years ago, run by thousands of customers. We've taken the exact same reliable stack and made it available on the public cloud. And with that, you know, customers get some of those benefits that we talked about earlier. >> And then, talk about the use cases, because everyone's talking about day one, day two operations, shift left for security.
If I bring that stack into the cloud, what are the use cases that emerge for the customer? >> Yeah, so John, there are definitely some patterns that have emerged with customers, as you would expect with cloud. And in fact, our viewers won't be surprised to hear that disaster recovery is foremost. A lot of organizations are starting with disaster recovery on public cloud with Nutanix Clusters. This helps them avoid maintenance and investments in a secondary data center purely for disaster recovery, but it also gives them some geographical separation, and it gives them the regional cloud options so that they can still meet the data residency requirements, which, as you know, is very key especially for companies that are based in EMEA. Interestingly, most of these organizations that are looking at disaster recovery in public cloud using Nutanix Clusters are also leveraging their investments in Clusters and their cloud instances to drive capacity bursting. So using it for dev/test or seasonal on-demand bursting. So when you're not using it for fail-over, for disaster recovery, the same cloud investments are being optimized for cloud capacity bursting. And then finally, there's workload migration, right? So whether it's for data center consolidation or migration, or for app modernization, our customers are looking to migrate some of their workloads to the cloud, but they want to do that quickly, in a timely fashion. So the idea is that you migrate them as-is, without any app refactoring, right away with Clusters, and then once you're on the cloud, you can refactor at your own pace. You can modernize some components of your applications on an as-needed basis. So those are the three use cases that we are seeing: disaster recovery, capacity bursting, workload migration. But then, to your point about day one and day two operations, day two operations are really, really key. When you have public cloud investments, private cloud investments, and multiple public clouds in the mix, it could be really complex to have your IT operations in play, right? So this notion that Saveen alluded to earlier, of a unified infrastructure and management plane that oversees your public cloud, multiple public cloud and private cloud infrastructure, as well as provides operations not just for your VMs but also for your containers and storage, is key for our customers. So this whole notion of easing up on day zero and day one operations, but also day two and day N operations, is top of mind for our customers. >> That's really well put. I think that tying that layer, that horizontally scalable control plane, whatever you want to call it, really creates a lot of value, from the blocking and tackling, meat and potatoes disaster recovery, to enabling the migration and replatforming, and then refactoring of those apps. I mean, this is the modernization trend. This is what people are talking about. So this is what people want. This is hard to do, and it seems hard. Maybe it's easier with you guys. What's holding it all back? Because I'm sold. I mean, I've been preaching this for years. Like, this is finally coming at scale, and then, is it multi-cloud that's the bottleneck, or is that not yet fleshed out? Is it more that architectures are not ready? The containerization or the stateful data apps? Are the tools not there? Can you guys give me a sense of why it's not going faster? Or is it going faster? >> Yeah. So maybe I'll chime in and let Tanu chime in as well.
So we introduced Clusters late last year and we have seen a lot of momentum and a ton of interest from our customer base. And, you know, for the use cases that Tanu just talked about, that's already happening with many customers that are already well on their hybrid multicloud journey. And, you know, ultimately it comes down to just, you know, where the organization is in their journey. And, you know, especially if you're a Nutanix customer, very familiar with the stack, for them taking the next step, you know, with Clusters in AWS, it's actually not that big of a jump, right. But if you're not on the platform, then, you know, some of the challenges we discussed earlier are the things that get in the way. >> It's almost like day one operations tends to be like innovation, and day two operations is rein it in, you know, get the value. >> Yeah. >> Day one, get going and do some experimentation, and day two, make it all operate cleanly. >> Exactly. >> You know, oftentimes we have conversations, even in the first or second conversation, where the topic gravitates towards app refactoring, when you know that that's a much more heavyweight, complex and time consuming project. You can actually get to cloud without refactoring and do it at your own pace. And, you know, on your own terms, really. >> I think the migration thing is a huge thing. I mean, I see a lot of that. And then once they get to the cloud, they go, wow, I could do a lot more here. >> Yeah. >> And that just spawns more. It's a step function value there. And then as open source continues to grow, oh my, it's just so successful. And if we don't overthink it, just get to the cloud, understand the distributed nature of the on-premise piece, and boom, then go from there, you see that accelerated value extraction. >> And as Tanu said earlier, I mean, we are taking a much more holistic and uplevel view of management in this hybrid multicloud environment, including non-Nutanix environments, right? So we're not stopping at just a Nutanix environment. So just to answer that: you're talking about containers, you're talking multicloud, but also talking about non-Nutanix environments that you may have, and, you know, giving you that one sort of, you know, one single pane of glass, if you will. >> It's DevOps happening: the Dev has always been there, and now the Ops is getting stronger and stronger. Now it's changing too; the intelligent edge is around the corner. That's just another edge. That's just another premise in my mind. So again, this flexes with what you guys are thinking about. So I think the edge brings up a lot of action too. Big time. Exciting news. Let's extend this into the news. So you guys have some exciting news. Talk about what's new, what the big stories are, what's breaking, what's exciting. What are the top stories coming this year? >> Sure, sure. So since we launched Clusters late last year on AWS, we have focused on a couple of things. One is expanding the availability, right? So we have added multiple regions. Now the total number of AWS regions that we support is 23. We also recently added support for AWS GovCloud for our US federal customers, and we have FedRAMP Moderate authorization, which is very key for that customer base. We also added some really new and exciting capabilities such as elastic DR; some of that Tanu already mentioned.
Hibernate and resume, which is a very unique capability from Clusters, where you can hibernate our clusters and, you know, give up all the bare metal and compute, but still have your data intact in S3, just so you can resume it very quickly whenever the need arises again. And, you know, last but not least, we are super excited about bringing Clusters to Microsoft Azure. This has been a long and strong partnership with Microsoft. And as you heard in the keynote, we are actually starting the preview at this event, and, you know, opening it up to customers so that they can get that firsthand feeling for the product and work with us in bringing the product to GA. >> And John... >> Multicloud world. Oh, sorry. Tanu, go ahead. >> No, this is exactly what I was going to say. This is multicloud coming to bear, right? So we talked about hybrid cloud, and now here we are with multicloud options for you. >> What's interesting is that, you know, as the trends change, this is changing, companies are shifting, and you guys have evolved beautifully. And I think the way people are leveraging cloud really shows their strengths, and running in the cloud actually highlights those strengths. If you play it properly, you can survive. I mean, look at Snowflake. They don't even have a cloud. They're a data cloud now. So, you know, if customers can bring their architecture to the cloud, they can actually do a lot of rearchitecting and changing to modernize their business. This is something that's kind of only in the past few years that's come up. This is quite a big trend. Do you guys see the same thing happening faster, or is it just we're inside the ropes and we love it so much? (laughs) >> Yeah. Like I said earlier, organizations are at different levels of the journey, but we're seeing it happening all around us. And we're embracing that. We're actually embracing that trend, enabling that trend, because we truly believe hybrid cloud is the more practical reality. And we want customers to have the cloud on their own terms and not feel like they have to, you know, do something just because they're forced to, or they're not able to, cost-effectively or even technically for that matter. >> Oh, John... >> Okay, well... Go ahead, Tanu, sorry. >> I was just going to say that our CEO, Rajiv Ramaswami, puts it really well. The cloud is really an operating model, right? So it really should not be about where the workloads are. It should just be an easy operational model for you to engage with. >> Yeah. I think you guys have a great strategy. And I think the invisible really rings true with me, as well as that horizontally scalable control plane, because the innovation is happening, but the operations have to be reined in and support the expansion as well. Which means you have to kind of focus on the fact that you've got to rein in the data and you've got to make it invisible. If you look at Lambda functions, and you've got the serverless trend booming with the edge, it's got to be invisible and programmable. It just has to be. >> Exactly. Yeah. >> Great stuff. All right. Final question for you both, if you don't mind. Tanu, we'll start with you. >> Okay. >> What's the big story this year at .NEXT? If you had to summarize it and tell your friend as you're riding in the elevator up to the top floor, what's the big story that should be talked about, that's being talked about this year at .NEXT?
>> Taking unified infrastructure and management and having Azure in preview is really the big news here. So go to nutanix.com/azure to learn more, show us your interest there, sign up for a test drive. It really is a very easy way for you to experience the product in action. And you'll just see how simple it is to deploy a hybrid cloud with Clusters on Azure in under an hour. >> Saveen, final word for you. What's the big news? What's the takeaway? >> Yeah, look, I would say that, you know, your cloud on your terms is really the big news. That's driving everything we're doing back in the office, building products, and ultimately, you know, delivering and making that whole hybrid cloud journey a reality for our customers. >> Tanu and Saveen, thank you for coming on and sharing that commentary on theCUBE coverage at .NEXT. Thanks for joining us. >> Thanks so much, John. Thanks for having us. >> It was our pleasure. >> Thanks for watching. More coverage, stay tuned. (cheerful music)
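The "single management plane" idea that Saveen and Tanu describe, with Prism Central sitting in front of both on-prem AHV clusters and Nutanix Clusters running in AWS or Azure, can be pictured with a small sketch against the Prism Central v3 REST API. This is a hypothetical illustration, not something from the interview: the host, credentials and response fields below are placeholders, and the endpoint shape should be checked against Nutanix's current API documentation.

```python
import requests

# Placeholder Prism Central endpoint and credentials (assumptions, not real values).
PRISM_CENTRAL = "https://prism-central.example.com:9440"
AUTH = ("admin", "example-password")

# The same v3 "list" call returns clusters whether they run on-prem or as
# Nutanix Clusters on a public cloud, since Prism Central is the one
# management plane in front of all of them.
resp = requests.post(
    f"{PRISM_CENTRAL}/api/nutanix/v3/clusters/list",
    json={"kind": "cluster"},
    auth=AUTH,
    verify=False,  # lab-style sketch; use proper TLS verification in practice
)
resp.raise_for_status()

for cluster in resp.json().get("entities", []):
    print(cluster["status"]["name"])
```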

Published Date : Sep 23 2021

SUMMARY :

Great to have you on. Thanks for having us here. So that's the multicloud aspect of it. So Saveen, I got to ask you the blockers. What's the blocker. Is it time? And then lastly, I would say, you know, to chat with you guys because I remember the same platform, you know, if you have that single layer, it's unify. having to, you know, And what's your reaction invisible to you so that you can What do you guys do to and capitalize that on the public cloud. What's the pitch there, what's it- And it's the same stack that we have been, If I bring that stack into the cloud, So the idea is that you migrate them it's easier with you guys. very familiar with the stack, you know, rain it in, you know, get the value. and day two, make it all operate cleanly. And, you know, on your own terms, really. And then once they get to the cloud, nature of the on-premise piece. that you may have and, you know, So you guys have some exciting news. in bringing the product to GA. Tanu, go ahead. This is multicloud coming to pair, right? as the trends change, you know, and not feel like they have to, you know for you to engage with. but the operations have to Yeah. both, if you don't mind. driving in the elevator is really the big news here. What's the big news? is really the big news. thank you for coming on, Thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
two | QUANTITY | 0.99+
Rajiv Ramaswami | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Tanu | PERSON | 0.99+
Tanu Sood | PERSON | 0.99+
Napa | LOCATION | 0.99+
2020 | DATE | 0.99+
Saveen Pakala | PERSON | 0.99+
2010 | DATE | 0.99+
75% | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
Nutanix | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
80% | QUANTITY | 0.99+
Tanu | ORGANIZATION | 0.99+
last year | DATE | 0.99+
10 years ago | DATE | 0.99+
US | LOCATION | 0.99+
this year | DATE | 0.99+
nutanix.com/azure | OTHER | 0.98+
first one | QUANTITY | 0.98+
first | QUANTITY | 0.98+
Lambda | TITLE | 0.97+
three | QUANTITY | 0.97+
day one | QUANTITY | 0.96+
today | DATE | 0.96+
Azure | TITLE | 0.96+
two great guests | QUANTITY | 0.96+
thousands of customers | QUANTITY | 0.96+
three use cases | QUANTITY | 0.96+
second conversation | QUANTITY | 0.96+
both | QUANTITY | 0.96+
Day one | QUANTITY | 0.96+
under an hour | QUANTITY | 0.96+
pandemic | EVENT | 0.95+
GA | LOCATION | 0.95+
single layer | QUANTITY | 0.94+
Saveen | PERSON | 0.94+
day two | QUANTITY | 0.94+
Day two | QUANTITY | 0.94+
late last year | DATE | 0.93+
single | QUANTITY | 0.91+
four typical issues | QUANTITY | 0.91+
one single plain | QUANTITY | 0.91+
2021 | DATE | 0.9+
day | QUANTITY | 0.9+
a decade | QUANTITY | 0.9+
Tana | PERSON | 0.89+
ryone | PERSON | 0.87+
nutanix.next | ORGANIZATION | 0.87+
30% | QUANTITY | 0.87+
One | QUANTITY | 0.86+
single infrastructure | QUANTITY | 0.85+
one | QUANTITY | 0.84+
one cloud | QUANTITY | 0.83+
single management | QUANTITY | 0.82+
theCUBE | ORGANIZATION | 0.81+
Penang | ORGANIZATION | 0.8+
DevOps | TITLE | 0.75+

2021 084 Meena Gowdar


 

(bright music) >> Welcome to this session of the AWS EC2 15th birthday event. I'm your host, Lisa Martin. I'm joined by Meena Gowdar, the principal product manager for AWS Outposts at AWS. Meena, welcome to the program. >> Thanks Lisa. It's great to be joining here today. >> So you were the first product manager hired to lead the development of the Outpost service. Talk to us about back in the day. The vision of Outpost at that time. >> Yeah, Outpost vision has always been to extend the AWS experience to customers on premises location, and provide a truly consistent hybrid experience, with the same AWS services, APIs and suite of tools available at the region. So we launched Outpost to support customers' workloads that cannot migrate to the region. These are applications that are sensitive to latency, such as manufacturing, workloads, financial trading workloads. Then there are applications that do heavy edge data processing, like image assisted diagnostics and hospitals for example, or smart cities that are fitted with cameras and sensors that gather so much data. And then another use case was regarding data residency that need to remain within certain jurisdictions. Now that AWS cloud is available in 25 regions and we have seven more coming, but that doesn't cover every corner of the world, and customers want us to be closer to their end-users. So Outpost allows them to bring the AWS experience where customer wants us to be. To answer your question about the use case evolution, along the way, in addition to the few that I just mentioned, we've seen a couple of surprises. The first one is application migration. It is an interesting trend from large enterprises that could run applications in the cloud, but must first rearchitect their applications to be cloud ready. These applications need to go through modernization while remaining in close proximity to other dependent systems. So by using Outpost, customers can modernize and containerize using AWS services, while they continued to remain on premises before moving to the region. Here, Outpost acts as a launchpad, serving them to make that leap to the region. We were also surprised by the different types of data residency use cases that customers are thinking about Outposts. For example, iGaming, as sports betting is a growing trend in many countries, they're also heavily regulated requiring providers to run their applications within state boundaries. Outposts allows application providers to standardize on a common AWS infrastructure and deploy the application in as many locations as they want to scale. >> So a lot of evolution and it's short time-frame, and I know that as we're here talking about the EC2 15th birthday, Amazon EC2 Core to AWS, but it's also at the core of Outposts, how does EC2 work on Outposts? >> The simple answer is EC2 works just the same as Outposts does in the region, so giving customers access to the same APIs, tools, and metrics that they are familiar with. With Outposts, customers will access the capacity, just like how they would access them in an availability zone. Customers can extend their VPC from the region and launch EC2 instances using the same APIs, just like how they would do in the region. So they also get to benefit all the tools like auto-scaling, CloudWatch metrics, Flow Logs that they are already familiar with. So the other thing that I also want to share is, at GA, we launched Outposts with the Gen 5 Intel Cascade Lake Processor based instances, that's because they run on AWS Nitro Systems. 
The Nitro System allows us to extend the AWS experience to customers' locations in a secure manner, and bring all the capabilities to manage and virtualize the underlying compute, storage and network capabilities, just the way we do that in the region. So staying true to that Outpost product vision, customers can experience the same sort of EC2 feature sets, like EC2 placement groups, on-demand capacity reservations, sharing through Resource Access Manager, IAM policies, and security groups, so it really is the same EC2. >> I imagine having that same experience, the user experience, was a big advantage for customers that were in the last 18 months rapidly transforming and digitizing their businesses. Any customer examples pop up to you that really speak to: we kept this user experience the same, and it really helped customers pivot quickly when the pandemic struck? >> It almost feels like we haven't missed a beat. Outposts being a fully managed service that can be rolled into a customer's data center has been a huge differentiator, especially at a time where customers have to be nimble and ready to respond to their customers or end users. If at all, we've seen the adoption accelerate in the last 12 to 18 months, and that is reflected through our global expansion. We currently support 60 countries worldwide, and we've seen customers deploying Outposts and migrating more applications to run on Outposts worldwide. >> Right. So lots of evolution going on, as I mentioned a minute ago. Talk to me about some of the things that you're most excited about. What do you think is coming down the pike in the next 6 to 10 months? >> We're excited about expanding the core EC2 instance offerings, especially bringing our own Graviton Arm processor based instances to Outposts. Because of the AWS Nitro System, most EC2 instances that launch in the region will also become available on Outposts. Again, back to the vision to provide a consistent hybrid experience for AWS customers. We're also excited about the 1U and 2U Outposts server form factors, which we will launch later this year. The Outposts servers will support both the Intel Ice Lake processor based instances and also Graviton processor based instances. So customers who can't install, you know, a 42U form factor Outpost can now bring the AWS experience to retail stores, back offices, and other remote locations that are not traditional data centers. So we're very excited about our next couple of years and what we are going to be launching for customers. >> Excellent. Meena, thank you for joining me today for the EC2 15th birthday, talking about the vision of Outposts. Again, you were the first product manager hired to lead the development of that. Pretty exciting. What's gone on since then, the unique use cases that have driven its evolution, and some of the things that are coming down the pike. Very exciting. Thank you for your time. >> Thank you, Lisa. >> For Meena Gowdar, I'm Lisa Martin. Thanks for watching. (bright music)
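The "same APIs" point above is easiest to see in code. Below is a rough sketch using the AWS SDK for Python (boto3): a subnet homed to an Outpost is created in an existing regional VPC, and an instance is launched into it with the ordinary run_instances call. The Outpost ARN, VPC, AMI and instance type are placeholders, not values from this interview, and parameters should be confirmed against current AWS documentation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Extend an existing regional VPC onto the Outpost by creating a subnet
# that is homed to the Outpost (ARN and IDs are placeholders).
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="us-west-2a",
    OutpostArn="arn:aws:outposts:us-west-2:111122223333:outpost/op-0123456789abcdef0",
)

# Launch an instance into that subnet with the same run_instances call
# used for regional capacity; the instance type must exist on the rack.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```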

Published Date : Aug 20 2021

SUMMARY :

the AWS EC2 15th birthday event. It's great to be joining here today. to lead the development the AWS experience to and bring all the capabilities the user experience was a in the last 12 to 18 months, in the next 6 to 10 months? that launch in the region and some of the things Thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Meena Gowdar | PERSON | 0.99+
Lisa | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Meena | PERSON | 0.99+
25 regions | QUANTITY | 0.99+
60 countries | QUANTITY | 0.99+
Ice Lake | COMMERCIAL_ITEM | 0.99+
today | DATE | 0.99+
2021 084 | OTHER | 0.99+
Cascade Lake Processor | COMMERCIAL_ITEM | 0.99+
Outpost | ORGANIZATION | 0.99+
both | QUANTITY | 0.98+
Intel | ORGANIZATION | 0.98+
EC2 | TITLE | 0.98+
Amazon | ORGANIZATION | 0.98+
first one | QUANTITY | 0.97+
first product | QUANTITY | 0.94+
EC2 Core | COMMERCIAL_ITEM | 0.93+
first | QUANTITY | 0.93+
a minute ago | DATE | 0.92+
later this year | DATE | 0.92+
Outpost | TITLE | 0.92+
15th birthday | QUANTITY | 0.91+
Nitro Systems | COMMERCIAL_ITEM | 0.88+
Gen 5 | COMMERCIAL_ITEM | 0.86+
seven more | QUANTITY | 0.85+
CloudWatch | TITLE | 0.83+
last 18 months | DATE | 0.82+
Graviton Arm | COMMERCIAL_ITEM | 0.8+
first product manager | QUANTITY | 0.8+
iGaming | TITLE | 0.78+
2U | OTHER | 0.76+
AWS Outposts | ORGANIZATION | 0.74+
GA | LOCATION | 0.73+
next couple of years | DATE | 0.72+
EC2 | COMMERCIAL_ITEM | 0.72+
Outposts | ORGANIZATION | 0.69+
pandemic | EVENT | 0.66+
18 months | QUANTITY | 0.66+
Outposts | TITLE | 0.64+
10 months | QUANTITY | 0.63+
Outpost vision | TITLE | 0.63+
Graviton | OTHER | 0.61+
Outposts | COMMERCIAL_ITEM | 0.56+
6 | QUANTITY | 0.54+
12 | QUANTITY | 0.5+
42U | OTHER | 0.48+
Outpost | COMMERCIAL_ITEM | 0.33+
1U | OTHER | 0.31+

Red Hat and Nutanix Strategic Partnership


 

(light, upbeat music) >> The last decade of cloud computing introduced and popularized an operating model that emphasized, simplified IT infrastructure provisioning and management. As well, it ushered in an era of consumption-based pricing and much more facile IT management, generally. Now these principles, they've bled into traditional data centers, which have increasingly become software led, programmable and DevOps centric. Now as we enter the post isolation era, it's ironic that not only are IT executives pursuing hybrid strategies, but everyone is talking about hybrid. Hybrid work, hybrid teams, hybrid events, hybrid meetings. The world has gone hybrid and the cloud is no exception. The cloud is expanding. Public cloud models are pushing to the data center and the edge on premises infrastructure is connecting to public clouds and managing data workflows and infrastructure across clouds and out to the edge. Now most leading technology executives that I speak with, they're essentially architecting their own clouds. And what I mean by that is they're envisioning and building an abstraction layer that hides the complexity of the underlying infrastructure and manages workloads intelligently. The end customer doesn't know or care where the data is, as long as it's secure, properly governed, and could be accessed quickly, all irrespective of physical location. Now for the most part, this vision, it can't be bought off the shelf. It needs to be built by placing bets on key technology partners and leveraging the so-called API economy. In other words, picking technology vendors that I trust in programmatically codifying and automating where possible my organizational edicts and business requirements into my own cloud to uniquely support my application portfolio in my modern business processes, which by the way, are rapidly evolving. Now, a key to enabling this vision is optionality. Meaning, not getting locked into one single technology platform, but rather having the confidence that as technology evolves, which it always does, I can focus my energies on adding value to my business through process innovation and human capital growth. Hello, everyone and welcome to this cube conversation and video exclusive on a major new industry development and partnership that's designed to maximize customer infrastructure options and move the new era of hybrid cloud computing forward. We have two industry leaders joining us today. Monica Kumar is the senior vice president of marketing and cloud go-to-market from Nutanix, and David Farrell is the senior vice president and general manager for global strategic alliances at Red Hat. Folks, welcome to theCUBE. Thanks for coming on. >> Good to be here today. >> Thank you so much. >> All right, so Red Hat is the poster child for open source success and it's executing on a strategy based on Red Hat Enterprise Linux, RHEL and OpenShift, the industry's leading container platform, to drive cloud-like experiences. Nutanix is a pioneering company and was the first to truly envision and successfully bring to market a cloud operating model to data center infrastructure. So you two, are getting together and forming a deeper, more substantive relationship. So Monica, tell us about the hard news. What's the scoop? >> Yeah, of course. So, first of all, I'm so excited to be here with David Farrell from Red Hat and for those of you who may not know this, I have a very deep personal connection with Red Hat from my previous role as well. 
I've been working with Red Hat since the early 2000s. So it gives me great pleasure to be here on behalf of Nutanix and with David from Red Hat, to be announcing a formal strategic partnership to deliver open hybrid multi-cloud solutions. Now let me explain to you what I mean by that. This partnership that we're announcing today is going to enable best-in-class solutions for building, scaling and managing containerized and virtualized cloud native applications in of course, hyper-converged infrastructure environments. So the collaboration is going to bring together these industry-leading technologies. Enabling and integrating Red Hat OpenShift and Red Hat Enterprise Linux, onto the Nutanix cloud platform, which includes, you know, our well-known Nutanix AOS and AHV hypervisor technologies. Now the question is, why are we doing all this? It's because of, as you said, Dave, the rapid evolution of hybrid cloud strategies and adoption of containers and Kubernetes in our customer base to develop, deploy and manage apps. And what we're hearing from our customers is that they want this integration between Red Hat Technologies and Nutanix Solutions. >> Okay. Thank you, Monica. So big news David, from Red Hat's perspective. Okay. So Red Hat, Nutanix, both leaders in their respective fields. David, what spurred the decision to partner from your standpoint? >> Yeah. And listen, let me echo Monica's comments as well. So we're really excited about the partnership with Nutanix. And we're excited because Nutanix is the leader in hybrid cloud infrastructure, but we're even more excited because this is what customers have been asking us to do. And that's really at the core of the decision. I think both teams, both companies have been listening to customers and we've got a groundswell of enterprise customers around the world that are asking us to come together. Bring our technologies together from a certification perspective, which Monica spoke about, right, is number one. So RHEL and OpenShift being certified on top of AHV, right. To provide the best-in-class service for enterprise grade applications, but there's more to it than just the certification. Like customers are looking for a world-class integrated support experience as well as they go into, into production. So we also have integrated support, right. So customers can contact Nutanix, they can contact Red Hat and having that seamless, that seamless experience is really, really critical and something that our customers have been asking us for. And then we'll continue to work from a roadmap perspective as well, from an engineering perspective, to make sure that our roadmaps are aligned and the customers have assurance over time and continuity over time so that they can make investments that they know are going to pay off and be safe investments and scalable investments over the long arc of their technology horizon, so. So those are, those are kind of our view of why this is good for customers and back to your points, David, it's about choice and optionality, right? And choice and consistency, and I think the verdict is in now, in the industry, that hybrid is the future, right? Everybody kind of agrees on that, right? In certain applications and certain workloads are going to run on-prem, others are going to run on the public cloud, and customers need choice to be able to decide what's the right destination for those workloads. 
And that's what Red Hat's all about, that's what RHEL's all about, what OpenShift is all about, is that it runs on any cloud infrastructure. Now it runs on Nutanix HCI. >> So I liked that two, one virtual throat to choke, or maybe better put, maybe one virtual hand to shake. So David staying with you, maybe you could talk about some of the other key terms of the partnership. Maybe focus on joint solutions that the customers can expect and I'm particularly interested in the engineering collaboration. I know there's a go-to-market component, but the engineering collaboration and technology innovation that we can expect. >> Yeah. So there's a few components to it, David. One is, obviously as I talked about roadmap, right. And that's, you know, our technology teams coming together, looking at the existing roadmaps for RHEL and OpenShift, but also adjacent capabilities that are coming from the Red Hat portfolio and capabilities that are coming from our ISV ecosystems and our respective ecosystems. This is a big win for our partners, as well, that have been asking us to work together. So we'll continue to keep the radar up about what some of those functionalities and capabilities ought to be. Whether we make them or somebody else makes them to pull into the, pull into the strategy, if you will. The second big principle around joint engineering is going to be around customer experience, right. So for example, we're starting off with the agnostic installer and by the way, this is coming Thursday, right? I think we're live on Thursday, the 29th, right? So this is in market, it is GA, it's available today, the 29th, right. And then we will move to the, to the UPI- so sorry, to the IPI installer in the second half of this year, right. To provide a more automated experience and then I think on the Nutanix side, Monica can, can talk to this, that Nutanix is building APIs to also automate installation, right? So first and foremost, we're all about getting the solution and getting the jointly engineered technologies working together and providing a superior customer experience for our customers that are deploying Red Hat on top of Nutanix. And that's going to be the guiding, the guiding driver, if you will, for how we work together. >> Yeah. And let me add to that. Like you said, we are, the engineering is already bearing fruit for our customers, right. As of today, when we announcing, we already have certified versions of Red Hat Linux with AHV, number one. Number two, as you said, the agnostic installer is available. We will make the automated installer available so any customer can deploy OpenShift using the Nutanix cloud platform in the very near future, right. Those are the two sort of the beginnings of the engineering and this is going to, this is a longterm partnership, so we will continue to evolve the different configurations that we, you know, that we test and that we validate as well as we go on. So I'm really excited about the fact that we are going to be offering customers fully tested, validated configurations to deploy. (cross talk) >> Go ahead >> David if I may just in there as well, I mean, so that's on the engineering side, right. But there'll also be an important thing, customers expect us to cooperate in to engage proactively as we face them, right. So that both the Red Hat, part of the agreement is that both the Red Hat and the Nutanix field teams, right the customer teams, will also be enabled, right. 
We'll do technical enablement for our teams, stand up proof of technologies, right. So that we're burning in some of the technology, if you will, and working out the kinks before the customer has to, right. And this is also a key value proposition is we're doing this work upstream, both in the engineering teams and in the field engagement teams so that customers can get time to market, if you will, and speed of solution deployment. >> Got it. So we'd love to talk about the sweet spot, the ideal customer profile at ICP. So is there a particular type of customer Monica, that stands to benefit most from the partnership and the certifications that you're committing to? >> Yeah. I mean, if you look at, you know, cloud native app development, that's happening across all types of segments, but particularly, you know, enterprise customers running, in all industries practically, running tier one applications or building custom applications in the cloud would be a great focus for this. Our customers who are mature in their cloud native journeys and want to build and run cloud native workloads at scale would be another type of audience. I mean, when you really think about the gamut of customers we serve jointly together, it's all the way from, you know, mid-sized customers who are, who may want a complete solution that's built for them, to enterprise customers and even globals accounts that are actually doing a lot of custom application development and then deploying things at scale. So really, I mean, anybody who's developing applications, anybody who's running workloads, you know, database workloads, applications that they're building, analytics workloads, I think for all of them. This is a very beneficial solution and I would say specifically from a Nutanix customer perspective, we've had a demand for, you know, the certification with AHV and RHEL for a long time. So that's something our customers are very much looking forward to. We have a large number of customers who already are deploying that configuration and now they know it's fully tested, fully supported, and there's an ongoing roadmap from both companies to support it. And then as far as OpenShift goes, we are super excited about the possibilities of providing that optionality to customers and really meeting them at every level of their journey to the cloud. >> So you got the product level certifications, that to me is all about trust and it's kind of table stakes, but if I have that, now I can, I can lean in. What other kind of value dimensions should we be thinking about with regard to this, this partnership? I mean, obviously, you know, cost savings, you know, speed, things like that, but maybe you could sort of add more color to that. >> Yeah. Well, absolutely, look. I mean, anytime there's joint there's integration, there is complexity that's taken out of deployment from the customer's hands and the vendors do the work upfront, that results in a lot of different benefits. Including productivity benefits, speed to market benefits, total cost of ownership benefits, as you said. So we expect that the fact that the two companies are now going to do all this work upfront for our customers, they'll be able to deploy and do things that we're doing, you know, much faster than before, right? So that's, you know, definitely we believe, and then also joined support. I think David mentioned that, the fact that we are offering joint support as well to our customers we'll be problem solving together. 
So the seamless support experience will provide faster resolution for our joint customers. >> Great. David, I wonder if you could kind of share your view of you know, thinking about the Nutanix cloud platform, what makes it well suited for supporting OpenShift and cloud native workloads? >> Well, I think the, look first off, they're the leader, right. They bring the most trusted and tried HCI environment in the industry to bear for customers, right. And they deliver on the promises that Monica just went through, right, around simplicity, around ease of use, around scalability, around optionality, right. And they take that complexity away and that's what customers I think are telling both Red Hat and Nutanix, and really everybody for that matter, right. Is that they want to focus on the business outcomes, on the business value, on the applications, that differentiate them. And Nutanix really takes away a lot of that complexity for the customer at the infrastructure level, right. And then RHEL, and OpenShift and Red Hat do that as well, both at the infrastructure level and at the application level, right. So when it comes to simplicity, and when it comes to choice, but consistency, both Nutanix and Red Hat have that at the core of how we build and how we engineer products that we take to market to remove that complexity so the customers can move quickly, more cost-effectively, and have that optionality that they're after. >> Yeah and David, if I may add to that, and thank you again for saying the things you said, that's exactly why our customers choose us. One of the key factors is our distributed architecture as well, because of the way it's architected, the Nutanix cloud platform delivers an environment that's highly scalable and resilient, and it's well suited for enterprise deployments of Red Hat, OpenShift at scale. The platform also includes, you know, fully integrated unified storage, which addresses many of the challenging problems faced by operators routinely in configuring and managing storage for stateful containers, for example. So there's a lot of goodness there and the combination of Red Hat, both you know, RHEL and OpenShift, any 10x platform, I believe, offers really unparalleled value to our customers in terms of the technology we bring and the integration we bring to our customers. >> Okay, great. Last question, David, maybe you first, and then Monica, you can bring us home. Where do you guys want to see this partnership going? >> We want to see it going where, customers are getting the most value of course, right. And we would like to see obviously adoption, right. So, anytime two leaders like ourselves come together, it's all about delivering for the customer. We've got a long list of customers that have been asking us, as Monica said to do this, and it's overwhelming, right. So we're responding to that. We've got a pipeline of, of customers that we're already beginning to engage on. And so we'll measure our progress based upon adoption, right, and how customers adopt the solution, the shared solution as we go forward. How they're feeding back to us, the value that they're getting, and also encouraging them to engage with us around the roadmap and where we take the solution, right. So I think those are ways that, you know, we'll be focused on adoption and satisfaction around it across the marketplace and the degree of interaction and input we get from customers with respect to the roadmap. And Monica, how do you feel about it? >> Yeah. What's success look like Monica? 
>> Yeah, look, we all know that technology is a means to an end, right? And the end is solving customer problems, as David said. For us, success will be when we have many, many happy joint customers that are getting benefit from our platform. To me, this is just the beginning of our relationship to help customers. The best is yet to come. I'm super excited, as I said, for many reasons, but specifically because we know there's a huge demand out there for this integrated solution between Red Hat and Nutanix, and we'll start delivering it to our customers. So we'll be working very closely with our customers to see how that adoption goes, and we want to delight them with our joint solution. That's our goal. >> Thank you. Well, David, you kind of alluded to it. Customers have been looking forward to this for quite some time, and a number of us have been thinking about this happening, and to me, the key is you're actually putting some real muscle behind it, as seen in the engineering resources. And you've got to have that type of commitment before you really go forward; otherwise, it's just kind of, yeah, a nice press release, a nice Barney deal, and this isn't that. So congratulations on figuring this out. Good luck. And we'll be really excited to watch your progress. Appreciate you guys coming to theCUBE. Okay. And thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (light, upbeat music)
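One way to picture the consistency described above: once OpenShift is certified and running on Nutanix AHV, day-to-day automation goes through the same Kubernetes APIs as any other OpenShift cluster. Below is a minimal sketch with the standard Kubernetes Python client; it assumes a kubeconfig exported from the cluster and is illustrative only, not part of the announced solution.

```python
from kubernetes import client, config

# Load credentials from a kubeconfig exported from the OpenShift cluster
# (assumed to be at the default ~/.kube/config location).
config.load_kube_config()

v1 = client.CoreV1Api()

# The node list looks the same whether the cluster sits on Nutanix AHV
# on-prem or on a public cloud provider.
for node in v1.list_node().items:
    roles = [
        label.split("/")[1]
        for label in node.metadata.labels
        if label.startswith("node-role.kubernetes.io/")
    ]
    print(node.metadata.name, roles or ["<none>"])
```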

Published Date : Jul 29 2021


LIVE Panel: "Easy CI With Docker"


 

>>Hey, welcome to the live panel. My name is Brett. I am your host, and indeed we are live. In fact, if you're curious about that, if you don't believe us, um, let's just show a little bit of the browser real quick to see. Yup. There you go. We're live. So, all right. So how this is going to work is I'm going to bring in some guests and, uh, in one second, and we're going to basically take your questions on the topic designer of the day, that continuous integration testing. Uh, thank you so much to my guests welcoming into the panel. I've got Carlos, Nico and Mandy. Hello everyone. >>Hello? All right, >>Let's go. Let's go around the room and all pretend we don't know each other and that the internet didn't read below the video who we are. Uh, hi, my name is Brett. I am a Docker captain, which means I'm supposed to know something about Docker. I'm coming from Virginia Beach. I'm streaming here from Virginia Beach, Virginia, and, uh, I make videos on the internet and courses on you to me, Carlos. Hey, >>Hey, what's up? I'm Carlos Nunez. I am a solutions architect, VMware. I do solution things with computers. It's fun. I live in Dallas when I'm moving to Houston in a month, which is where I'm currently streaming. I've been all over the Northeast this whole week. So, um, it's been fun and I'm excited to meet with all of you and talk about CIA and Docker. Sure. >>Yeah. Hey everyone. Uh, Nico, Khobar here. I'm a solution engineer at HashiCorp. Uh, I am streaming to you from, uh, the beautiful Austin, Texas. Uh, ignore, ignore the golden gate bridge here. This is from my old apartment in San Francisco. Uh, just, uh, you know, keeping that, to remember all the good days, um, that that lived at. But, uh, anyway, I work at Patrick Corp and I work on all things, automation, um, and cloud and dev ops. Um, and I'm excited to be here and Mandy, >>Hi. Yeah, Mandy Hubbard. I am streaming from Austin, Texas. I am, uh, currently a DX engineer at ship engine. Um, I've worked in QA and that's kind of where I got my, uh, my Docker experience and, um, uh, moving into DX to try and help developers better understand and use our products and be an advocate for them. >>Nice. Well, thank you all for joining me. Uh, I really appreciate you taking the time out of your busy schedule to be here. And so for those of you in chat, the reason we're doing this live, because it's always harder to do things live. The reason we're here is to answer a question. So we didn't come with a bunch of slides and demos or anything like that. We're here to talk amongst ourselves about ideas and really here for you. So we've, we obviously, this is about easy CII, so we're, we're going to try to keep the conversation around testing and continuous integration and all the things that that entails with containers. But we may, we may go down rabbit holes. We may go veer off and start talking about other things, and that's totally fine if it's in the realm of dev ops and containers and developer and ops workflows, like, Hey, it's, it's kinda game. >>And, uh, these people have a wide variety of expertise. They haven't done just testing, right? We, we live in a world where you all kind of have to wear many hats. So feel free to, um, ask what you think is on the top of your mind. And we'll do our best to answer. It may, might not be the best answer or the correct answer, but we're going to do our best. Um, well, let's get it start off. Uh, let's, let's get a couple of topics to start off with. 
Uh, the, the easy CI one was my, one of my three ideas. Cause it's one of the things that I'm most excited about, the innovation we're seeing around easier testing, faster testing, automated testing, uh, because as much as we've all been doing this stuff for, you know, 15 years, 20 years since the sort of Jenkins early days, um, it seems like it's still really hard and it's still a lot of work. >>So, um, let's go around the room real quick, and everybody can just kind of talk for a minute about your experience with testing and maybe some of your pain points, like what you don't like about our testing world. Um, and we can talk about some pains, cause I think that will lead us to kind of talk about what are the things we're seeing now that might be better ideas about how to do this. I know for me, uh, testing, obviously there's the code part, but just getting it automated, but mostly getting it in the hands of developers so that they can control their own testing. And don't have to go talk to a person to run that test again, or the mysterious Jenkins platform somewhere. I keep mentioning Jenkins cause it is still the dominant player out there. Um, so for me, I don't like it when I'm walking into a room and there's only one or two people that know how the testing works or know how to make the new tests go into the testing platform and stuff like that. So I'm always trying to free those things up so that any of the developers are enabled and empowered to do that stuff. So someone else, Carlos, anybody, um, >>Oh, I have a lot of opinions on that. Having been a QA engineer for most of my career. Um, the shift that we're seeing is everyone is dev ops and everyone is QA. The issue I see is no one asked developers if they wanted to be QA. Um, and so being the former QA on the team, when there's a problem, even though I'm a developer and we're all running QA, they always tend to come to one of the former QA engineers. And they're not really owning that responsibility and, um, and digging in. So that's kind of what I'm saying is that we're all expected to test now. And some people, well, some people don't know how. It's, uh, for me it was kind of an intuitive skill. It just kind of fit with my personality, but not knowing what to look for, not knowing what to automate, not even understanding how your API end points are used by your front end to know what to test when a change is made. It's really overwhelming for developers. And, um, we're going to need to streamline that and, and hold their hands a little bit until they get their feet wet with also being QA. >>Right. Right. So, um, uh, Carlos, >>Yeah, uh, testing is one of my favorite subjects to talk about when I'm pairing with developers. And a lot of it is because of what Mandy said, right? Like a lot of developers now who used to write a test and say, Hey, QA, go. Um, I wrote my unit tests. Now write the rest of the tests. Essentially. Now developers are expected to be able to understand how testing methodologies work, um, in their local environments, right? Like they're supposed to understand how to write an integration test, an end-to-end test, a component test. And of course, how to write unit tests that aren't just, you know, assert true is true, right? Like more comprehensive, um, more high touch unit tests, which include things like mocking and stubbing and spying and all that stuff.
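For readers following along at home: the split the panelists are describing, fast unit tests with mocks and stubs on every push plus component tests that need a real dependency, maps onto most CI systems in the same basic way. Below is a minimal sketch as a GitHub Actions workflow; the Node commands, the Postgres service, and the environment variable names are illustrative assumptions, not something the panel specified.

```yaml
# .github/workflows/test.yml -- illustrative sketch; adjust the commands to your stack
name: tests
on: [push, pull_request]

jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci          # hypothetical dependency install step
      - run: npm test        # hypothetical fast unit tests, mocks and stubs only

  component:
    runs-on: ubuntu-latest
    services:
      postgres:              # throwaway dependency for component tests
        image: postgres:14
        env:
          POSTGRES_PASSWORD: test
        ports: ["5432:5432"]
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm run test:component   # hypothetical script that talks to the service
        env:
          DATABASE_URL: postgres://postgres:test@localhost:5432/postgres
```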
And, you know, it's not so much getting those tests. Well, I've had a lot of challenges with developers getting those tests to run in Docker because of usually because of dependency hell, but, um, getting developers to understand how to write tests that matter and mean something. Um, it's, it's, it can be difficult, but it's also where I find a lot of the enjoyment of my work comes into play. So yeah. I mean, that's the difficulty I've seen around testing. Um, big subject though. Lots to talk about there. >>Yeah. We've got, we've already got so many questions coming in. You already got an hour's worth of stuff. So, uh, Nico 81st thoughts on that? >>Yeah, I think I definitely agree with, with other folks here on the panel, I think from a, um, the shift from a skillset perspective that's needed to adopt the new technologies, but I think from even from, uh, aside from the organizational, um, and kind of key responsibilities that, that the new developers have to kinda adapt to and, and kind of inherit now, um, there's also from a technical perspective as there's, you know, um, more developers are owning the full stack, including the infrastructure piece. So that adds a lot more to the plate in Tim's oaf, also testing that component that they were not even, uh, responsible for before. Um, and, um, also the second challenge that, you know, I'm seeing is that on, you know, the long list of added, um, uh, tooling and, you know, there's new tool every other day. Um, and, um, that kind of requires more customization to the testing, uh, that each individual team, um, any individual developer Y by extension has to learn. Uh, so the customization, uh, as well as the, kind of the scope that had, uh, you know, now in conferences, the infrastructure piece, um, uh, both of act to the, to the challenges that we're seeing right now for, um, for CGI and overall testing, um, uh, the developers are saying, uh, in, in the market today. >>Yeah. We've got a lot of questions, um, about all the, all the different parts of this. So, uh, let me just go straight to them. Cause that's why we're here is for the people, uh, a lot of people asking about your favorite tools and in one of this is one of the challenges with integration, right? Is, um, there is no, there are dominant players, but there, there is such a variety. I mean, every one of my customers seems like they're using a different workflow and a different set of tools. So, and Hey, we're all here to just talk about what we're, what we're using, uh, you know, whether your favorite tools. So like a lot of the repeated questions are, what are your favorite tools? Like if you could create it from scratch, uh, what would you use? Pierre's asking, you know, GitHub actions sounds like they're a fan of GitHub actions, uh, w you know, mentioning, pushing the ECR and Docker hub and, uh, using vs code pipeline, I guess there may be talking about Azure pipelines. Um, what, what's your preferred way? So, does anyone have any, uh, thoughts on that anyone want to throw out there? Their preferred pipeline of tooling? >>Well, I have to throw out mine. I might as Jenkins, um, like kind of a honorary cloud be at this point, having spoken a couple of times there, um, all of the plugins just make the functionality. I don't love the UI, but I love that it's been around so long. It has so much community support, and there are so many plugins so that if you want to do something, you don't have to write the code it's already been tested. 
Um, unfortunately I haven't been able to use Jenkins in, uh, since I joined ship engine, we, most of our, um, our, our monolithic core application is, is team city. It's a dotnet application and TeamCity plays really well with.net. Um, didn't love it, uh, Ms. Jenkins. And I'm just, we're just starting some new initiatives that are using GitHub actions, and I'm really excited to learn, to learn those. I think they have a lot of the same functionality that you're looking for, but, um, much more simplified in is right there and get hubs. So, um, the integration is a lot more seamless, but I do have to go on record that my favorite CICT tools Jenkins. >>All right. You heard it here first people. All right. Anyone else? You're muted? I'm muted. Carlin says muted. Oh, Carla says, guest has muted themselves to Carlos. You got to unmute. >>Yes. I did mute myself because I was typing a lot, trying to, you know, try to answer stuff in the chat. And there's a lot of really dark stuff in there. That's okay. Two more times today. So yeah, it's fine. Yeah, no problem. So totally. And it's the best way to start a play more. So I'm just going to go ahead and light it up. Um, for enterprise environments, I actually am a huge fan of Jenkins. Um, it's a tool that people really understand. Um, it has stood the test of time, right? I mean, people were using Hudson, but 15 years ago, maybe longer. And, you know, the way it works, hasn't really changed very much. I mean, Jenkins X is a little different, but, um, the UI and the way it works internally is pretty familiar to a lot of enterprise environments, which is great. >>And also in me, the plugin ecosystem is amazing. There's so many plugins for everything, and you can make your own if you know, Java groovy. I'm sure there's a perfect Kotlin in there, but I haven't tried myself, but it's really great. It's also really easy to write, um, CIS code, which is something I'm a big fan of. So Jenkins files have been, have worked really well for me. I, I know that I can get a little bit more complex as you start to build your own models and such, but, you know, for enterprise enterprise CIO CD, if you want, especially if you want to roll your own or own it yourself, um, Jenkins is the bellwether and for very good reason now for my personal projects. And I see a lot on the chat here, I think y'all, y'all been agreed with me get hub actions 100%, my favorite tool right now. >>Um, I love GitHub actions. It's, it's customizable, it's modular. There's a lot of plugins already. I started using getting that back maybe a week after when GA and there was no documentation or anything. And I still, it was still my favorite CIA tool even then. Um, and you know, the API is really great. There's a lot to love about GitHub actions and, um, and I, and I use it as much as I can from my personal project. So I still have a soft spot for Travis CAI. Um, you know, they got acquired and they're a little different now trying to see, I, I can't, I can't let it go. I just love it. But, um, yeah, I mean, when it comes to Seattle, those are my tools. So light me up in the comments I will respond. Yeah. >>I mean, I, I feel with you on the Travis, the, I think, cause I think that was my first time experiencing, you know, early days get hub open source and like a free CIA tool that I could describe. I think it was the ammo back then. I don't actually remember, but yeah, it was kind of an exciting time from my experience. There was like, oh, this is, this is just there as a service. 
And I could just use it. It's like GitHub, it's free for my open source stuff. And so it does have a soft spot in my heart too. So yeah. >>All right. We've got questions around, um, Cam, so I'm going to ask some questions. We don't have to have these answers because sometimes they're going to be specific, but I want to call them out because people in chat may have missed that question. And, you know, we have smart people in chat too. So there's probably someone that knows the answer to these things, if it's not us. Um, they're asking about building Docker images in Kubernetes, which to me is always a sore spot because Kubernetes does not build images by default. It's not meant for that out of the gate. And, uh, what is the best way to do this without having to use privileged containers, privileged containers just implying that, yeah, it probably has more privileges than a container in Kubernetes has by default. And that is a hard thing because, uh, I don't think Docker likes to do that out of the gate. So I don't know if anyone has an immediate answer to that. That's a pretty technical one, but if you know the answer to that in chat, call it out. >>Um, >>I had done this, uh, but I'm pretty sure I had to use a privileged, um, container and install the Docker daemon on the Kubernetes cluster. And I can't give you a better solution. Um, I've done the same. So, >>Yeah, uh, Chavonne asks, um, back to the Jenkins thing, what's the easiest way to integrate Docker into a Jenkins CI/CD pipeline. And that's one of the challenges I find with Jenkins, because I don't claim to be the expert on Jenkins, is there are so many plugins because it's such a huge ecosystem. Um, when you go searching for Docker, there's a lot that comes back, right. So I don't actually have a preferred way because every team I find uses it differently. Um, I don't know, do you know if there's a Jenkins preferred, a default plugin? I don't even know for Docker. Oh, go ahead. Yeah. Sorry, for Docker. And, sorry, Docker plugins for Jenkins. Uh, as someone's asking like the preferred or easy way to do that. Um, and I don't know the back end of Jenkins that well, so, >>Well, the new way that they're doing, uh, Docker builds with the pipeline, which is more declarative versus the Groovy. It's really simple, and their documentation is really good. They, um, they make it really easy to say, run this in this image. So you can pull down, you know, public images and add your own layers. Um, so I don't know the name of that plugin, uh, but I can certainly take a minute after this session and go and get that. Um, but if you really are overwhelmed by the plugins, you can just write your, you know, your shell command in Jenkins. You could just start by, you know, doing everything in bash, calling the Docker daemon directly, and then getting it working just to see that end to end, and then start browsing for plugins to see if you even want to use those. >>The plugins will allow more integration from end to end. Some of the things that you input might be available later on in the process without having to manage that yourself. But, you know, you don't have to use any of the plugins. You can literally just, you know, do a block where you write your shell command and get it working, and then decide if plugins are for you.
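On the Kubernetes image-build question above, the panel's hands-on answer was a privileged container running the Docker daemon. For reference, one commonly used alternative that the panel did not raise is Kaniko, which builds from a Dockerfile in userspace without a Docker daemon or privileged mode. A rough sketch as a Kubernetes Job follows; the git repository, registry destination, and secret name are placeholders, not anything the panel prescribed.

```yaml
# kaniko-build-job.yaml -- illustrative sketch; repo, registry, and secret names are placeholders
apiVersion: batch/v1
kind: Job
metadata:
  name: image-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args:
            - --dockerfile=Dockerfile
            - --context=git://github.com/example-org/example-app.git#refs/heads/main
            - --destination=registry.example.com/example-app:latest
          volumeMounts:
            - name: docker-config
              mountPath: /kaniko/.docker/
      volumes:
        - name: docker-config
          secret:
            secretName: registry-credentials   # docker-registry secret used for the push
            items:
              - key: .dockerconfigjson
                path: config.json
```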
Um, I think it's always really important to understand what is going on under the hood before you adopt the magic of a plugin, because, um, once you have a problem, if it's all a black box to you, it's going to be more difficult to troubleshoot. It's kind of like learning the git command line versus, like, GitKraken or something. Once you get in a bind, if you don't understand the underlying steps, it's really hard to get yourself out of a bind, versus if you understand what the plugin or the app is doing, then, um, you can get out of situations a lot easier. That's a good place. That's, that's where I'd start. >>Yeah. Thank you. Um, Camden asks, is it better to build test environment images on every commit in CI? So this is like one of those opinions where we're all gonna have some different, uh, takes: build images on every commit, leveraging the cache, or build them once outside the test pipeline. Um, what say you, people? >>Uh, well, I've seen both, and generally speaking, my preference is, um, I guess, it's a consultant answer, right? I think it depends on what you're trying to do, right. So if you have a lot of small changes that are being made and you're creating images for each of those commits, you're going to have a lot of images in your registry, right? And on top of that, if you're building those images, uh, through CI frequently, if you're using Docker Hub or something like that, you might run into rate limiting issues because of Docker's new rate limits that they put in place. Um, but that might be beneficial if being able to roll back between those small changes while you're testing is important to you. Uh, however, if all you care about is being able to use Docker images, um, or being able to correlate versions to your Docker images, or if you're the type of team that doesn't even use, uh, versions in your image tags, then I would think that that might be a little much. You might want to just have, in your CI, a stage that builds your Docker image and pushes it into your registry, done for particular branches instead of having to be done on every commit regardless of branch. But again, it really depends on the team. It really depends on what you're building. It really depends on your workflow. It can depend on a number of things, like the culture sometimes too. Yeah. Yeah. >>Um, I had two points here. You know, I've seen, you know, the pattern has been, with every, uh, commit, assuming that you have the right set of tests, you would benefit from actually seeing, um, the testing workflow go through and being able to detect any issue within the build or whatever you're trying to test against. But if you're just building without the appropriate set of tests, then you're just basically consuming, um, adding time, as well as all the image storage associated with it, without really reaping the benefit of, of this pattern. Uh, and the second point is, again, if you're going to end up doing a per-commit build, uh, I definitely recommend having some type of, uh, image purging, um, and garbage collection process to ensure that you're not just wasting, um, all the storage needed, and also, um, optimizing your build process, because that will end up being the most time-consuming, um, you know, within, within your pipeline.
So this is my 2 cents on this. >>Yeah, that's good stuff. I mean, those are both of those are conversations that could lead us into the rabbit hole for the rest of the day on storage management, uh, you know, CP CPU minutes for, uh, you know, your build stuff. I mean, if you're in any size team, more than one or two people, you immediately run into headaches with cost of CIA, because we have now the problem of tools, right? We have so many tools. We can have the CIS system burning CPU cycles all day, every day, if we really wanted to. And so you re very quickly, I think, especially if you're on every commit on every branch, like that gets you into a world of cost mitigation, and you probably are going to have to settle somewhere in the middle on, uh, between the budget, people that are saying you're spending way too much money on the CII platform, uh, because of all these CPU cycles, and then the developers who would love to have everything now, you know, as fast as possible and the biggest, biggest CPU's, and the biggest servers, and have the bills, because the bills can never go fast enough, right. >>There's no end to optimizing your build workflow. Um, we have another question on that. This is another topic that we'll all probably have different takes on is, uh, basically, uh, version tags, right? So on images, we, we have a very established workflow in get for how we make commits. We have commit shots. We have, uh, you know, we know get tags and there's all these things there. And then we go into images and it's just this whole new world that's opened up. Like there's no real consensus. Um, so what, what are your thoughts on the strategy for teams in their image tag? Again, another, another culture thing. Um, commander, >>I mean, I'm a fan of silver when we have no other option. Um, it's just clean and I like the timestamp, you know, exactly when it was built. Um, I don't really see any reason to use another, uh, there's just normal, incremental, um, you know, numbering, but I love the fact that you can pull any tag and know exactly when it was created. So I'm a big fan of bar, if you can make that work for your organization. >>Yep. People are mentioned that in chat, >>So I like as well. Uh, I'm a big fan of it. I think it's easy to be able to just be as easy to be able to signify what a major changes versus a minor change versus just a hot fix or, you know, some or some kind of a bad fix. The problem that I've found with having teams adopt San Bernardo becomes answering these questions and being able to really define what is a major change, what is a minor change? What is a patch, right? And this becomes a bit of an overhead or not so much of an overhead, but, uh, uh, uh, a large concern for teams who have never done versioning before, or they never been responsible for their own versioning. Um, in fact, you know, I'm running into that right now, uh, with, with a client that I'm working with, where a lot, I'm working with a lot of teams, helping them move their applications from a legacy production environment into a new one. >>And in doing so, uh, versioning comes up because Docker images, uh, have tags and usually the tax correlate to versions, but some teams over there, some teams that I'm working with are only maintaining a script and others are maintaining a fully fledged JAK, three tier application, you know, with lots of dependencies. 
So telling the script, telling the team that maintains a script, Hey, you know, you should use somber and you should start thinking about, you know, what's major, what's my number what's patch. That might be a lot for them. And for someone or a team like that, I might just suggest using commit shots as your versions until you figure that out, or maybe using, um, dates as your version, but for the more for the team, with the larger application, they probably already know the answers to those questions. In which case they're either already using Sember or they, um, or they may be using some other version of the strategy and might be in December, might suit them better. So, um, you're going to hear me say, it depends a lot, and I'm just going to say here, it depends. Cause it really does. Carlos. >>I think you hit on something interesting beyond just how to version, but, um, when to consider it a major release and who makes those decisions, and if you leave it to engineers to version, you're kind of pushing business decisions down the pipe. Um, I think when it's a minor or a major should be a business decision and someone else needs to make that call someone closer to the business should be making that call as to when we want to call it major. >>That's a really good point. And I add some, I actually agree. Um, I absolutely agree with that. And again, it really depends on the team that on the team and the scope of it, it depends on the scope that they're maintaining, right? And so it's a business application. Of course, you're going to have a product manager and you're going to have, you're going to have a product manager who's going to want to make that call because that version is going to be out in marketing. People are going to use it. They're going to refer to and support calls. They're going to need to make those decisions. Sember again, works really, really well for that. Um, but for a team that's maintaining the scripts, you know, I don't know, having them say, okay, you must tell me what a major version is. It's >>A lot, but >>If they want it to use some birds great too, which is why I think going back to what you originally said, Sember in the absence of other options. I think that's a good strategy. >>Yeah. There's a, there's a, um, catching up on chat. I'm not sure if I'm ever going to catch up, but there's a lot of people commenting on their favorite CII systems and it's, and it, it just goes to show for the, the testing and deployment community. Like how many tools there are out there, how many tools there are to support the tools that you're using. Like, uh, it can be a crazy wilderness. And I think that's, that's part of the art of it, uh, is that these things are allowing us to build our workflows to the team's culture. Um, and, uh, but I do think that, you know, getting into like maybe what we hope to be at what's next is I do hope that we get to, to try to figure out some of these harder problems of consistency. Uh, one of the things that led me to Docker at the beginning to begin with was the fact that it wa it created a consistent packaging solution for me to get my code, you know, off of, off of my site of my local system, really, and into the server. >>And that whole workflow would at least the thing that I was making at each step was going to be the same thing used. Right. And that, that was huge. Uh, it was also, it also took us a long time to get there. Right. 
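Pulling together the per-commit-build and tagging threads above: in GitHub Actions, one common pattern is to build on every pull request without pushing, and to push tags derived from git metadata (commit SHA, branch name, SemVer tags) only from the branches and tags you care about. The sketch below is hedged and illustrative; the registry, image name, and secret names are assumptions, and the choice of tag types is exactly the kind of team decision the panel describes.

```yaml
# .github/workflows/build.yml -- sketch only; registry, image name, and secrets are placeholders
name: build
on:
  pull_request:
  push:
    branches: [main]
    tags: ["v*"]

jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        if: github.event_name != 'pull_request'
        with:
          registry: registry.example.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      # generate tags from git metadata: commit SHA, branch name, and SemVer on v* tag pushes
      - id: meta
        uses: docker/metadata-action@v4
        with:
          images: registry.example.com/example-app
          tags: |
            type=sha
            type=ref,event=branch
            type=semver,pattern={{version}}
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}   # build PRs, push only real refs
          tags: ${{ steps.meta.outputs.tags }}
```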
We all had to, like Docker was one of those ones that decade kind of ideas of let's solidify the, enter, get the consensus of the community around this idea. And we, and it's not perfect. Uh, you know, the Docker Docker file is not the most perfect way to describe how to make your app, but it is there and we're all using it. And now I'm looking for that next piece, right. Then hopefully the next step in that, um, that where we can all arrive at a consensus so that once you hop teams, you know, okay. We all knew Docker. We now, now we're all starting to get to know the manifests, but then there's this big gap in the middle where it's like, it might be one of a dozen things. Um, you know, so >>Yeah, yeah. To that, to that, Brett, um, you know, uh, just maybe more of a shameless plug here and wanting to kind of talk about one of the things that I'm on. So excited, but I work, I work at Tasha Corp. I don't know anyone, or I don't know if many people have heard of, um, you know, we tend to focus a lot on workflows versus technologies, right. Because, you know, as you can see, even just looking at the chat, there's, you know, ton of opinions on the different tooling, right. And, uh, imagine having, you know, I'm working with clients that have 10,000 developers. So imagine taking the folks in the chat and being partnered with one organization or one company and having to make decisions on how to build software. Um, but there's no way you can conversion one or, or one way or one tool, uh, and that's where we're facing in the industry. >>So one of the things that, uh, I'm pretty excited about, and I don't know if it's getting as much traction as you know, we've been focused on it. This is way point, which is a project, an open source project. I believe we got at least, uh, last year, um, which is, it's more of, uh, it's, it is aim to address that really, uh, uh, Brad set on, you know, to come to tool to, uh, make it extremely easy and simple. And, you know, to describe how you want to build, uh, deploy or release your application, uh, in, in a consistent way, regardless of the tools. So similar to how you can think of Terraform and having that pluggability to say Terraform apply or plan against any cloud infrastructure, uh, without really having to know exactly the details of how to do it, uh, this is what wave one is doing. Um, and it can be applied with, you know, for the CIA, uh, framework. So, you know, task plugability into, uh, you know, circle CEI tests to Docker helm, uh, Kubernetes. So that's the, you know, it's, it's a hard problem to solve, but, um, I'm hopeful that that's the path that we're, you know, we'll, we'll eventually get to. So, um, hope, you know, you can, you can, uh, see some of the, you know, information, data on it, on, on HashiCorp site, but I mean, I'm personally excited about it. >>Yeah. Uh I'm to gonna have to check that out. And, um, I told you on my live show, man, we'll talk about it, but talk about it for a whole hour. Uh, so there's another question here around, uh, this, this is actually a little bit more detailed, but it is one that I think a lot of people deal with and I deal with a lot too, is essentially the question is from Cameron, uh, D essentially, do you use compose in your CIO or not Docker compose? Uh, because yes I do. Yeah. Cause it, it, it, it solves so many problems am and not every CGI can, I don't know, there's some problems with a CIO is trying to do it for me. 
So there are pros and cons and I feel like I'm still on the fence about it because I use it all the time, but also it's not perfect. It's not always meant for CIA. And CIA sometimes tries to do things for you, like starting things up before you start other parts and having that whole order, uh, ordering problem of things anyway. W thoughts and when have thoughts. >>Yes. I love compose. It's one of my favorite tools of all time. Um, and the reason why it's, because what I often find I'm working with teams trying to actually let me walk that back, because Jack on the chat asked a really interesting question about what, what, what the hardest thing about CIS for a lot of teams. And in my experience, the hardest thing is getting teams to build an app that is the same app as what's built in production. A lot of CGI does things that are totally different than what you would do in your local, in your local dev. And as a result of that, you get, you got this application that either doesn't work locally, or it does work, but it's a completely different animal than what you would get in production. Right? So what I've found in trying to get teams to bridge that gap by basically taking their CGI, shifting the CII left, I hate the shift left turn, but I'll use it. >>I'm shifting the CIO left to your local development is trying to say, okay, how do we build an app? How do we, how do we build mot dependencies of that app so that we can build so that we can test our app? How do we run tests, right? How do we build, how do we get test data? And what I found is that trying to get teams to do all this in Docker, which is normally a first for a lot of teams that I'm working with, trying to get them all to do all of this. And Docker means you're running Docker, build a lot running Docker, run a lot. You're running Docker, RM a lot. You ran a lot of Docker, disparate Docker commands. And then on top of that, trying to bridge all of those containers together into a single network can be challenging without compose. >>So I like using a, to be able to really easily categorize and compartmentalize a lot of the things that are going to be done in CII, like building a Docker image, running tests, which is you're, you're going to do it in CII anyway. So running tests, building the image, pushing it to the registry. Well, I wouldn't say pushing it to the registry, but doing all the things that you would do in local dev, but in the same network that you might have a mock database or a mock S3 instance or some of something else. Um, so it's just easy to take all those Docker compose commands and move them into your Yammel file using the hub actions or your dankest Bob using Jenkins, or what have you. Right. It's really, it's really portable that way, but it doesn't work for every team. You know, for example, if you're just a team that, you know, going back to my script example, if it's a really simple script that does one thing on a somewhat routine basis, then that might be a lot of overhead. Um, in that case, you know, you can get away with just Docker commands. It's not a big deal, but the way I looked at it is if I'm, if I'm building, if I build something that's similar to a make bile or rate file, or what have you, then I'm probably gonna want to use Docker compose. If I'm working with Docker, that's, that's a philosophy of values, right? >>So I'm also a fan of Docker compose. 
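As a concrete reference for the compose-in-CI approach Carlos describes here (and Mandy picks up next): a single compose file can define the app, its mock dependencies, and a test-runner service, so the same invocation, something like `docker compose run --rm tests`, works on a laptop and in the CI job. The sketch below is illustrative only; the service names, images, port, and test command are assumptions.

```yaml
# docker-compose.yml -- illustrative sketch; names, images, and commands are placeholders
services:
  db:
    image: postgres:14                  # throwaway dependency for tests
    environment:
      POSTGRES_PASSWORD: test
  app:
    build: .                            # same image definition used locally and in CI
    environment:
      DATABASE_URL: postgres://postgres:test@db:5432/postgres
    depends_on: [db]
  tests:
    build: .
    command: npm run test:integration   # hypothetical test entrypoint
    environment:
      APP_URL: http://app:3000
    depends_on: [app]
```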
And, um, you know, to your point, Carlos, the whole, I mean, I'm also a fan of shifting CEI lift and testing lift, but if you put all that logic in your CTI, um, it changes the L the local development experience from the CGI experience. Versus if you put everything in a compose file so that what you build locally is the same as what you build in CGI. Um, you're going to have a better experience because you're going to be testing something more, that's closer to what you're going to be releasing. And it's also very easy to look at a compose file and kind of, um, understand what the dependencies are and what's happening is very readable. And once you move that stuff to CGI, I think a lot of developers, you know, they're going to be intimidated by the CGI, um, whatever the scripting language is, it's going to be something they're going to have to wrap their head around. >>Um, but they're not gonna be able to use it locally. You're going to have to have another local solution. So I love the idea of a composed file use locally, um, especially if he can Mount the local workspace so that they can do real time development and see their changes in the exact same way as it's going to be built and tested in CGI. It gives developers a high level of confidence. And then, you know, you're less likely to have issues because of discrepancies between how it was built in your local test environment versus how it's built in NCI. And so Docker compose really lets you do all of that in a way that makes your solution more portable, portable between local dev and CGI and reduces the number of CGI cycles to get, you know, the test, the test data that you need. So that's why I like it for really, for local dev. >>It'll be interesting. Um, I don't know if you all were able to see the keynote, but there was a, there was a little bit, not a whole lot, but a little bit talk of the Docker, compose V two, which has now built into the Docker command line. And so now we're shifting from the Python built compose, which was a separate package. You could that one of the challenges was getting it into your CA solution because if you don't have PIP and you got down on the binary and the binary wasn't available for every platform and, uh, it was a PI installer. It gets a little nerdy into how that works, but, uh, and the team is now getting, be able to get unified with it. Now that it's in Golang and it's, and it's plugged right into the Docker command line, it hopefully will be easier to distribute, easier to, to use. >>And you won't have to necessarily have dependencies inside of where you're running it because there'll be a statically compiled binary. Um, so I've been playing with that, uh, this year. And so like training myself to do Docker going from Docker dash compose to Docker space, compose. It is a thing I I'm almost to the point of having to write a shell replacement. Yeah. Alias that thing. Um, but, um, I'm excited to see what that's going, cause there's already new features in it. And it, these built kit by default, like there's all these things. And I, I love build kit. We could make a whole session on build kit. Um, in fact there's actually, um, maybe going on right now, or right around this time, there is a session on, uh, from Solomon hikes, the seat, uh, co-founder of Docker, former CTO, uh, on build kit using, uh, using some other tool on top of build kit or whatever. >>So that, that would be interesting for those of you that are not watching that one. Cause you're here, uh, to do a check that one out later. 
Um, all right. So another good question was caching. So another one, another area where there is no wrong answers probably, and everyone has a different story. So the question is, what are your thoughts on CII build caching? There's often a debate between security. This is from Quentin. Thank you for this great question. There's often a debate between security reproducibility and build speeds. I haven't found a good answer so far. I will just throw my hat in the ring and say that the more times you want to build, like if you're trying to build every commit or every commit, if you're building many times a day, the more caching you need. So like the more times you're building, the more caching you're gonna likely want. And in most cases caching doesn't bite you in the butt, but that could be, yeah, we, can we get the bit about that? So, yeah. Yeah. >>I'm going to quote Carlos again and say, it depends on, on, you know, how you're talking, you know, what you're trying to build and I'm quoting your colors. Um, yeah, it's, it's got, it's gonna depend because, you know, there are some instances where you definitely want to use, you know, depends on the frequency that you're building and how you're building. Um, it's you would want to actually take advantage of cashing functionalities, um, for the build, uh, itself. Um, but if, um, you know, as you mentioned, there could be some instances where you would want to disable, um, any caching because you actually want to either pull a new packages or, um, you know, there could be some security, um, uh, disadvantages related to security aspects that would, you know, you know, using a cache version of, uh, image layer, for example, could be a problem. And you, you know, if you have a fleet of build, uh, engines, you don't have a good grasp of where they're being cashed. We would have to, um, disable caching in that, in that, um, in those instances. So it, it would depend. >>Yeah, it's, it's funny you have that problem on both sides of cashing. Like there are things that, especially in Docker world, they will cash automatically. And, and then, and then you maybe don't realize that some of that caching could be bad. It's, it's actually using old, uh, old assets, old artifacts, and then there's times where you would expect it to cash, that it doesn't cash. And then you have to do something extra to enable that caching, especially when you're dealing with that cluster of, of CIS servers. Right. And the cloud, the whole clustering problem with caching is even more complex, but yeah, >>But that's, that's when, >>Uh, you know, ever since I asked you to start using build kits and able to build kit, you know, between it's it's it's reader of Boston in, in detecting word, you know, where in, in the bill process needs to cash, as well as, uh, the, the, um, you know, the process. I don't think I've seen any other, uh, approach there that comes close to how efficient, uh, that process can become how much time it can actually save. Uh, but again, I think, I think that's, for me that had been my default approach, unless I actually need something that I would intentionally to disable caching for that purpose, but the benefits, at least for me, the benefits of, um, how bill kit actually been processing my bills, um, from the builds as well as, you know, using the cash up until, you know, how it detects the, the difference in, in, in the assets within the Docker file had been, um, you know, uh, pretty, you know, outweigh the disadvantages that it brings in. 
So it, you know, take it each case by case. And based on that, determine if you want to use it, but definitely recommend those enabling >>In the absence of a reason not to, um, I definitely think that it's a good approach in terms of speed. Um, yeah, I say you cash until you have a good reason not to personally >>Catch by default. There you go. I think you catch by default. Yeah. Yeah. And, uh, the trick is, well, one, it's not always enabled by default, especially when you're talking about cross server. So that's a, that's a complexity for your SIS admins, or if you're on the cloud, you know, it's usually just an option. Um, I think it also is this, this veers into a little bit of, uh, the more you cash the in a lot of cases with Docker, like the, from like, if you're from images and checked every single time, if you're not pinning every single thing, if you're not painting your app version, you're at your MPN versions to the exact lock file definition. Like there's a lot of these things where I'm I get, I get sort of, I get very grouchy with teams that sort of let it, just let it all be like, yeah, we'll just build two images and they're totally going to have different dependencies because someone happened to update that thing and after whatever or MPM or, or, and so I get grouchy about that, cause I want to lock it all down, but I also know that that's going to create administrative burden. >>Like the team is now going to have to manage versions in a very much more granular way. Like, do we need to version two? Do we need to care about curl? You know, all that stuff. Um, so that's, that's kind of tricky, but when you get to, when you get to certain version problems, uh, sorry, uh, cashing problems, you, you, you don't want those set those caches to happen because it, if you're from image changes and you're not constantly checking for a new image, and if you're not pinning that V that version, then now you, you don't know whether you're getting the latest version of Davion or whatever. Um, so I think that there's, there's an art form to the more you pen, the less you have, the less, you have to be worried about things changing, but the more you pen, the, uh, all your versions of everything all the way down the stack, the more administrative stuff, because you're gonna have to manually change every one of those. >>So I think it's a balancing act for teams. And as you mature, I to find teams, they tend to pin more until they get to a point of being more comfortable with their testing. So the other side of this argument is if you trust your testing, then you, and you have better testing to me, the less likely to the subtle little differences in versions have to be penned because you can get away with those minor or patch level version changes. If you're thoroughly testing your app, because you're trusting your testing. And this gets us into a whole nother rant, but, uh, yeah, but talking >>About penny versions, if you've got a lot of dependencies isn't that when you would want to use the cash the most and not have to rebuild all those layers. Yeah. >>But if you're not, but if you're not painting to the exact patch version and you are caching, then you're not technically getting the latest versions because it's not checking for all the time. It's a weird, there's a lot of this subtle nuance that people don't realize until it's a problem. 
And that's part of the, the tricky part of allow this stuff, is it, sometimes the Docker can be almost so much magic out of the box that you, you, you get this all and it all works. And then day two happens and you built it a second time and you've got a new version of open SSL in there and suddenly it doesn't work. Um, so anyway, uh, that was a great question. I've done the question on this, on, uh, from heavy. What do you put, where do you put testing in your pipeline? Like, so testing the code cause there's lots of types of testing, uh, because this pipeline gets longer and longer and Docker building images as part of it. And so he says, um, before staging or after staging, but before production, where do you put it? >>Oh man. Okay. So, um, my, my main thought on this is, and of course this is kind of religious flame bait, so sure. You know, people are going to go into the compensation wrong. Carlos, the boy is how I like to think about it. So pretty much in every stage or every environment that you're going to be deploying your app into, or that your application is going to touch. My idea is that there should be a build of a Docker image that has all your applications coded in, along with its dependencies, there's testing that tests your application, and then there's a deployment that happens into whatever infrastructure there is. Right. So the testing, they can get tricky though. And the type of testing you do, I think depends on the environment that you're in. So if you're, let's say for example, your team and you have, you have a main branch and then you have feature branches that merged into the main branch. >>You don't have like a pre-production branch or anything like that. So in those feature branches, whenever I'm doing CGI that way, I know when I freak, when I cut my poll request, that I'm going to merge into main and everything's going to work in my feature branches, I'm going to want to probably just run unit tests and maybe some component tests, which really, which are just, you know, testing that your app can talk to another component or another part, another dependency, like maybe a database doing tests like that, that don't take a lot of time that are fascinating and right. A lot of would be done at the beach branch level and in my opinion, but when you're going to merge that beach branch into main, as part of a release in that activity, you're going to want to be able to do an integration tasks, to make sure that your app can actually talk to all the other dependencies that it talked to. >>You're going to want to do an end to end test or a smoke test, just to make sure that, you know, someone that actually touches the application, if it's like a website can actually use the website as intended and it meets the business cases and all that, and you might even have testing like performance testing, low performance load testing, or security testing, compliance testing that would want to happen in my opinion, when you're about to go into production with a release, because those are gonna take a long time. Those are very expensive. You're going to have to cut new infrastructure, run those tests, and it can become quite arduous. And you're not going to want to run those all the time. You'll have the resources, uh, builds will be slower. Uh, release will be slower. It will just become a mess. So I would want to save those for when I'm about to go into production. 
Instead of doing those every time I make a commit or every time I'm merging a feature ranch into a non main branch, that's the way I look at it, but everything does a different, um, there's other philosophies around it. Yeah. >>Well, I don't disagree with your build test deploy. I think if you're going to deploy the code, it needs to be tested. Um, at some level, I mean less the same. You've got, I hate the term smoke tests, cause it gives a false sense of security, but you have some mental minimum minimal amount of tests. And I would expect the developer on the feature branch to add new tests that tested that feature. And that would be part of the PR why those tests would need to pass before you can merge it, merge it to master. So I agree that there are tests that you, you want to run at different stages, but the earlier you can run the test before going to production. Um, the fewer issues you have, the easier it is to troubleshoot it. And I kind of agree with what you said, Carlos, about the longer running tests like performance tests and things like that, waiting to the end. >>The only problem is when you wait until the end to run those performance tests, you kind of end up deploying with whatever performance you have. It's, it's almost just an information gathering. So if you don't run your performance test early on, um, and I don't want to go down a rabbit hole, but performance tests can be really useless if you don't have a goal where it's just information gap, uh, this is, this is the performance. Well, what did you expect it to be? Is it good? Is it bad? They can get really nebulous. So if performance is really important, um, you you're gonna need to come up with some expectations, preferably, you know, set up the business level, like what our SLA is, what our response times and have something to shoot for. And then before you're getting to production. If you have targets, you can test before staging and you can tweak the code before staging and move that performance initiative. Sorry, Carlos, a little to the left. Um, but if you don't have a performance targets, then it's just a check box. So those are my thoughts. I like to test before every deployment. Right? >>Yeah. And you know what, I'm glad that you, I'm glad that you brought, I'm glad that you brought up Escalades and performance because, and you know, the definition of performance says to me, because one of the things that I've seen when I work with teams is that oftentimes another team runs a P and L tests and they ended, and the development team doesn't really have too much insight into what's going on there. And usually when I go to the performance team and say, Hey, how do you run your performance test? It's usually just a generic solution for every single application that they support, which may or may not be applicable to the application team that I'm working with specifically. So I think it's a good, I'm not going to dig into it. I'm not going to dig into the rabbit hole SRE, but it is a good bridge into SRE when you start trying to define what does reliability mean, right? >>Because the reason why you test performance, it's test reliability to make sure that when you cut that release, that customers would go to your site or use your application. Aren't going to see regressions in performance and are not going to either go to another website or, you know, lodge in SLA violation or something like that. Um, it does, it does bridge really well with defining reliability and what SRE means. 
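The build-test-deploy split the panel lands on above, cheap tests on every pull request, integration and end-to-end tests on merge to main, and the expensive performance or compliance suites gated to a release, can be expressed as conditional CI jobs. A sketch follows; the `make` targets and the branch and tag conventions are placeholder assumptions, not the panel's prescription.

```yaml
# .github/workflows/pipeline.yml -- stage-gating sketch; job contents are placeholders
name: pipeline
on:
  pull_request:
  push:
    branches: [main]
    tags: ["v*"]

jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make test-unit            # hypothetical fast tests, run on every PR and push

  integration:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    needs: unit
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make test-integration     # hypothetical tests that need real dependencies

  performance:
    if: startsWith(github.ref, 'refs/tags/v')
    needs: unit
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make test-performance     # hypothetical long, expensive, release-gated suite
```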
And when you have, when you start talking about that, that's when you started talking about how often do I run? How often do I test my reliability, the reliability of my application, right? Like, do I have nightly tasks in CGI that ensure that my main branch or, you know, some important branch I does not mean is meeting SLA is meeting SLR. So service level objectives, um, or, you know, do I run tasks that ensure that my SLA is being met in production? >>Like whenever, like do I use, do I do things like game days where I test, Hey, if I turn something off or, you know, if I deploy this small broken code to production and like what happens to my performance? What happens to my security and compliance? Um, you can, that you can go really deep into and take creating, um, into creating really robust tests that cover a lot of different domains. But I liked just using build test deploy is the overall answer to that because I find that you're going to have to build your application first. You're going to have to test it out there and build it, and then you're going to want to deploy it after you test it. And that order generally ensures that you're releasing software. That works. >>Right. Right. Um, I was going to ask one last question. Um, it's going to have to be like a sentence answer though, for each one of you. Uh, this is, uh, do you lint? And if you lint, do you lent all the things, if you do, do you fail the linters during your testing? Yes or no? I think it's going to depend on the culture. I really do. Sorry about it. If we >>Have a, you know, a hook, uh, you know, on the get commit, then theoretically the developer can't get code there without running Melinta anyway, >>So, right, right. True. Anyone else? Anyone thoughts on that? Linting >>Nice. I saw an additional question online thing. And in the chat, if you would introduce it in a multi-stage build, um, you know, I was wondering also what others think about that, like typically I've seen, you know, with multi-stage it's the most common use case is just to produce the final, like to minimize the, the, the, the, the, the image size and produce a final, you know, thin, uh, layout or thin, uh, image. Uh, so if it's not for that, like, I, I don't, I haven't seen a lot of, you know, um, teams or individuals who are actually within a multi-stage build. There's nothing really against that, but they think the number one purpose of doing multi-stage had been just producing the minimalist image. Um, so just wanted to kind of combine those two answers in one, uh, for sure. >>Yeah, yeah, sure. Um, and with that, um, thank you all for the great questions. We are going to have to wrap this up and we could go for another hour if we all had the time. And if Dr. Khan was a 24 hour long event and it didn't sadly, it's not. So we've got to make room for the next live panel, which will be Peter coming on and talking about security with some developer ex security experts. And I wanted to thank again, thank you all three of you for being here real quick, go around the room. Um, uh, where can people reach out to you? I am, uh, at Bret Fisher on Twitter. You can find me there. Carlos. >>I'm at dev Mandy with a Y D E N D Y that's me, um, >>Easiest name ever on Twitter, Carlos and DFW on LinkedIn. And I also have a LinkedIn learning course. So if you check me out on my LinkedIn learning, >>Yeah. I'm at Nicola Quebec. Um, one word, I'll put it in the chat as well on, on LinkedIn, as well as, uh, uh, as well as Twitter. Thanks for having us, Brett. Yeah. 
Thanks for being here. >>And you all stay around. If you're in the room with us chatting and you want to see the next live panel, you'll have to go back to the beginning and find the next one, because this one will end, but we'll still be in chat for a few minutes. I think the chat keeps going; I don't actually know, I haven't tried it yet, so we'll find out here in a minute. But thank you all for being here. I will be back a little bit later, but coming up next on the live stream is Peter with security. Ciao. Bye.

Published Date : May 28 2021



DockerCon2021 Keynote


 

>>Individuals create developers, translate ideas to code, to create great applications and great applications. Touch everyone. A Docker. We know that collaboration is key to your innovation sharing ideas, working together. Launching the most secure applications. Docker is with you wherever your team innovates, whether it be robots or autonomous cars, we're doing research to save lives during a pandemic, revolutionizing, how to buy and sell goods online, or even going into the unknown frontiers of space. Docker is launching innovation everywhere. Join us on the journey to build, share, run the future. >>Hello and welcome to Docker con 2021. We're incredibly excited to have more than 80,000 of you join us today from all over the world. As it was last year, this year at DockerCon is 100% virtual and 100% free. So as to enable as many community members as possible to join us now, 100%. Virtual is also an acknowledgement of the continuing global pandemic in particular, the ongoing tragedies in India and Brazil, the Docker community is a global one. And on behalf of all Dr. Khan attendees, we are donating $10,000 to UNICEF support efforts to fight the virus in those countries. Now, even in those regions of the world where the pandemic is being brought under control, virtual first is the new normal. It's been a challenging transition. This includes our team here at Docker. And we know from talking with many of you that you and your developer teams are challenged by this as well. So to help application development teams better collaborate and ship faster, we've been working on some powerful new features and we thought it would be fun to start off with a demo of those. How about it? Want to have a look? All right. Then no further delay. I'd like to introduce Youi Cal and Ben, gosh, over to you and Ben >>Morning, Ben, thanks for jumping on real quick. >>Have you seen the email from Scott? The one about updates and the docs landing page Smith, the doc combat and more prominence. >>Yeah. I've got something working on my local machine. I haven't committed anything yet. I was thinking we could try, um, that new Docker dev environments feature. >>Yeah, that's cool. So if you hit the share button, what I should do is it will take all of your code and the dependencies and the image you're basing it on and wrap that up as one image for me. And I can then just monitor all my machines that have been one click, like, and then have it side by side, along with the changes I've been looking at as well, because I was also having a bit of a look and then I can really see how it differs to what I'm doing. Maybe I can combine it to do the best of both worlds. >>Sounds good. Uh, let me get that over to you, >>Wilson. Yeah. If you pay with the image name, I'll get that started up. >>All right. Sen send it over >>Cheesy. Okay, great. Let's have a quick look at what you he was doing then. So I've been messing around similar to do with the batter. I've got movie at the top here and I think it looks pretty cool. Let's just grab that image from you. Pick out that started on a dev environment. What this is doing. It's just going to grab the image down, which you can take all of the code, the dependencies only get brunches working on and I'll get that opened up in my idea. Ready to use. It's a here close. We can see our environment as my Molly image, just coming down there and I've got my new idea. >>We'll load this up and it'll just connect to my dev environment. There we go. It's connected to the container. 
So we're working all in the container here and now give it a moment. What we'll do is we'll see what changes you've been making as well on the code. So it's like she's been working on a landing page as well, and it looks like she's been changing the banner as well. So let's get this running. Let's see what she's actually doing and how it looks. We'll set up our checklist and then we'll see how that works. >>Great. So that's now rolling. So let's just have a look at what you use doing what changes she had made. Compare those to mine just jumped back into my dev container UI, see that I've got both of those running side by side with my changes and news changes. Okay. So she's put Molly up there rather than mobi or somebody had the same idea. So I think in a way I can make us both happy. So if we just jumped back into what we'll do, just add Molly and Moby and here I'll save that. And what we can see is, cause I'm just working within the container rather than having to do sort of rebuild of everything or serve, or just reload my content. No, that's straight the page. So what I can then do is I can come up with my browser here. Once that's all refreshed, refresh the page once hopefully, maybe twice, we should then be able to see your refresh it or should be able to see that we get Malia mobi come up. So there we go, got Molly mobi. So what we'll do now is we'll describe that state. It sends us our image and then we'll just create one of those to share with URI or share. And we'll get a link for that. I guess we'll send that back over to you. >>So I've had a look at what you were doing and I'm actually going to change. I think that might work for both of us. I wondered if you could take a look at it. If I send it over. >>Sounds good. Let me grab the link. >>Yeah, it's a dev environment link again. So if you just open that back in the doc dashboard, it should be able to open up the code that I've changed and then just run it in the same way you normally do. And that shouldn't interrupt what you're already working on because there'll be able to run side by side with your other brunch. You already got, >>Got it. Got it. Loading here. Well, that's great. It's Molly and movie together. I love it. I think we should ship it. >>Awesome. I guess it's chip it and get on with the rest of.com. Wasn't that cool. Thank you Joey. Thanks Ben. Everyone we'll have more of this later in the keynote. So stay tuned. Let's say earlier, we've all been challenged by this past year, whether the COVID pandemic, the complete evaporation of customer demand in many industries, unemployment or business bankruptcies, we all been touched in some way. And yet, even to miss these tragedies last year, we saw multiple sources of hope and inspiration. For example, in response to COVID we saw global communities, including the tech community rapidly innovate solutions for analyzing the spread of the virus, sequencing its genes and visualizing infection rates. In fact, if all in teams collaborating on solutions for COVID have created more than 1,400 publicly shareable images on Docker hub. As another example, we all witnessed the historic landing and exploration of Mars by the perseverance Rover and its ingenuity drone. >>Now what's common in these examples, these innovative and ambitious accomplishments were made possible not by any single individual, but by teams of individuals collaborating together. 
The power of teams is why we've made development teams central to Docker's mission to build tools and content development teams love to help them get their ideas from code to cloud as quickly as possible. One of the frictions we've seen that can slow down to them in teams is that the path from code to cloud can be a confusing one, riddle with multiple point products, tools, and images that need to be integrated and maintained an automated pipeline in order for teams to be productive. That's why a year and a half ago we refocused Docker on helping development teams make sense of all this specifically, our goal is to provide development teams with the trusted content, the sharing capabilities and the pipeline integrations with best of breed third-party tools to help teams ship faster in short, to provide a collaborative application development platform. >>Everything a team needs to build. Sharon run create applications. Now, as I noted earlier, it's been a challenging year for everyone on our planet and has been similar for us here at Docker. Our team had to adapt to working from home local lockdowns caused by the pandemic and other challenges. And despite all this together with our community and ecosystem partners, we accomplished many exciting milestones. For example, in open source together with the community and our partners, we open sourced or made major contributions to many projects, including OCI distribution and the composed plugins building on these open source projects. We had powerful new capabilities to the Docker product, both free and subscription. For example, support for WSL two and apple, Silicon and Docker, desktop and vulnerability scanning audit logs and image management and Docker hub. >>And finally delivering an easy to use well-integrated development experience with best of breed tools and content is only possible through close collaboration with our ecosystem partners. For example, this last year we had over 100 commercialized fees, join our Docker verified publisher program and over 200 open source projects, join our Docker sponsored open source program. As a result of these efforts, we've seen some exciting growth in the Docker community in the 12 months since last year's Docker con for example, the number of registered developers grew 80% to over 8 million. These developers created many new images increasing the total by 56% to almost 11 million. And the images in all these repositories were pulled by more than 13 million monthly active IP addresses totaling 13 billion pulls a month. Now while the growth is exciting by Docker, we're even more excited about the stories we hear from you and your development teams about how you're using Docker and its impact on your businesses. For example, cancer researchers and their bioinformatics development team at the Washington university school of medicine needed a way to quickly analyze their clinical trial results and then share the models, the data and the analysis with other researchers they use Docker because it gives them the ease of use choice of pipeline tools and speed of sharing so critical to their research. And most importantly to the lives of their patients stay tuned for another powerful customer story later in the keynote from Matt fall, VP of engineering at Oracle insights. >>So with this last year behind us, what's next for Docker, but challenge you this last year of force changes in how development teams work, but we felt for years to come. 
And what we've learned in our discussions with you will have long lasting impact on our product roadmap. One of the biggest takeaways from those discussions that you and your development team want to be quicker to adapt, to changes in your environment so you can ship faster. So what is DACA doing to help with this first trusted content to own the teams that can focus their energies on what is unique to their businesses and spend as little time as possible on undifferentiated work are able to adapt more quickly and ship faster in order to do so. They need to be able to trust other components that make up their app together with our partners. >>Docker is doubling down and providing development teams with trusted content and the tools they need to use it in their applications. Second, remote collaboration on a development team, asking a coworker to take a look at your code used to be as easy as swiveling their chair around, but given what's happened in the last year, that's no longer the case. So as you even been hinted in the demo at the beginning, you'll see us deliver more capabilities for remote collaboration within a development team. And we're enabling development team to quickly adapt to any team configuration all on prem hybrid, all work from home, helping them remain productive and focused on shipping third ecosystem integrations, those development teams that can quickly take advantage of innovations throughout the ecosystem. Instead of getting locked into a single monolithic pipeline, there'll be the ones able to deliver amps, which impact their businesses faster. >>So together with our ecosystem partners, we are investing in more integrations with best of breed tools, right? Integrated automated app pipelines. Furthermore, we'll be writing more public API APIs and SDKs to enable ecosystem partners and development teams to roll their own integrations. We'll be sharing more details about remote collaboration and ecosystem integrations. Later in the keynote, I'd like to take a moment to share with Docker and our partners are doing for trusted content, providing development teams, access to content. They can trust, allows them to focus their coding efforts on what's unique and differentiated to that end Docker and our partners are bringing more and more trusted content to Docker hub Docker official images are 160 images of popular upstream open source projects that serve as foundational building blocks for any application. These include operating systems, programming, languages, databases, and more. Furthermore, these are updated patch scan and certified frequently. So I said, no image is older than 30 days. >>Docker verified publisher images are published by more than 100 commercialized feeds. The image Rebos are explicitly designated verify. So the developers searching for components for their app know that the ISV is actively maintaining the image. Docker sponsored open source projects announced late last year features images for more than 200 open source communities. Docker sponsors these communities through providing free storage and networking resources and offering their community members unrestricted access repos for businesses allow businesses to update and share their apps privately within their organizations using role-based access control and user authentication. No, and finally, public repos for communities enable community projects to be freely shared with anonymous and authenticated users alike. 
>>And for all these different types of content, we provide services for both development teams and ISP, for example, vulnerability scanning and digital signing for enhanced security search and filtering for discoverability packaging and updating services and analytics about how these products are being used. All this trusted content, we make available to develop teams for them directly to discover poll and integrate into their applications. Our goal is to meet development teams where they live. So for those organizations that prefer to manage their internal distribution of trusted content, we've collaborated with leading container registry partners. We announced our partnership with J frog late last year. And today we're very pleased to announce our partnerships with Amazon and Miranda's for providing an integrated seamless experience for joint for our joint customers. Lastly, the container images themselves and this end to end flow are built on open industry standards, which provided all the teams with flexibility and choice trusted content enables development teams to rapidly build. >>As I let them focus on their unique differentiated features and use trusted building blocks for the rest. We'll be talking more about trusted content as well as remote collaboration and ecosystem integrations later in the keynote. Now ecosystem partners are not only integral to the Docker experience for development teams. They're also integral to a great DockerCon experience, but please join me in thanking our Dr. Kent on sponsors and checking out their talks throughout the day. I also want to thank some others first up Docker team. Like all of you this last year has been extremely challenging for us, but the Docker team rose to the challenge and worked together to continue shipping great product, the Docker community of captains, community leaders, and contributors with your welcoming newcomers, enthusiasm for Docker and open exchanges of best practices and ideas talker, wouldn't be Docker without you. And finally, our development team customers. >>You trust us to help you build apps. Your businesses rely on. We don't take that trust for granted. Thank you. In closing, we often hear about the tenant's developer capable of great individual feeds that can transform project. But I wonder if we, as an industry have perhaps gotten this wrong by putting so much emphasis on weight, on the individual as discussed at the beginning, great accomplishments like innovative responses to COVID-19 like landing on Mars are more often the results of individuals collaborating together as a team, which is why our mission here at Docker is delivered tools and content developers love to help their team succeed and become 10 X teams. Thanks again for joining us, we look forward to having a great DockerCon with you today, as well as a great year ahead of us. Thanks and be well. >>Hi, I'm Dana Lawson, VP of engineering here at get hub. And my job is to enable this rich interconnected community of builders and makers to build even more and hopefully have a great time doing it in order to enable the best platform for developers, which I know is something we are all passionate about. We need to partner across the ecosystem to ensure that developers can have a great experience across get hub and all the tools that they want to use. No matter what they are. My team works to build the tools and relationships to make that possible. I am so excited to join Scott on this virtual stage to talk about increasing developer velocity. 
So let's dive in now, I know this may be hard for some of you to believe, but as a former CIS admin, some 21 years ago, working on sense spark workstations, we've come such a long way for random scripts and desperate systems that we've stitched together to this whole inclusive developer workflow experience being a CIS admin. >>Then you were just one piece of the siloed experience, but I didn't want to just push code to production. So I created scripts that did it for me. I taught myself how to code. I was the model lazy CIS admin that got dangerous and having pushed a little too far. I realized that working in production and building features is really a team sport that we had the opportunity, all of us to be customer obsessed today. As developers, we can go beyond the traditional dev ops mindset. We can really focus on adding value to the customer experience by ensuring that we have work that contributes to increasing uptime via and SLS all while being agile and productive. We get there. When we move from a pass the Baton system to now having an interconnected developer workflow that increases velocity in every part of the cycle, we get to work better and smarter. >>And honestly, in a way that is so much more enjoyable because we automate away all the mundane and manual and boring tasks. So we get to focus on what really matters shipping, the things that humans get to use and love. Docker has been a big part of enabling this transformation. 10, 20 years ago, we had Tomcat containers, which are not Docker containers. And for y'all hearing this the first time go Google it. But that was the way we built our applications. We had to segment them on the server and give them resources. Today. We have Docker containers, these little mini Oasys and Docker images. You can do it multiple times in an orchestrated manner with the power of actions enabled and Docker. It's just so incredible what you can do. And by the way, I'm showing you actions in Docker, which I hope you use because both are great and free for open source. >>But the key takeaway is really the workflow and the automation, which you certainly can do with other tools. Okay, I'm going to show you just how easy this is, because believe me, if this is something I can learn and do anybody out there can, and in this demo, I'll show you about the basic components needed to create and use a package, Docker container actions. And like I said, you won't believe how awesome the combination of Docker and actions is because you can enable your workflow to do no matter what you're trying to do in this super baby example. We're so small. You could take like 10 seconds. Like I am here creating an action due to a simple task, like pushing a message to your logs. And the cool thing is you can use it on any the bit on this one. Like I said, we're going to use push. >>You can do, uh, even to order a pizza every time you roll into production, if you wanted, but at get hub, that'd be a lot of pizzas. And the funny thing is somebody out there is actually tried this and written that action. If you haven't used Docker and actions together, check out the docs on either get hub or Docker to get you started. And a huge shout out to all those doc writers out there. I built this demo today using those instructions. And if I can do it, I know you can too, but enough yapping let's get started to save some time. And since a lot of us are Docker and get hub nerds, I've already created a repo with a Docker file. So we're going to skip that step. Next. 
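The Dockerfile the speaker skips over would, for a container action like this one, usually look something like the sketch below. This is an assumption about the demo repository rather than a copy of it, and the entrypoint script name is a placeholder.

# Minimal container-action image: a small base plus an entrypoint script
# that receives the action's inputs as arguments.
FROM alpine:3.19
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]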
I'm going to create the action's YAML file. If you're new to YAML and Actions: the metadata defines the important log info I want to capture, and the input and output parameters to pass to the Docker container. GitHub builds an image from your Dockerfile and runs the commands in a new container. 
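For reference, the metadata file for a Docker container action generally takes the shape below. This is a hedged sketch of what the demo's file might contain; the action name, the single message input, and its default value are invented for illustration.

# action.yml for a Docker container action (illustrative).
name: 'Log important stuff'
description: 'Writes a message to a log file on every push'
inputs:
  message:
    description: 'Text to append to the log'
    required: true
    default: 'Hello from the action'
runs:
  using: 'docker'
  image: 'Dockerfile'   # GitHub builds this image when the workflow runs
  args:
    - ${{ inputs.message }}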
So staying in flow is so critical to developer productivity and as a developer, it just feels good to be cranking away at something with deep focus. I certainly know that I love that feeling intuitive collaboration and automation features we built in to get hub help developer, Sam flow, allowing you and your team to do so much more, to bring the benefits of automation into perspective in our annual October's report by Dr. Nicole, Forsgren. One of my buddies here at get hub, took a look at the developer productivity in the stork year. You know what we found? >>We found that public GitHub repositories that use the Automational pull requests, merge those pull requests. 1.2 times faster. And the number of pooled merged pull requests increased by 1.3 times, that is 34% more poor requests merged. And other words, automation can con can dramatically increase, but the speed and quantity of work completed in any role, just like an open source development, you'll work more efficiently with greater impact when you invest the bulk of your time in the work that adds the most value and eliminate or outsource the rest because you don't need to do it, make the machines by elaborate by leveraging automation in their workflows teams, minimize manual work and reclaim that time for innovation and maintain that state of flow with development and collaboration. More importantly, their work is more enjoyable because they're not wasting the time doing the things that the machines or robots can do for them. >>And I remember what I said at the beginning. Many of us want to be efficient, heck even lazy. So why would I spend my time doing something I can automate? Now you can read more about this research behind the art behind this at October set, get hub.com, which also includes a lot of other cool info about the open source ecosystem and how it's evolving. Speaking of the open source ecosystem we at get hub are so honored to be the home of more than 65 million developers who build software together for everywhere across the globe. Today, we're seeing software development taking shape as the world's largest team sport, where development teams collaborate, build and ship products. It's no longer a solo effort like it was for me. You don't have to take my word for it. Check out this globe. This globe shows real data. Every speck of light you see here represents a contribution to an open source project, somewhere on earth. >>These arts reach across continents, cultures, and other divides. It's distributed collaboration at its finest. 20 years ago, we had no concept of dev ops, SecOps and lots, or the new ops that are going to be happening. But today's development and ops teams are connected like ever before. This is only going to continue to evolve at a rapid pace, especially as we continue to empower the next hundred million developers, automation helps us focus on what's important and to greatly accelerate innovation. Just this past year, we saw some of the most groundbreaking technological advancements and achievements I'll say ever, including critical COVID-19 vaccine trials, as well as the first power flight on Mars. This past month, these breakthroughs were only possible because of the interconnected collaborative open source communities on get hub and the amazing tools and workflows that empower us all to create and innovate. Let's continue building, integrating, and automating. So we collectively can give developers the experience. 
They deserve all of the automation and beautiful eye UIs that we can muster so they can continue to build the things that truly do change the world. Thank you again for having me today, Dr. Khan, it has been a pleasure to be here with all you nerds. >>Hello. I'm Justin. Komack lovely to see you here. Talking to developers, their world is getting much more complex. Developers are being asked to do everything security ops on goal data analysis, all being put on the rockers. Software's eating the world. Of course, and this all make sense in that view, but they need help. One team. I told you it's shifted all our.net apps to run on Linux from windows, but their developers found the complexity of Docker files based on the Linux shell scripts really difficult has helped make these things easier for your teams. Your ones collaborate more in a virtual world, but you've asked us to make this simpler and more lightweight. You, the developers have asked for a paved road experience. You want things to just work with a simple options to be there, but it's not just the paved road. You also want to be able to go off-road and do interesting and different things. >>Use different components, experiments, innovate as well. We'll always offer you both those choices at different times. Different developers want different things. It may shift for ones the other paved road or off road. Sometimes you want reliability, dependability in the zone for day to day work, but sometimes you have to do something new, incorporate new things in your pipeline, build applications for new places. Then you knew those off-road abilities too. So you can really get under the hood and go and build something weird and wonderful and amazing. That gives you new options. Talk as an independent choice. We don't own the roads. We're not pushing you into any technology choices because we own them. We're really supporting and driving open standards, such as ISEI working opensource with the CNCF. We want to help you get your applications from your laptops, the clouds, and beyond, even into space. >>Let's talk about the key focus areas, that frame, what DACA is doing going forward. These are simplicity, sharing, flexibility, trusted content and care supply chain compared to building where the underlying kernel primitives like namespaces and Seagraves the original Docker CLI was just amazing Docker engine. It's a magical experience for everyone. It really brought those innovations and put them in a world where anyone would use that, but that's not enough. We need to continue to innovate. And it was trying to get more done faster all the time. And there's a lot more we can do. We're here to take complexity away from deeply complicated underlying things and give developers tools that are just amazing and magical. One of the area we haven't done enough and make things magical enough that we're really planning around now is that, you know, Docker images, uh, they're the key parts of your application, but you know, how do I do something with an image? How do I, where do I attach volumes with this image? What's the API. Whereas the SDK for this image, how do I find an example or docs in an API driven world? Every bit of software should have an API and an API description. And our vision is that every container should have this API description and the ability for you to understand how to use it. And it's all a seamless thing from, you know, from your code to the cloud local and remote, you can, you can use containers in this amazing and exciting way. 
>>One thing I really noticed in the last year is that companies that started off remote fast have constant collaboration. They have zoom calls, apron all day terminals, shattering that always working together. Other teams are really trying to learn how to do this style because they didn't start like that. We used to walk around to other people's desks or share services on the local office network. And it's very difficult to do that anymore. You want sharing to be really simple, lightweight, and informal. Let me try your container or just maybe let's collaborate on this together. Um, you know, fast collaboration on the analysts, fast iteration, fast working together, and he wants to share more. You want to share how to develop environments, not just an image. And we all work by seeing something someone else in our team is doing saying, how can I do that too? I can, I want to make that sharing really, really easy. Ben's going to talk about this more in the interest of one minute. >>We know how you're excited by apple. Silicon and gravis are not excited because there's a new architecture, but excited because it's faster, cooler, cheaper, better, and offers new possibilities. The M one support was the most asked for thing on our public roadmap, EFA, and we listened and share that we see really exciting possibilities, usership arm applications, all the way from desktop to production. We know that you all use different clouds and different bases have deployed to, um, you know, we work with AWS and Azure and Google and more, um, and we want to help you ship on prime as well. And we know that you use huge number of languages and the containers help build applications that use different languages for different parts of the application or for different applications, right? You can choose the best tool. You have JavaScript hat or everywhere go. And re-ask Python for data and ML, perhaps getting excited about WebAssembly after hearing about a cube con, you know, there's all sorts of things. >>So we need to make that as easier. We've been running the whole month of Python on the blog, and we're doing a month of JavaScript because we had one specific support about how do I best put this language into production of that language into production. That detail is important for you. GPS have been difficult to use. We've added GPS suppose in desktop for windows, but we know there's a lot more to do to make the, how multi architecture, multi hardware, multi accelerator world work better and also securely. Um, so there's a lot more work to do to support you in all these things you want to do. >>How do we start building a tenor has applications, but it turns out we're using existing images as components. I couldn't assist survey earlier this year, almost half of container image usage was public images rather than private images. And this is growing rapidly. Almost all software has open source components and maybe 85% of the average application is open source code. And what you're doing is taking whole container images as modules in your application. And this was always the model with Docker compose. And it's a model that you're already et cetera, writing you trust Docker, official images. We know that they might go to 25% of poles on Docker hub and Docker hub provides you the widest choice and the best support that trusted content. We're talking to people about how to make this more helpful. 
We know, for example, that winter 69 four is just showing us as support, but the image doesn't yet tell you that we're working with canonical to improve messaging from specific images about left lifecycle and support. >>We know that you need more images, regularly updated free of vulnerabilities, easy to use and discover, and Donnie and Marie neuro, going to talk about that more this last year, the solar winds attack has been in the, in the news. A lot, the software you're using and trusting could be compromised and might be all over your organization. We need to reduce the risk of using vital open-source components. We're seeing more software supply chain attacks being targeted as the supply chain, because it's often an easier place to attack and production software. We need to be able to use this external code safely. We need to, everyone needs to start from trusted sources like photography images. They need to scan for known vulnerabilities using Docker scan that we built in partnership with sneak and lost DockerCon last year, we need just keep updating base images and dependencies, and we'll, we're going to help you have the control and understanding about your images that you need to do this. >>And there's more, we're also working on the nursery V2 project in the CNCF to revamp container signings, or you can tell way or software comes from we're working on tooling to make updates easier, and to help you understand and manage all the principals carrier you're using security is a growing concern for all of us. It's really important. And we're going to help you work with security. We can't achieve all our dreams, whether that's space travel or amazing developer products ever see without deep partnerships with our community to cloud is RA and the cloud providers aware most of you ship your occasion production and simple routes that take your work and deploy it easily. Reliably and securely are really important. Just get into production simply and easily and securely. And we've done a bunch of work on that. And, um, but we know there's more to do. >>The CNCF on the open source cloud native community are an amazing ecosystem of creators and lovely people creating an amazing strong community and supporting a huge amount of innovation has its roots in the container ecosystem and his dreams beyond that much of the innovation is focused around operate experience so far, but developer experience is really a growing concern in that community as well. And we're really excited to work on that. We also uses appraiser tool. Then we know you do, and we know that you want it to be easier to use in your environment. We just shifted Docker hub to work on, um, Kubernetes fully. And, um, we're also using many of the other projects are Argo from atheists. We're spending a lot of time working with Microsoft, Amazon right now on getting natural UV to ready to ship in the next few. That's a really detailed piece of collaboration we've been working on for a long term. Long time is really important for our community as the scarcity of the container containers and, um, getting content for you, working together makes us stronger. Our community is made up of all of you have. Um, it's always amazing to be reminded of that as a huge open source community that we already proud to work with. It's an amazing amount of innovation that you're all creating and where perhaps it, what with you and share with you as well. Thank you very much. And thank you for being here. 
>>Really excited to talk to you today and share more about what Docker is doing to help make you faster, make your team faster and turn your application delivery into something that makes you a 10 X team. What we're hearing from you, the developers using Docker everyday fits across three common themes that we hear consistently over and over. We hear that your time is super important. It's critical, and you want to move faster. You want your tools to get out of your way, and instead to enable you to accelerate and focus on the things you want to be doing. And part of that is that finding great content, great application components that you can incorporate into your apps to move faster is really hard. It's hard to discover. It's hard to find high quality content that you can trust that, you know, passes your test and your configuration needs. >>And it's hard to create good content as well. And you're looking for more safety, more guardrails to help guide you along that way so that you can focus on creating value for your company. Secondly, you're telling us that it's a really far to collaborate effectively with your team and you want to do more, to work more effectively together to help your tools become more and more seamless to help you stay in sync, both with yourself across all of your development environments, as well as with your teammates so that you can more effectively collaborate together. Review each other's work, maintain things and keep them in sync. And finally, you want your applications to run consistently in every single environment, whether that's your local development environment, a cloud-based development environment, your CGI pipeline, or the cloud for production, and you want that micro service to provide that consistent experience everywhere you go so that you have similar tools, similar environments, and you don't need to worry about things getting in your way, but instead things make it easy for you to focus on what you wanna do and what Docker is doing to help solve all of these problems for you and your colleagues is creating a collaborative app dev platform. >>And this collaborative application development platform consists of multiple different pieces. I'm not going to walk through all of them today, but the overall view is that we're providing all the tooling you need from the development environment, to the container images, to the collaboration services, to the pipelines and integrations that enable you to focus on making your applications amazing and changing the world. If we start zooming on a one of those aspects, collaboration we hear from developers regularly is that they're challenged in synchronizing their own setups across environments. They want to be able to duplicate the setup of their teammates. Look, then they can easily get up and running with the same applications, the same tooling, the same version of the same libraries, the same frameworks. And they want to know if their applications are good before they're ready to share them in an official space. >>They want to collaborate on things before they're done, rather than feeling like they have to officially published something before they can effectively share it with others to work on it, to solve this. We're thrilled today to announce Docker, dev environments, Docker, dev environments, transform how your team collaborates. They make creating, sharing standardized development environments. 
As simple as a Docker poll, they make it easy to review your colleagues work without affecting your own work. And they increase the reproducibility of your own work and decreased production issues in doing so because you've got consistent environments all the way through. Now, I'm going to pass it off to our principal product manager, Ben Gotch to walk you through more detail on Docker dev environments. >>Hi, I'm Ben. I work as a principal program manager at DACA. One of the areas that doc has been looking at to see what's hard today for developers is sharing changes that you make from the inner loop where the inner loop is a better development, where you write code, test it, build it, run it, and ultimately get feedback on those changes before you merge them and try and actually ship them out to production. Most amount of us build this flow and get there still leaves a lot of challenges. People need to jump between branches to look at each other's work. Independence. Dependencies can be different when you're doing that and doing this in this new hybrid wall of work. Isn't any easier either the ability to just save someone, Hey, come and check this out. It's become much harder. People can't come and sit down at your desk or take your laptop away for 10 minutes to just grab and look at what you're doing. >>A lot of the reason that development is hard when you're remote, is that looking at changes and what's going on requires more than just code requires all the dependencies and everything you've got set up and that complete context of your development environment, to understand what you're doing and solving this in a remote first world is hard. We wanted to look at how we could make this better. Let's do that in a way that let you keep working the way you do today. Didn't want you to have to use a browser. We didn't want you to have to use a new idea. And we wanted to do this in a way that was application centric. We wanted to let you work with all the rest of the application already using C for all the services and all those dependencies you need as part of that. And with that, we're excited to talk more about docket developer environments, dev environments are new part of the Docker experience that makes it easier you to get started with your whole inner leap, working inside a container, then able to share and collaborate more than just the code. >>We want it to enable you to share your whole modern development environment, your whole setup from DACA, with your team on any operating system, we'll be launching a limited beta of dev environments in the coming month. And a GA dev environments will be ID agnostic and supporting composts. This means you'll be able to use an extend your existing composed files to create your own development environment in whatever idea, working in dev environments designed to be local. First, they work with Docker desktop and say your existing ID, and let you share that whole inner loop, that whole development context, all of your teammates in just one collect. This means if you want to get feedback on the working progress change or the PR it's as simple as opening another idea instance, and looking at what your team is working on because we're using compose. You can just extend your existing oppose file when you're already working with, to actually create this whole application and have it all working in the context of the rest of the services. 
>>So it's actually the whole environment you're working with module one service that doesn't really understand what it's doing alone. And with that, let's jump into a quick demo. So you can see here, two dev environments up and running. First one here is the same container dev environment. So if I want to go into that, let's see what's going on in the various code button here. If that one open, I can get straight into my application to start making changes inside that dev container. And I've got all my dependencies in here, so I can just run that straight in that second application I have here is one that's opened up in compose, and I can see that I've also got my backend, my front end and my database. So I've got all my services running here. So if I want, I can open one or more of these in a dev environment, meaning that that container has the context that dev environment has the context of the whole application. >>So I can get back into and connect to all the other services that I need to test this application properly, all of them, one unit. And then when I've made my changes and I'm ready to share, I can hit my share button type in the refund them on to share that too. And then give that image to someone to get going, pick that up and just start working with that code and all my dependencies, simple as putting an image, looking ahead, we're going to be expanding development environments, more of your dependencies for the whole developer worst space. We want to look at backing up and letting you share your volumes to make data science and database setups more repeatable and going. I'm still all of this under a single workspace for your team containing images, your dev environments, your volumes, and more we've really want to allow you to create a fully portable Linux development environment. >>So everyone you're working with on any operating system, as I said, our MVP we're coming next month. And that was for vs code using their dev container primitive and more support for other ideas. We'll follow to find out more about what's happening and what's coming up next in the future of this. And to actually get a bit of a deeper dive in the experience. Can we check out the talk I'm doing with Georgie and girl later on today? Thank you, Ben, amazing story about how Docker is helping to make developer teams more collaborative. Now I'd like to talk more about applications while the dev environment is like the workbench around what you're building. The application itself has all the different components, libraries, and frameworks, and other code that make up the application itself. And we hear developers saying all the time things like, how do they know if their images are good? >>How do they know if they're secure? How do they know if they're minimal? How do they make great images and great Docker files and how do they keep their images secure? And up-to-date on every one of those ties into how do I create more trust? How do I know that I'm building high quality applications to enable you to do this even more effectively than today? We are pleased to announce the DACA verified polisher program. This broadens trusted content by extending beyond Docker official images, to give you more and more trusted building blocks that you can incorporate into your applications. It gives you confidence that you're getting what you expect because Docker verifies every single one of these publishers to make sure they are who they say they are. This improves our secure supply chain story. 
And finally it simplifies your discovery of the best building blocks by making it easy for you to find things that you know, you can trust so that you can incorporate them into your applications and move on and on the right. You can see some examples of the publishers that are involved in Docker, official images and our Docker verified publisher program. Now I'm pleased to introduce you to marina. Kubicki our senior product manager who will walk you through more about what we're doing to create a better experience for you around trust. >>Thank you, Dani, >>Mario Andretti, who is a famous Italian sports car driver. One said that if everything feels under control, you're just not driving. You're not driving fast enough. Maya Andretti is not a software developer and a software developers. We know that no matter how fast we need to go in order to drive the innovation that we're working on, we can never allow our applications to spin out of control and a Docker. As we continue talking to our, to the developers, what we're realizing is that in order to reach that speed, the developers are the, the, the development community is looking for the building blocks and the tools that will, they will enable them to drive at the speed that they need to go and have the trust in those building blocks. And in those tools that they will be able to maintain control over their applications. So as we think about some of the things that we can do to, to address those concerns, uh, we're realizing that we can pursue them in a number of different venues, including creating reliable content, including creating partnerships that expands the options for the reliable content. >>Um, in order to, in a we're looking at creating integrations, no link security tools, talk about the reliable content. The first thing that comes to mind are the Docker official images, which is a program that we launched several years ago. And this is a set of curated, actively maintained, open source images that, uh, include, uh, operating systems and databases and programming languages. And it would become immensely popular for, for, for creating the base layers of, of the images of, of the different images, images, and applications. And would we realizing that, uh, many developers are, instead of creating something from scratch, basically start with one of the official images for their basis, and then build on top of that. And this program has become so popular that it now makes up a quarter of all of the, uh, Docker poles, which essentially ends up being several billion pulse every single month. >>As we look beyond what we can do for the open source. Uh, we're very ability on the open source, uh, spectrum. We are very excited to announce that we're launching the Docker verified publishers program, which is continuing providing the trust around the content, but now working with, uh, some of the industry leaders, uh, in multiple, in multiple verticals across the entire technology technical spec, it costs entire, uh, high tech in order to provide you with more options of the images that you can use for building your applications. And it still comes back to trust that when you are searching for content in Docker hub, and you see the verified publisher badge, you know, that this is, this is the content that, that is part of the, that comes from one of our partners. And you're not running the risk of pulling the malicious image from an employee master source. 
>>As we look beyond what we can do for providing reliable content, we're also looking at the tools and the infrastructure we can provide to create security around the content that you're creating. Last year, at last year's DockerCon, we announced a partnership with Snyk, and later in the year we launched our Docker Desktop and Docker Hub vulnerability scans, which give you the option of running scans at multiple points in your dev cycle. In addition to providing you with information on the vulnerabilities in your code, they also provide guidance on how to remediate those vulnerabilities. But as we look beyond vulnerability scans, we're also looking at other things we can do to further ensure the integrity and security around your images. With that, later this year we're looking to launch scoped personal access tokens, and instead of talking about them, I will simply show you what they look like.

>>So if you can see here, this is my page in Docker Hub, where I've created four tokens: read-write-delete, read-write, read-only, and public-repo read-only. Earlier today I logged in with my read-only token, and when I go to pull an image, it allows me to pull the image, no problem, success. Then when I do the next step and ask to push an image into the same repo, what you see is that it gives me an error message saying that access is denied, because additional authentication is required. So these are the things we're looking to add to our roadmap as we continue thinking about what we can do to provide additional content building blocks and tools to build trust, so that our Docker developers can ship code faster than Mario Andretti could ever imagine. Thank you.

>>Thank you, Marina. It's amazing what you can do to improve trusted content so that you can accelerate your development, move more quickly, move more collaboratively, and build upon the great work of others. Finally, we hear over and over that as developers are working on their applications, they're looking for environments that are consistent, that are the same as production, and that they want their applications to really run anywhere: any environment, any architecture, any cloud. One great example is the recent announcement of Apple Silicon. We heard from developers, in uproar, that they needed Docker to be available for that architecture before they could adopt it and be successful. And we listened. Based on that, we are pleased to share with you Docker Desktop on Apple Silicon. This enables you to run your apps consistently anywhere, whether that's developing on your team's latest dev hardware, deploying in ARM-based cloud environments and having a consistent architecture across your development and production, or using multi-architecture support, which enables your whole team to collaborate on the same application using private repositories on Docker Hub. I'm thrilled to introduce you to Hughie Cower, senior director for product management, who will walk you through more of what we're doing to create a great developer experience.

>>Senior director of product management at Docker.
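On the command line, the scoped-token demo Marina just walked through maps onto a flow roughly like the sketch below; the username, repository, and exact error text are illustrative rather than taken from the demo.

```bash
# Log in to Docker Hub with a read-only personal access token (placeholder values).
echo "$DOCKER_READONLY_TOKEN" | docker login -u exampleuser --password-stdin

# Pulling an image works: the read-only scope allows it.
docker pull exampleuser/myapp:latest

# Pushing to the same repo is rejected, because the token grants no write access.
docker push exampleuser/myapp:latest
# -> denied: requested access to the resource is denied
```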
And I'd like to jump straight into a demo. This is the Mac mini with the Apple Silicon processor, and I want to show you how you can now do an end-to-end Arm workflow from my M1 Mac mini to a Raspberry Pi. As you can see, we have VS Code and Docker Desktop installed on the Mac mini. I have a small example here, and I have a Raspberry Pi 3 with an LED strip, and I want to turn those LEDs into a moving rainbow. This Dockerfile here builds the application. We build the image with the docker buildx command to make the image compatible with all Raspberry Pis, on arm64. This build runs with the native power of the M1 chip. I also add the push option to easily share the image with my team so they can give it a try.

>>Docker now creates the local image with the application and uploads it to Docker Hub. After we've built and pushed the image, we can go to Docker Hub and see the new image there. You can also explore a variety of images that are compatible with Arm processors. Now let's go to the Raspberry Pi. I have Docker already installed, and it's running 64-bit Ubuntu. With the docker run command, I can run the application, and let's see what happens. You can see Docker is downloading the image automatically from Docker Hub, and when it's running, if it works right, there are some nice colors. And with that, we have an end-to-end workflow for Arm. We're continuing to invest in providing you a great developer experience that's easy to install and easy to get started with. As you saw in the demo, if you're interested in the new Mac mini, or in developing for Arm platforms in general, we've got you covered with the same experience you've come to expect from Docker, with over 95,000 Arm images on Hub, including many Docker Official Images.

>>We think you'll find what you're looking for. Thank you again to the community that helped us test the tech previews; we're so delighted to hear folks say that the new Docker Desktop for Apple Silicon just works for them. But that's not all we've been working on. As Dani mentioned, consistency of developer experience across environments is so important. We're introducing Compose V2, which makes Compose a first-class citizen in the Docker CLI; you no longer need to install a separate Compose binary in order to use Compose. Deploying to production is simpler than ever with the new Compose integration that enables you to deploy directly to Amazon ECS or Azure ACI with the same methods you use to run your application locally. And if you're interested in running slightly different services when you're debugging versus testing, or just for general development, you can manage that all in one place with the new Compose service profiles. To hear more about what's new in Docker Desktop, please join me in the 3:15 breakout session this afternoon.

>>And now I'd love to tell you a bit more about buildx and convince you to try it if you haven't already. It's our next-gen build command, and it's no longer experimental. As shown in the demo, with buildx you'll be able to do multi-architecture builds and share those builds with your team and the community on Docker Hub. With buildx, you can speed up your build processes with remote caches, or build all the targets in your Compose file in parallel with buildx bake. And there's so much more. If you're using Docker Desktop or Docker CE, you can use buildx. Check out Tonis's talk this afternoon at 3:45 to learn more about buildx.
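The end-to-end Arm demo above corresponds roughly to the following buildx commands; the builder name, image name, and platform list are assumptions for illustration, not the exact ones used on stage.

```bash
# Create and select a builder that can produce multi-platform images.
docker buildx create --name multiarch --use

# Build the Raspberry Pi-compatible image natively on the M1 and push it to Docker Hub.
docker buildx build \
  --platform linux/arm64,linux/arm/v7 \
  -t exampleuser/led-rainbow:latest \
  --push .

# On the Raspberry Pi, Docker pulls the matching architecture automatically.
docker run --rm exampleuser/led-rainbow:latest
```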
And with that, I hope everyone has a great DockerCon, and back over to you, Donnie.

>>Thank you, Hughie. It's amazing to hear about what we're doing to create a better developer experience and make sure that Docker works everywhere you need to work. Finally, I'd like to wrap up by showing you everything that we've announced today and everything that we've done recently to make your lives better and give you more and more for the single price of your Docker subscription. We've announced the Docker Verified Publisher program. We've announced scoped personal access tokens to make it easier for you to have a secure CI pipeline. We've announced Docker dev environments to improve your collaboration with your team. We shared with you Docker Desktop on Apple Silicon, to make sure that Docker runs everywhere you need it to run. And we've announced Docker Compose version two, finally making it a first-class citizen amongst all the other great Docker tools. And we've done so much more recently as well, from audit logs to advanced image management to Compose service profiles, to improve where and how you can run Docker more easily.

>>Finally, as we look forward, where we're headed in the upcoming year is continuing to invest in these themes of helping you build, share, and run modern apps more effectively. We're going to be doing more to help you create a secure supply chain, which only grows more and more important as time goes on. We're going to be optimizing your update experience to make sure that you can easily understand the current state of your application and all its components, and keep them all current without worrying about breaking everything as you do so. We're going to make it easier for you to synchronize your work using cloud sync features. We're going to improve collaboration through dev environments and beyond. And we're going to make it easy for you to run your microservices in your environments without worrying about things like architecture or differences between those environments. Thank you so much. I'm thrilled about what we're able to do to help make your lives better. And now you're going to be hearing from one of our customers about what they're doing to launch their business with Docker.

>>I'm Matt Falk, I'm the head of engineering at Orbital Insight, and today I want to talk to you a little bit about data from space. So who am I? Like many of you, I'm a software developer; I've been a software developer at about seven companies so far, and now I'm a head of engineering. So I spend most of my time doing meetings, but occasionally I'll still spend time doing design discussions and code reviews, and in my free time I still like to dabble in things like Project Euler. So who is Orbital Insight, and what do we do? Orbital Insight is a large data supplier and analytics provider: we take geospatial data from anywhere on the planet, from any overhead sensor, and translate that into insights for the end customer. Specifically, we have a suite of high-performance artificial intelligence and machine learning analytics that run on this geospatial data.

>>And we build them specifically to determine natural and human activity anywhere on the planet. What that really means is we take any type of data associated with a latitude and longitude, and we identify patterns so that we can detect anomalies. Everything that we do is about identifying those patterns to detect anomalies. So more specifically, what type of problems do we solve?
So supply chain intelligence: this is one of the use cases that we like to talk about a lot. It's one of the main verticals we go after right now, and as Scott mentioned earlier, it had a huge impact last year when COVID hit. Specifically, supply chain intelligence is all about identifying movement patterns to and from operating facilities to identify changes in those supply chains. How do we do this? For us, we can do things like track the movement of trucks.

>>So, identifying trucks moving from one location to another in aggregate. We can do the same thing with foot traffic: we can look at aggregate groups of people moving from one location to another and analyze their patterns of life. We can look at two different locations to determine how people are moving between them, or going back and forth. All of this is extremely valuable for detecting how a supply chain operates and then identifying the changes to that supply chain. As I said, last year with COVID everything changed; supply chains in particular changed incredibly, and it was hugely important for customers to know where their goods or products were coming from and where they were going, where there were disruptions in their supply chain, and how that was affecting their overall supply and demand. So using our platform, our suite of tools, you can start to gain a much better picture of where your suppliers or your distributors are coming from or going to.

>>So what does our team look like? My team is currently about 50 engineers, spread across four different teams, structured like this. The first team we have is infrastructure engineering, and this team largely deals with deploying our Docker images using Kubernetes. So this team is all about taking Docker images built by other teams, sometimes building the images themselves, and putting them into our production system. Our platform engineering team produces these microservices: they produce microservice Docker images, they develop and test with them locally, their entire environments are dockerized, and they hand those images over to infrastructure engineering to be deployed. Similarly, our product engineering team does the same thing: they develop and test with Docker locally, and they also produce a suite of Docker images that the infrastructure team can then deploy. And lastly, we have our R&D team, and this team specifically produces machine learning algorithms using NVIDIA Docker. Collectively, we've actually built 381 Docker repositories and had 14 million Docker pulls over the lifetime of the company; just a few stats about us.

>>But what I'm really getting at here is that you can see Docker images becoming almost a form of communication between these teams. One of the paradigms in software engineering that you're probably familiar with is encapsulation. It's really helpful for a lot of software engineering problems to break the problem down, isolate the different pieces of it, and start building interfaces between the code. This allows you to scale different pieces of the platform or different pieces of your code in different ways: you can scale up certain pieces and keep others at a smaller level so that you can meet customer demands. And for us, one of the things we can largely do now is use Docker images as that interface.
So instead of having an entire platform where all teams are talking to each other and everything is kind of mishmashed into a monolithic application, we can now say this team is only able to talk to that team by passing over a particular Docker image that defines the interface of what needs to be built before it passes to the next team. That really allows us to scale our development and be much more efficient.

>>Also, I'd like to say we are hiring. We have about 30 open roles in our engineering team that we're looking to fill by the end of this year, so if any of this sounds really interesting to you, please reach out after the presentation.

>>So what does our platform actually do? Our platform allows you to answer any geospatial question, and we do this with three different inputs. First off, where do you want to look? We define this as what we call an AOI, or area of interest. You can think of it as a polygon drawn on the map. We have a curated data set of almost 4 million AOIs, which you can search and use for your analysis, but you're also free to build your own. The second question is what you want to look for. We do this with the more interesting part of our platform, our machine learning and AI capabilities. We have a suite of algorithms that automatically allow you to identify trucks, buildings, hundreds of different types of aircraft, different types of land use, how many people are moving from one location to another, and which locations people in a particular area are moving to or coming from. All of these different analytics are available at the click of a button once you determine what you want to look for.

>>Lastly, you determine when you want to find what you're looking for: do you want to look over the next three hours, over the last week, or every month for the past two years? Whatever the time cadence is, you decide it, you hit go, and out pops a time series. That time series tells you, for the place you wanted to look and the thing you wanted to look for, how many, or what percentage, of that thing appears in that area over time. Again, we do all of this to work towards patterns. We use all this data to produce a time series; from there we can look at it, determine the patterns, and then specifically identify the anomalies. As I mentioned with supply chain, this is extremely valuable for identifying where things change. So we can answer these questions looking at a particular operating facility: what is happening with the level of activity at that facility, where people are coming from and going to after visiting it, and when and where that changes. Here you can see a picture of our platform; it's actually showing all the devices in Manhattan over a period of time, in more of a heat map view, so you can see the hotspots in the area.

>>So really, and this is the heart of the talk, what happened in 2020? For me, like many of you, 2020 was a difficult year. COVID hit, and that changed a lot of what we were doing, not just from an engineering perspective but from an entire company perspective. For us, the motivation really became to make sure that we were lowering our costs and increasing innovation simultaneously. Now, those two things often compete with each other.
A lot of times, when you want to increase innovation, that's going to increase your costs; the challenge last year was how to do both simultaneously. So here are a few stats from our team. In Q1 of last year, we were spending almost $600,000 per month on compute costs. Prior to COVID happening, that wasn't a huge concern for us; it was a lot of money, but it wasn't as critical as it became last year, when we really needed to be much more efficient.

>>The second one is flexibility. We were deployed in a single cloud environment; while we were built to be cloud agnostic, and that was great, we wanted to be more flexible. We wanted to be in more cloud environments so that we could reach more customers, and eventually also get onto classified networks, extending our customer base as well. From a custom analytics perspective, this is where we get into our traction: last year, over the entire year, we computed 54,000 custom analytics for different users. We wanted to make sure that this number kept steadily increasing despite us trying to lower our costs; we didn't want the lower cost to come at the sacrifice of our user base. Lastly, a particular percentage here that I'll say definitely needs to be improved: 75% of our projects never fail. This is where we start to get into the stability of our platform.

>>Now, I'm not saying that 25% of our projects fail. The way we measure this is: if you have a particular project or computation that runs every day, and any one of those runs fails, we count that as a failure, because from an end-user perspective, that's an issue. So this is something we knew we needed to improve on to make our platform more stable, and it's something we really focused on last year. So where are we now? Coming out of the COVID valley, we are starting to soar again. Back in April of last year, we actually paused all development for the entire engineering team for about four weeks and had everyone focused on reducing our compute costs in the cloud. We got it down to 200K over the period of a few months.

>>And for the next 12 months, we hit that number every month. This is huge for us; it's extremely important, like I said, in the COVID time period where cost and operating efficiency were everything. For us to do that was a huge accomplishment last year and something we'll keep doing going forward. One thing I would actually like to highlight here, too, is what allowed us to do that. First off, being in the cloud and being able to migrate things like that was one part, and we were able to use the different cloud services in a more efficient way. We had very detailed tracking of how we were spending, we increased our data retention policies, and we optimized our processing. However, one additional piece was switching to new technologies; in particular, we migrated to GitLab CI/CD.

>>And this is something that, because we use Docker, was extremely, extremely easy. We didn't have to go build new code, containers, or repositories, or change our code in order to do this. We were simply able to migrate the containers over and start using the new CI, so much so, in fact, that we were able to do that migration with three engineers in just two weeks. From a cloud environment and flexibility standpoint, we're now operating in two different clouds: over the last nine months, we were able to bring up and operate in a second cloud environment.
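As a rough illustration of why that CI migration was so mechanical, a containerized build pipeline is mostly the same job definition wherever it runs. The following .gitlab-ci.yml fragment is hypothetical, not Orbital Insight's actual pipeline; the job name and image tags are placeholders, while the CI_* variables are GitLab's standard predefined ones.

```yaml
# Hypothetical GitLab CI job that builds and pushes an existing container image.
build-image:
  image: docker:20.10
  services:
    - docker:20.10-dind   # Docker-in-Docker service so the job can run docker commands
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Because the Dockerfiles and images themselves don't change, swapping CI systems largely means re-pointing jobs like this one at the existing containers, which matches the two-week, three-engineer migration described above.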
That second cloud environment, again, is something that Docker helped with incredibly. We didn't have to go and build all new interfaces to all the different services or tools in the next cloud provider. All we had to do was build a base cloud infrastructure that abstracts away all the different details of the cloud provider.

>>And then our Docker containers just worked. We could move them to another environment, they were up and running, and our platform was ready to go. From a traction perspective, we're about a third of the way through the year, and at this point we've already exceeded the amount of customer analytics we produced last year. This is thanks to a ton more algorithms, that whole suite of new analytics that we've been able to build over the past 12 months, and we'll continue to build going forward. So this is a really, really great outcome for us, because we were able to show that our costs are staying down while our analytics and our customer traction keep growing. And honestly, from a stability perspective, we improved from 75% to 86%; not quite yet 99, or three nines or four nines, but we are getting there. This is actually thanks to really containerizing and modularizing different pieces of our platform so that we could scale up in different areas, and that allowed us to increase our stability. This piece of the code works over here and talks through an interface to the rest of the system; we can scale this piece up separately from the rest of the system, and that allows us to much more easily identify issues in the system, fix those, and then correct the system overall. So basically this is a summary of where we were last year, where we are now, and how much more successful we are now because of the issues that we went through last year, largely brought on by COVID.

>>This is just a screenshot of our solution actually working on supply chain. In particular, it is showing traceability of a distribution warehouse in Salt Lake City. It's right in the center of the screen here; you can see the nice kind of orange-red center. That's a distribution warehouse, and all the lines and dots outside of that are showing where people and trucks are moving from that location. So this is really helpful for supply chain companies, because they can start to identify where their suppliers are coming from or where their distributors are going to. So with that, I want to say thanks again for following along, and enjoy the rest of DockerCon.

Published Date : May 27 2021


Isabelle Guis, Tim Carben, & Manoj Nair | CUBEconversation


 

>> Commvault was an idea that incubated as a project inside of Bell Labs, one of the most prestigious research and development organizations in the world, back in the day. It became an official company in 1996, and Commvault just celebrated its 25th anniversary. As such, Commvault has had to reinvent itself many times over the past two and a half decades, from riding the waves of the very early PC networking era to supporting a rich set of solutions for the evolving enterprise. This includes things like cloud computing, ransomware, disaster recovery, security, compliance, and pretty much all things data protection and data management. And with me to talk about the company and its vision for the future, along with a voice of the customer, are three great guests: Isabelle Guis, the Chief Marketing Officer of Commvault; Manoj Nair, the GM of Metallic; and Tim Carben, a principal systems engineer with Mitchell International. Folks, welcome to the Commvault power panel. Come inside theCUBE. It's awesome to have you. >> Great to be here, Dave. >> All right. First of all, I've got to congratulate you on celebrating 25 years. That's a long time; not a lot of tech companies make it that far and are still successful and relevant. So Isabelle, maybe you could start off. What do you think has been the driving factor for your ability to lead through the subsequent technological waves that I alluded to upfront? >> Well, 25 years is commendable, but we are not counting success in number of years, we're really counting success in how many customers we've helped over those years. And I will say the driving motor for us is that we have been innovating with our customers. You know, we were there every step of the way when they migrated to hybrid cloud. And now, as they go to multi-cloud in a post-COVID world, where they have to deal with, you know, a distributed workforce and different types of workloads and devices, we are there too; we cover those workloads as well. So the innovation keeps coming in, thanks to us listening to our customers and addressing needs that have changed over the last 25 years, and probably will for the next 25 as well. You know, we want to be here for customers who think that data is an asset, not a liability, and also make sure that we offer them a broad range of use cases while keeping things simple, because the world is getting too complex for them. So let's take the complexity on us. >> Thank you for that. So Manoj, you've riffed on theCUBE before about, you know, putting on the binoculars and looking at the future. So let's talk about that. Where do you see the future for this industry? What are some of the key driving factors that matter? >> Dave, it's great to be back on theCUBE. You know, we see our industry as no different than lots of other industries: the SaaS model is rapidly being adopted. And the reason is, you know, customers are looking for simplicity, simplicity not just in leveraging the great technology that Commvault has built, but in the business model and the experience. So, you know, that's one of the fastest growing trends that started in consumer apps and other B2B apps, and now we're seeing it in core infrastructure like data management and data protection. They're also trying to leverage their data better, make sure it's not fragmented. So, how do you deliver more intelligent services?
You know, securing the data, getting insights from the data, transforming the data, and that combination, our ability to do that in a multi-cloud world like Isabelle said, now with increasing edge workloads. Sometimes, you know, our customers say their data centers are the new edge too. So you kind of have data everywhere and workloads everywhere, yet the desire to deliver that with a holistic experience. We call it the power of 'and': the ability to manage your data and leverage your data, with simplicity and without compromise. And that's really what we're seeing as part of the future. >> Okay. Manoj, I want to come back to you and double-click on that, but I want to introduce Tim to the conversation here. You bring in the voice of the customer, as they say. Tim, my understanding is Mitchell has been a Commvault customer since the mid-two-thousands. So tell us, why Commvault? What has kept you with the company for more than 15 years? >> Yeah! It was what, 2006 when we started, and really, when it all boils down to it, it's just as Isabelle said: innovation. At Mitchell we're always looking to stay ahead of the trend, and, you know, just like was mentioned earlier, data is the most important part here. Commvault provides us peace of mind to protect and manage our data, and they do data protection for all of our environments right now. They've been a partner to help enable our digital transformation, including SaaS and cloud adoption. When we start talking about the solutions we have, we of course started in 2006; this was version six, if I remember right, which predates me at the company. We upgraded to seven, eight, nine, brought in 10, brought in 11, brought in HyperScale, and then moved on to bring in Metallic. And the reason for this, I guess I should say, is that Commvault provides a reliable backup but, most importantly, recovery, rapid recovery. That's what gives me confidence; that's what helps me sleep better at night. So when I started looking at SaaS as a differentiator to protect our O365 environments, Metallic was a natural choice, and the one thing I wanted to add to that is it came out cheaper than us building it ourselves, when you take into account resources as well as compute and storage. So again, just a natural choice. >> Yeah. As the saying goes, backup is one thing, recovery is everything. Isabelle, you know, we've seen the SaaS-ification of the enterprise, particularly, you know, from the app side. You came from Salesforce, the company that is the poster child for SaaS. But my question is, what's catalyzing this shift, and why do you think data protection is ready to make the move? >> Well, there are so many good things about SaaS. You know, you remember when people started moving to the cloud and transforming their CapEx into OpEx; well, SaaS brings yet another level of benefits. IT, we know, always has to do more with less, and SaaS allows you, once you set it up, to get all the software upgrades automatically, so you can focus on the smart work. You can better manage your cash flow because you pay as you grow, and you also have a faster time to value. So all of this has helped the fast adoption, and I will tell you, today I don't think there is a single customer who doesn't have at least one SaaS application, because they have seen the value of it. Now, when it comes to backup and recovery, everybody's at a different stage: you still have on-premises, you have cloud, you have SaaS workloads and devices.
And so what we think is most important is to offer a broad choice of delivery model: being able to support them whether they want a software subscription, an integrated appliance, or SaaS as a service model. Some of our partners are also delivering this in a more custom and managed way. So, offering choice, because everybody is at a different stage of this journey when it comes to data management and protection. And actually, you know, I think Tim is the example of taking full advantage of this broad choice. >> Well, you mentioned, Tim, that you leaned into Metallic. We have seen SaaS everywhere. We used to have an email server, right? I mean, (laughing) on prem, that just doesn't happen anymore. But how is Mitchell International thinking about SaaS? Maybe you could share, from your customer perch, what you're seeing. >> Well, what's interesting about this is Mitchell has been providing SaaS for a long time. We are a technology company, and we provide SaaS solutions to our customers. And that makes it so important for us to be able to embrace it, because we know the value behind it; we're providing that to our customers. And when I look at what Commvault is doing, I know that Commvault is doing the same thing: they're providing the SaaS model as a value to their customers. And it's so important to go with this, because we keep our environments cutting edge. As GDPR says, you need to have a cutting-edge environment, and if you cannot check that box, you do not move forward. Commvault has that, and this is one less thing that I have to worry about when choosing Metallic to do my backup of O365. >> So thank you for that, Tim. So Manoj, thinking about what you just heard from Isabelle and Tim, you know, kind of fitting into a company's cloud, or more importantly hybrid cloud, strategy, you were talking before about this. In other words, it's not an either-or, it's not a zero-sum game. It's simpatico, if you will. I wonder if you could elaborate. >> Yeah, the power of 'and', Dave. I'm very proud of that. You know, when I think of the power of 'and', I actually think of folks like Tim, our customers, first, and really that need for choice. So for example, you know, customers are on various different paths to the cloud. We kind of homogenize it and say they're on a cloud journey or they're on a digital transformation journey, but the journey looks different for each of them. And so part of that, as Isabelle was saying, is really the ability to meet them where they are in that journey. For example, some customers say, "Hey, you know what, I'm going to be 100% multi-cloud, or even single cloud, and that includes SaaS applications and my infrastructure running as a service." There's a natural fit there: great, for all your data protection you're not going to be running software or appliances; you've got data protection and data management as a service that Metallic is able to offer across the whole estate. And that's, you know, probably a small set of customers, but rapidly growing. Then you see a lot more customers who are saying, I'm going to do away, as you were talking about, with the email server, I'm going to move to Office 365 and leverage the power of Teams. And there's a shared responsibility model there, which is different than an on-prem data protection use case.
And so they're able to just add Metallic on to the existing Commvault environment, whether that's Commvault software or HyperScale, and connect the two, so it's a single integrated experience. And then you go to the other end of the spectrum and say, great, some customers are all in on SaaS-delivered data protection. As you know, and as you hear from a lot of your guests and we hear from our customers, there's still a lot of data sitting out there, you know, 90-plus percent of workloads in data centers, and increasing edge data workloads. And if you were to back up one of those workloads and say that the only copy can be in the cloud, that could mean something like a 10-day recovery SLA. You know, we have some competitors who say that's what they have. Our flexibility, our ability to bring in the HyperScale deployment and just dock it into Metallic, gives you a local copy with an instant recovery SLA, plus a remote backup copy in the cloud for ransomware or your worst-case scenario. That's the kind of flexibility. So all of those are scenarios we're really seeing with our customers, and that's really the power of 'and'. It's a very unique part of our portfolio. Companies can have portfolios of products, but to have a single integrated offering with that flexibility, where depending on the use case you can start in one place and grow into a different point, that's really the unique part of the power of 'and'. >> Yeah, yeah. A 10-day RTO just doesn't cut it. But Tim, maybe you could weigh in here. What was the catalyst for you adopting Metallic, and maybe you could share what the business impact was? >> Well, the catalyst and impact are obviously two different things. The catalyst, when we look at it: there was a lot of, what are we going to do with this? We have an environment, we need to back it up, and how are we going to approach this? So we looked at it from a few different standpoints, and of course, when it boils down to it, one of the major reasons was financial. But when we started looking at everything else that we have available to us and the flexibility that Commvault has in rolling out new solutions, this really was a no-brainer at this point. We are able to essentially back up new features and new products as soon as they're available. Within our Metallic environment we are running Activate, we are running the self-service for the end users, where they can actually recover their own files, and we are adding Teams into it to be able to perform these backups and recoveries for Teams. And I want to step aside really quick and mention something about this, because I've been with Metallic for a long time and I've been waiting for this. We've been waiting for the ability to do these backups, and anyone I know, Manoj knows, that I've been waiting for it. And, you know, Commvault came back to me a while back and they said, we just have to wait for the API, we have to wait for Microsoft to release it. Well, I follow the news. I saw Microsoft release the API, and I think it may have been two days later that Commvault reached out to me and said, hey, we've got it available, are you ready to do this? That sort of turnaround, that sort of flexibility, being on top of new applications, and the same with Salesforce, that is, you know, not necessarily the reason why I adopted Metallic, but one of those things that puts a smile on my face because I adopted Metallic. >> Well, that's an interesting story.
I mean, you get the SDKs, and if you're a leader you get them early, you know; you can put the resources on it and you're ready when the product comes to GA. Manoj, I wonder if we could talk about the notion of backing up SaaS. Part of the announcements today within Metallic included backup offerings for Dynamics 365. But my question is, why support Dynamics specifically, and SaaS apps generally? I mean, customers might say, doesn't my SaaS provider protect my data? Why do I need a third party? And the second part of that question is, why Commvault? >> Dave, a great question as always. I'll start with the second part of the question. It's really three words: the shared responsibility model. You know, a lot of times our customers, as they go into the cloud model, really start understanding that they're getting a lot of advantages and there are certain things they don't have to do. But the shared responsibility model is something every cloud and SaaS provider spells out, and certainly the application data is owned by the customer; taking care of that data is not something the SaaS provider does for you. That requires specialized skills, and that's a partnership. We've done this now very successfully with Microsoft and O365, we've added support for Salesforce, and we see rapid customer adoption because of that shared responsibility model. If you have some kind of an admin issue, as we have seen in the news, where somebody changed their Teams settings and then lost all their chat, that data needs to be discoverable, and you, the customer, are responsible for making sure it is. Or take ransomware attacks: again, covering that SaaS data is your responsibility, because the attack could be coming in through your instance, not from the SaaS provider. So those are the reasons. Dynamics is, you know, one of the fastest growing SaaS applications from a business applications perspective out there, and as we looked at our roadmap and at the right complement, the right adjacency, we saw this part of Microsoft's business application suite growing, with millions of users out there, and rapidly. It's also integrated with the rest of the Microsoft family. So we're now proud to say that we support all three Microsoft clouds: Azure, Microsoft 365, and Dynamics. Those applications are increasingly integrated, so we're seeing commonality in the customer base, and that's business-critical data. Customers are looking to manage that data and have solutions they can be sure they can leverage; it's not just about protecting data from worst-case scenarios. In the case of some of the apps like Dynamics, we offer support for things like setting up the staging environment, so it's improving productivity for the application admins, and that's really the value we're able to bring to the table. >> Yeah. You know, that shared responsibility model, I'm glad you brought that up, because I think it's oftentimes misunderstood, but when you talk to CISOs, they understand it well. They'll tell you, the shared responsibility is my responsibility. You know, maybe the cloud provider will secure the object storage bucket or the physical space, but it's on me. So that's really important. So thank you for that. Isabelle, last question: the roadmap. How do you see Commvault's Metallic SaaS portfolio evolving? What can you tell us?
>> Oh, well, it has a big strategic impact on Commvault for sure. First, because of all of our existing customers; as you mentioned earlier, over 25 years that's a lot of customers, and most of them will have some workload running as SaaS. And so the ability to protect those in one take, without adding more complexity, without adding another vendor, and, as Tim said, in a way that brings a smile to his face, is really important for us. The second is that a lot of customers come to Commvault through Metallic. This is the first time they enter the Commvault community and the Commvault family, and as they start protecting their SaaS applications, they realize that they could leverage the same application to protect their on-premises data as well. So, back to the power of 'and': without writing off their past investments, they can go to the cloud at the pace they want. From that perspective, there is a big impact on our customer community from the on-ramp that Metallic brings. I don't know, Manoj is way too humble, but, you know, he has doubled his customers every quarter, and we have added 24 countries to the product. So we see rapid adoption. And so, back to your question, we see the impact of Metallic growing, and growing fast, because of the market demand and because of the rapid innovation: we can take the Commvault technology and put it in the SaaS model, and our customers really like it. So I'm very excited. I think it's going to be a great innovation and a great positive impact for customers, and our new customers will welcome it. By the way, I think, Manoj, correct me, but half of the Metallic customers are existing Commvault customers, and the other half are new to our family. So we're very bullish about this. And it's just the beginning; as you know, we're 25 years old, or sorry, 25 years young, and looking forward to the next 25. >> Well, I can confirm it. You know, we have a data partner, a survey partner, ETR, Enterprise Technology Research, and I was looking at the Commvault data; it shows that within the cloud segment, when you cut the data by cloud, your spending momentum is actually accelerating. And I think it's a function of, you know, some of the acquisitions you've made, some of the moves you've made in integration. So congratulations on 25 years, and, you know, you're riding the correct wave. Isabelle, Manoj, Tim, thanks so much for coming on theCUBE. It was great to have you. >> Thank you. >> Thank you, Dave. >> I really appreciate it. >> And thank you everybody for watching. This is Dave Vellante for theCUBE. We'll see you next time.

Published Date : May 19 2021
