
James Bryan, Dell Technologies & Heather Rahill, Dell Technologies | MWC Barcelona 2023


 

>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (bright music) >> Hey everyone, welcome back. Good evening from Barcelona, Spain. It's theCUBE, the leader in live tech coverage, with Lisa Martin and Dave Nicholson. Day two of our coverage of MWC 23. Dave, we've been talking about sexy stuff all day. It's about to get... we're bringing sexy back. >> It's about to get hot. >> It's about to get hot. We have two guests with us, two senior consultants from the product planning, networking and emerging server solutions group at Dell: Heather Rahill and James Bryan. Welcome, guys. >> Thanks for having us. >> Thanks for having us. >> Really appreciate it. >> Lisa: Dude, you're bringing sexy back. >> I know. We are. We wanted to bring it, yes. >> This is the XR8000. >> We've been talking about this all day. It's here... >> Yes. Yes. Talk to us about why this is so innovative. >> So we wanted to bring this; it's getting a lot of attention here on site. Matter of fact, we even have a lot of our competition taking pictures of it. And why is it so innovative? One of the things we've done here is taken a lot of insights and feedback from our customers who are looking at 5G deployments and at how they bring commercial off-the-shelf technology to a very proprietary industry. So what we've done is build a very flexible and scalable form factor in the XR8000. This is a product that we've purpose-built for the telecommunications space. Specifically, it can be deployed serving a virtual DU or CU at a cell site for distributed RAN, or it can be put in a local data center, outside a main data center, to support centralized RAN. We'll get into it, but where it gets really exciting is that it's sled-based in its design. Because of that, it enables us to provide functionality for telecommunications, whether that's network or enterprise edge, and it's designed to be configured to whatever that workload is and cost-optimized for that workload. >> Ah, you're killing us! Let's see it. Show it to us. >> Actually, this is where I have to hand it off to my colleague Heather. But what I really want to show you here is the flexibility that we have, and the scalability. Right here, what I'm going to show you first is a 1U sled. I'll set that out here, and I'll let Heather tell us all about it. >> Yeah. So, the XR8000. Let's talk about flexibility first. The chassis is a 2U chassis with a hot-swap shared power supply on the right. Within it there are two form factors for the sleds. What James brought out here is the 1U form factor. Each sled features one node, or one CPU, per sled. We're calling the 1U the highest-density sled, right? Because you can have up to four one-node 1U sleds in the chassis. The other form factor is the 2U sled, on the right here, and that's really building on top of the 1U sled; it adds two PCIe slots on top. So this is really our general-purpose sled. You can have up to two of these sleds within the chassis. What's really cool about the flexibility is that you can plug and play with these: you could have all 1Us, all 2Us, or a mix and match of each of those. >> Talk about the catalyst to build this for telco and some of the emerging trends that you guys have seen that said this needs to be purpose-built for the telco.
There's so much challenge and complexity there; they need this. >> Want me to take this? That's a great question, by the way. It turns out that the market's growing. It's nascent right now. Different telecommunication providers have different needs, and their workloads are different. So they're looking for a form factor like this that, when we say flexible, they can configure for their needs. They don't all configure it the same way. They're looking for something they can configure to their needs, but they also don't want to pay for things they don't need. And that's what led to the creation of this device the way we've created it. >> How is it specific to edge use cases, though? We think of the edge: it's emerging, it's burgeoning. What makes this so specific to edge use cases? >> Yeah, let's talk about some of the ruggedized features of the product. First of all, it is short depth, only 430 millimeters. And it's designed for extreme temperatures, really for any environment. The normal operating temperatures are negative five to 55, but we've also developed an enhanced heat sink to get us even beyond that. >> Dave: That's Celsius? >> Celsius. Thank you. >> Lisa: Right. >> So this will get us all the way down to negative 20 for boot and operating all the way up to 65 C. This is one of the most extreme-temperature edge offerings we've seen on the market so far. >> And this is all outside the data center, so not your typical data center server. Not only are we getting those capabilities, but it's half the size of a typical data center server. >> So these can go into a place where there's a rack, maybe, but it definitely doesn't have to be a raised floor... >> Could be a cell site cabinet. >> Yeah. Okay. >> Heather: Yeah. And we also have AC and DC power options that can be changed over time as well. >> So what can you pack into that one 1U sled in terms of CPU cores and memory, just as an example? >> Yeah, great. Each of the sleds will support fourth-generation Intel Xeon, Sapphire Rapids, up to 32 cores. They'll also be supporting the new vRAN Boost SKUs, and the benefit of those is an integrated FEC accelerator within the CPU. Traditionally, to get FEC acceleration, you would need a PCIe card that would take up one of the slots here. Now, with it integrated, you're freeing up a PCIe slot, and there's also a power savings involved with that as well. >> So talk about the involvement of the telco customer here in the design. I know Dell is very tight with its customers. I imagine there was a lot of communication and collaboration with customers to deliver this. >> Interesting question. It turns out that early on we had some initial insight, but it was actually through deep engagement with our customers that we redesigned the form factor to what you see here today. We spent a significant amount of time with various telecommunication customers from around the world, and they had a very strong influence on this form factor, even to the point, like Lisa mentioned, that we ended up redesigning it. >> Do you have a sense for how many of these, or in what kinds of configurations, you would deploy in, like, the typical BBU? So if we're thinking about the radio access network, literally the tower, transmitter, receiver... somewhere down there in a cabinet, do you have one of these, or multiple units? I know the answer is "it depends". >> You are right.
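A quick aside to make Heather's mix-and-match description concrete: the sketch below checks whether a proposed combination of sleds fits a single chassis. The bay counts come straight from the conversation (four 1U bays per chassis, a 2U sled taking two of them); the helper itself is illustrative, not a Dell configuration tool.

```python
# Illustrative only: checks whether a mix of sleds fits one XR8000 chassis,
# using the numbers from the conversation. Not a Dell configurator.
from typing import Dict

TOTAL_BAYS = 4                      # up to four 1U single-node sleds per chassis
SLED_HEIGHT = {"1U": 1, "2U": 2}    # bays consumed per sled type

def fits_in_chassis(sleds: Dict[str, int]) -> bool:
    """Return True if the requested sled mix fits a single chassis."""
    used = sum(SLED_HEIGHT[kind] * count for kind, count in sleds.items())
    return used <= TOTAL_BAYS

# A few examples of the mix-and-match Heather describes:
print(fits_in_chassis({"1U": 4}))            # True  - four single-node sleds
print(fits_in_chassis({"2U": 2}))            # True  - two general-purpose sleds
print(fits_in_chassis({"1U": 2, "2U": 1}))   # True  - a mixed configuration
print(fits_in_chassis({"1U": 3, "2U": 1}))   # False - five bays, one too many
```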
>> But if someone tells you, well, you know, we have 20 cellular sites, we're moving to an open model, and we need the horsepower to do what we want to do... I'm trying to gauge what that means. Is it one of these? Or is it more like four of these? >> So we'll go... >> It depends? >> Yeah, it depends, you're absolutely right. However, we can go right there. So if you look at the 2U, >> Yeah. >> we have three PCIe slots, as Heather mentioned. So let's say you have a typical cell site, right? We could support a cell site that has three radios in this configuration, or, multiplied out, it could have up to 18 radios, and we could actually support that. We could support multiple form factors or multiple deployments at a particular cell site. So to your point, it really does depend, and that's one of the reasons we've designed it the way we have. For example, a customer says that for their initial deployment they only need one compute node, because maybe they're only going to have two or three carriers, so there you've got maybe six or eight or nine radios. Well then, you put in a single node, but they may want to scale over time. Well, you already have the chassis; they just come in and put in a new sled. The other beauty of that is, maybe they wait, but then they want to adopt new technology. They don't even have to buy a whole new server. They can update to >> Heather: Yeah. >> the newest technology, same chassis: put that in, connect to the radios, and keep going. >> But in this chassis, is it fair to say that most people will be shocked by how much traffic can go through something like this? In the sense that a tower is servicing 'n' number of conversations and data streams going through something like this? I mean, it blows my mind to think of thousands of people accessing something and having them all routed through something like this. >> It'll depend on what they're doing with that data. You've probably talked a lot about the types of radios, right? Are we going to be massive MIMO, or what type of radio? Is it going to be a mix of 4G and 5G? So it'll really depend on the type of radio, and then where this is located. Is it in a dense urban environment, or is it in a rural type of environment at that cell site shelter, out in a suburban area? So it will depend, but that's the beauty of this: I get the right CPU, I get the right number of add-in cards to connect to the right radios. I purchase what I need, and I may scale to that. I may be in a growing part of the city, like where I'm from, or in San Diego where Heather's from, in a new suburb, and they put out a new tower and the community grows rapidly. Well then, they may put out one, and then you may add another one, and I can connect to more radios, more carriers. So it really just comes down to the type of site and what you're trying to put through it. It could be at a stadium where I may have a lot of people, and I may have video streaming and other things. Not only could it be network connectivity, but I could do other functions, like the multi-access edge compute that you've heard talked about here. So I could have a GPU processing information on one side, and I could do network on the other side. >> I do, I do.
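Another rough aside on the "it depends" sizing James walks through: the sketch below estimates how many 2U sleds a site might need for a given radio count. The three PCIe slots per 2U sled come from his description, but the six-radios-per-card figure is an assumption inferred from his "up to 18 radios" remark, not a published spec.

```python
import math

# Rough sizing aid for the cell-site discussion above.
# SLOTS_PER_2U_SLED comes from James's description; RADIOS_PER_CARD = 6 is an
# assumed figure inferred from "up to 18 radios" across three slots.
SLOTS_PER_2U_SLED = 3
RADIOS_PER_CARD = 6

def sleds_needed(radio_count: int) -> int:
    """Estimate how many 2U sleds a cell site needs to front its radios."""
    cards = math.ceil(radio_count / RADIOS_PER_CARD)
    return math.ceil(cards / SLOTS_PER_2U_SLED)

for radios in (6, 9, 18, 36):
    print(f"{radios:>2} radios -> {sleds_needed(radios)} sled(s)")
# 6, 9, and 18 radios all fit on one sled; 36 would spill into a second one.
```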
>> Go for it. >> Yeah, no, I'm sorry. I don't want to hog all of the time. What about expansion beyond the chassis? Is there a scenario where you might load this chassis up with four of those nodes, but then, because you need some type of external connectivity, you go to another chassis that has maybe some of these sleds? Or are these self-contained and independent of one another? >> They are all independent. >> Okay. >> And we've done that for a reason. One of the things that was clear from the customers, again and again, was cost, right? Total cost of ownership. Not only how much does this cost when I buy it from you, but what is it going to take to power and run it. And so we've designed it with that in mind. We've separated and isolated the compute from the chassis and from the power, so a technician can deal with just the sled. And the other thing is, it's a sophisticated piece of equipment that the people who would go out and service it are not used to. So they can just come out and pull a sled without even bringing the system down. If they've got multiple nodes, pull one; they don't have to pull out a whole chassis or a whole server. Put one in, connect it back up while the system is still running. If a power supply goes out, they can come and pull it out. It's designed with a power infrastructure such that if I lose one power supply, I'm not losing the whole system. So it's really that serviceability and total cost of ownership at the edge that led us to do this as a configurable chassis. >> I was just going to ask you about TCO reduction, but another thing that I'm curious about is there seems to be a sustainability angle here. Is that something that you guys talk with customers about, in terms of reducing footprint and being able to pack more in with less, reducing TCO, reducing storage and power consumption, that sort of thing? >> Go ahead. >> You want me to take that one as well? So yes, it varies by the customer, but it does come up, and matter of fact, in that vein, something similar from a chassis perspective: especially now, with the technology changing so fast and customers still trying to figure out, well, is this how we're really going to deploy it? You can configure it, and if that doesn't work, they reconfigure it. Or, as I mentioned earlier, I purchased a single sled today, and I purchased a chassis. Well, then the next generation comes. I don't have to purchase a new chassis. I don't have to purchase a new power supply. So we're trying to address those sustainability issues as we go, again, back to the whole TCO. They're related to some extent. >> Right. Definitely. We hear a lot from customers in every industry about ESG, and it's an important initiative. So Dell being able to help facilitate that for customers, I'm sure, is part of what gives you that competitive advantage. But you talked about, James, and we talked about it in an earlier segment, that competitors are coming by, sniffing around your booth. What's going on? Talk about, from both of your lenses, the competitive advantage that you think this gives Dell in telco. Heather, we'll start with you. >> Heather: Yeah, I think the first one, which we've really been hitting home with, is the flexibility for scalability, right?
This is really designed for any workload, from AI and inferencing on, say, a factory floor all the way to the cell site. I don't know another server that could say that, all in one box, right? And the second thing is really all of the TCO savings that will happen immediately at the point of sale and also throughout the life cycle of this product, which is designed to have an extremely long lifetime compared to a traditional server. >> Yeah, I'll get a little geeky with you on that one. Heather mentioned that we'll be able to take this, eventually, to 65 C operating conditions. We've designed some of the thermal solutions enabling us to go there, and that will also help us become more power efficient. So, again, back to the flexibility, even in how we cool it, it enables us to do that. >> So do you expect, you just mentioned, if I heard you correctly, the idea that this might have a longer usable life than the average kind of refresh cycle we see in general IT. I mean, how often are they replacing equipment now in kind of legacy network environments? >> I believe the traditional life cycle of a server is, what, three to five years? Three to five years, traditionally. And with the sled-based design, like James said, we'll be designing new sleds every year or two that can just be plugged in and swapped out. So the chassis is really designed to live much longer than just three to five years. >> James: We're having customers ask for anywhere from seven years to when it dies. So a substantial increase in the life cycle as we move out, because, as you probably know, the further I get out on the edge, the more costly it is. >> Lisa: Yep. >> And I don't want to change it if I don't have to. So something has to justify me changing it. We're trying to build to support that longevity, but with that longevity, things change. I mean, seven years is a long time in technology. >> Lisa: Yes, it is. >> So we need to be there for those customers that are ready for that change, or something changed and they want to still be able to adopt it without having to change a lot of their infrastructure. >> So customers are going to want to get their hands on this, obviously. We can tell by your excitement. Is this GA now? Where is it GA, and where can folks go to learn more? >> Yeah, so we are here at Mobile World Congress in our booth. We've got a few featured here and in other booths throughout the venue. But if you're not here at Mobile World Congress, this will launch on the market at the end of May for Dell. >> Awesome. And what geographies? >> Worldwide. >> Worldwide. Get your hands on the XR8000, worldwide, in just a couple of months. Guys, thank you >> James: Thank you very much. >> for the show and tell, talking to us about why you're designing this for the telco edge, the importance there, and what it's going to enable operators to achieve. We appreciate your time, your insights, and your show and tell. >> Thanks! >> Thank you. >> For our guests and for Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from MWC 23 in Barcelona, Spain. Be back with our day two wrap with Dave Vellante and some guests in just a minute. (bright music)
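To make the refresh-cycle economics Heather and James describe concrete, here is a toy comparison. Every price and interval below is a made-up placeholder; only the structure of the argument, replacing a whole server every few years versus swapping sleds into a long-lived chassis, comes from the conversation.

```python
# Back-of-the-envelope comparison of the refresh economics discussed above.
# All prices and intervals are placeholders, not Dell figures.
HORIZON_YEARS = 9

def traditional_cost(server_price=8000, refresh_years=3):
    """Replace the entire server at every refresh."""
    refreshes = HORIZON_YEARS // refresh_years
    return server_price * refreshes

def sled_based_cost(chassis_price=3000, sled_price=4000, sled_refresh_years=3):
    """Buy the chassis once; only the sled is refreshed."""
    refreshes = HORIZON_YEARS // sled_refresh_years
    return chassis_price + sled_price * refreshes

print("traditional:", traditional_cost())   # 24000 over nine years
print("sled-based: ", sled_based_cost())    # 15000 over nine years
```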

Published Date : Feb 28 2023



Michael Foster, Red Hat | CloudNativeSecurityCon 23


 

(lively music) >> Welcome back to our coverage of Cloud Native Security Con. I'm Dave Vellante, here in our Boston studio. We're connecting today, throughout the day, with Palo Alto on the ground in Seattle. And right now I'm here with Michael Foster with Red Hat. He's on the ground in Seattle. We're going to discuss the trends and containers and security and everything that's going on at the show in Seattle. Michael, good to see you, thanks for coming on. >> Good to see you, thanks for having me on. >> Lot of market momentum for Red Hat. The IBM earnings call the other day, announced OpenShift is a billion-dollar ARR. So it's quite a milestone, and it's not often, you know. It's hard enough to become a billion-dollar software company and then to have actually a billion-dollar product alongside. So congratulations on that. And let's start with the event. What's the buzz at the event? People talking about shift left, obviously supply chain security is a big topic. We've heard a little bit about or quite a bit about AI. What are you hearing on the ground? >> Yeah, so the last event I was at that I got to see you at was three months ago, with CubeCon and the talk was supply chain security. Nothing has really changed on that front, although I do think that the conversation, let's say with the tech companies versus what customers are actually looking at, is slightly different just based on the market. And, like you said, thank you for the shout-out to a billion-dollar OpenShift, and ACS is certainly excited to be part of that. We are seeing more of a consolidation, I think, especially in security. The money's still flowing into security, but people want to know what they're running. We've allowed, had some tremendous growth in the last couple years and now it's okay. Let's get a hold of the containers, the clusters that we're running, let's make sure everything's configured. They want to start implementing policies effectively and really get a feel for what's going on across all their workloads, especially with the bigger companies. I think bigger companies allow some flexibility in the security applications that they can deploy. They can have different groups that manage different ones, but in the mid to low market, you're seeing a lot of consolidation, a lot of companies that want basically one security tool to manage them all, so to speak. And I think that the features need to somewhat accommodate that. We talk supply chain, I think most people continue to care about network security, vulnerability management, shifting left and enabling developers. That's the general trend I see. Still really need to get some hands on demos and see some people that I haven't seen in a while. >> So a couple things on, 'cause, I mean, we talk about the macroeconomic climate all the time. We do a lot of survey data with our partners at ETR, and their recent data shows that in terms of cost savings, for those who are actually cutting their budgets, they're looking to consolidate redundant vendors. So, that's one form of consolidation. The other theme, of course, is there's so many tools out in the security market that consolidating tools is something that can help simplify, but then at the same time, you see opportunities open up, like IOT security. And so, you have companies that are starting up to just do that. So, there's like these countervailing trends. I often wonder, Michael, will this ever end? It's like the universe growing and tooling, what are your thoughts? >> I mean, I completely agree. 
It's hard to balance trying to grow the company in a time like this, at the same time while trying to secure it all, right? So you're seeing the consolidation but some of these applications and platforms need to make some promises to say, "Hey, we're going to move into this space." Right, so when you have like Red Hat who wants to come out with edge devices and help manage the IOT devices, well then, you have a security platform that can help you do that, that's built in. Then the messaging's easy. When you're trying to do that across different cloud providers and move into IOT, it becomes a little bit more challenging. And so I think that, and don't take my word for this, some of those IOT startups, you might see some purchasing in the next couple years in order to facilitate those cloud platforms to be able to expand into that area. To me it makes sense, but I don't want to hypothesize too much from the start. >> But I do, we just did our predictions post and as a security we put up the chart of candidates, and there's like dozens, and dozens, and dozens. Some that are very well funded, but I mean, you've seen some down, I mean, down rounds everywhere, but these many companies have raised over a billion dollars and it's like uh-oh, okay, so they're probably okay, maybe. But a lot of smaller firms, I mean there's just, there's too many tools in the marketplace, but it seems like there is misalignment there, you know, kind of a mismatch between, you know, what customers would like to have happen and what actually happens in the marketplace. And that just underscores, I think, the complexities in security. So I guess my question is, you know, how do you look at Cloud Native Security, and what's different from traditional security approaches? >> Okay, I mean, that's a great question, and it's something that we've been talking to customers for the last five years about. And, really, it's just a change in mindset. Containers are supposed to unleash developer speed, and if you don't have a security tool to help do that, then you're basically going to inhibit developers in some form or another. I think managing that, while also giving your security teams the ability to tell the message of we are being more secure. You know, we're limiting vulnerabilities in our cluster. We are seeing progress because containers, you know, have a shorter life cycle and there is security and speed. Having that conversation with the C-suites is a little different, especially when how they might be used to virtual machines and managing it through that. I mean, if it works, it works from a developer's standpoint. You're not taking advantage of those containers and the developer's speed, so that's the difference. Now doing that and then first challenge is making that pitch. The second challenge is making that pitch to then scale it, so you can get onboard your developers and get your containers up and running, but then as you bring in new groups, as you move over to Kubernetes or you get into more container workloads, how do you onboard your teams? How do you scale? And I tend to see a general trend of a big investment needed for about two years to make that container shift. And then the security tools come in and really blossom because once that core separation of responsibilities happens in the organization, then the security tools are able to accelerate the developer workflow and not inhibit it. >> You know, I'm glad you mentioned, you know, separation of responsibilities. 
We go to a lot of shows, as you know, with theCUBE, and many of them are cloud shows. And in the one hand, Cloud has, you know, obviously made the world, you know, more interesting and better in so many different ways and even security, but it's like new layers are forming. You got the cloud, you got the shared responsibility model, so the cloud is like the first line of defense. And then you got the CISO who is relying heavily on devs to, you know, the whole shift left thing. So we're asking developers to do a lot and then you're kind of behind them. I guess you have audit is like the last line of defense, but my question to you is how can software developers really ensure that cloud native tools that they're using are secure? What steps can they take to improve security and specifically what's Red Hat doing in that area? >> Yeah, well I think there's, I would actually move away from that being the developer responsibility. I think the job is the operators' and the security people. The tools to give them the ability to see. The vulnerabilities they're introducing. Let's say signing their images, actually verifying that the images that's thrown in the cloud, are the ones that they built, that can all be done and it can be done open source. So we have a DevSecOps validated pattern that Red Hat's pushed out, and it's all open source tools in the cloud native space. And you can sign your builds and verify them at runtime and make sure that you're doing that all for free as one option. But in general, I would say that the hope is that you give the developer the information to make responsible choices and that there's a dialogue between your security and operations and developer teams but security, we should not be pushing that on developer. And so I think with ACS and our tool, the goal is to get in and say, "Let's set some reasonable policies, have a conversation, let's get a security liaison." Let's say in the developer team so that we can make some changes over time. And the more we can automate that and the more we can build and have that conversation, the better that you'll, I don't say the more security clusters but I think that the more you're on your path of securing your environment. >> How much talk is there at the event about kind of recent high profile incidents? We heard, you know, Log4j, of course, was mentioned in the Keynote. Somebody, you know, I think yelled out from the audience, "We're still dealing with that." But when you think about these, you know, incidents when looking back, what lessons do you think we've learned from these events? >> Oh, I mean, I think that I would say, if you have an approach where you're managing your containers, managing the age and using containers to accelerate, so let's say no images that are older than 90 days, for example, you're going to avoid a lot of these issues. And so I think people that are still dealing with that aspect haven't set up the proper, let's say, disclosure between teams and update strategy and so on. So I don't want to, I think the Log4j, if it's still around, you know, something's missing there but in general you want to be able to respond quickly and to do that and need the tools and policies to be able to tell people how to fix that issue. I mean, the Log4j fix was seven days after, so your developers should have been well aware of that. Your security team should have been sending the messages out. And I remember even fielding all the calls, all the fires that we had to put out when that happened. But yeah. 
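A minimal sketch of the "no images older than 90 days" guardrail Michael mentions above. It only does the date math; where the creation timestamp comes from (a registry's config blob, `docker inspect`, and so on) is left open, and nothing here is ACS-specific code.

```python
# Policy sketch: flag container images whose build date exceeds a maximum age.
from datetime import datetime, timezone

MAX_AGE_DAYS = 90

def image_too_old(created_iso: str, max_age_days: int = MAX_AGE_DAYS) -> bool:
    """Return True if an image's creation timestamp is older than policy allows."""
    created = datetime.fromisoformat(created_iso.replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - created
    return age.days > max_age_days

# Example: a build stamped over a year ago clearly violates the policy.
print(image_too_old("2022-01-15T10:30:00Z"))                       # True
print(image_too_old(datetime.now(timezone.utc).isoformat()))       # False
```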
>> I thought Brian Behlendorf's, you know, talk this morning was interesting 'cause he was making an attempt to say, "Hey, here's some things that you might not be thinking about that are likely to occur." And I wonder if you could, you know, comment on them and give us your thoughts as to how the industry generally, maybe Red Hat specifically, are thinking about dealing with them. He mentioned ChatGPT or other GPT to automate Spear phishing. He said the identity problem is still not fixed. Then he talked about free riders sniffing repos essentially for known vulnerabilities that are slow to fix. He talked about regulations that might restrict shipping code. So these are things that, you know, essentially, we can, they're on the radar, but you know, we're kind of putting out, you know, yesterday's fire. What are your thoughts on those sort of potential issues that we're facing and how are you guys thinking about it? >> Yeah, that's a great question, and I think it's twofold. One, it's brought up in front of a lot of security leaders in the space for them to be aware of it because security, it's a constant battle, constant war that's being fought. ChatGPT lowers the barrier of entry for a lot of them, say, would-be hackers or people like that to understand systems and create, let's say, simple manifests to leverage Kubernetes or leverage a misconfiguration. So as the barrier drops, we as a security team in security, let's say group organization, need to be able to respond and have our own tools to be able to combat that, and we do. So a lot of it is just making sure that we shore up our barriers and that people are aware of these threats. The harder part I think is educating the public and that's why you tend to see maybe the supply chain trend be a little bit ahead of the implementation. I think they're still, for example, like S-bombs and signing an attestation. I think that's still, you know, a year, two years, away from becoming, let's say commonplace, especially in something like a production environment. Again, so, you know, stay bleeding edge, and then make sure that you're aware of these issues and we'll be constantly coming to these calls and filling you in on what we're doing and make sure that we're up to speed. >> Yeah, so I'm hearing from folks like yourself that the, you know, you think of the future of Cloud Native Security. We're going to see continued emphasis on, you know, better integration of security into the DevSecOps. You're pointing out it's really, you know, the ops piece, that runtime that we really need to shore up. You can't just put it on the shoulders of the devs. And, you know, using security focused tools and best practices. Of course you hear a lot about that and the continued drive toward automation. My question is, you know, automation, machine learning, how, where are we in that maturity cycle? How much of that is being adopted? Sometimes folks are, you know, they embrace automation but it brings, you know, unknown, unintended consequences. Are folks embracing that heavily? Are there risks associated around that, or are we kind of through that knothole in your view? >> Yeah, that's a great question. I would compare it to something like a smart home. You know, we sort of hit a wall. You can automate so much, but it has to actually be useful to your teams. So when we're going and deploying ACS and using a cloud service, like one, you know, you want something that's a service that you can easily set up. 
And then the other thing is you want to start in inform mode. So you can't just automate everything, even if you're doing runtime enforcement, you need to make sure that's very, very targeted to exactly what you want and then you have to be checking it because people start new workloads and people get onboarded every week or month. So it's finding that balance between policies where you can inform the developer and the operations teams and that they give them the information to act. And that worst case you can step in as a security team to stop it, you know, during the onboarding of our ACS cloud service. We have an early access program and I get on-calls, and it's not even security team, it's the operations team. It starts with the security product, you know, and sometimes it's just, "Hey, how do I, you know, set this policy so my developers will find this vulnerability like a Log4Shell and I just want to send 'em an email, right?" And these are, you know, they have the tools and they can do that. And so it's nice to see the operations take on some security. They can automate it because maybe you have a NetSec security team that doesn't know Kubernetes or containers as well. So that shared responsibility is really useful. And then just again, making that automation targeted, even though runtime enforcement is a constant thing that we talk about, the amount that we see it in the wild where people are properly setting up admission controllers and it's acting. It's, again, very targeted. Databases, cubits x, things that are basically we all know is a no-go in production. >> Thank you for that. My last question, I want to go to the, you know, the hardest part and 'cause you're talking to customers all the time and you guys are working on the hardest problems in the world. What is the hardest aspect of securing, I'm going to come back to the software supply chain, hardest aspect of securing the software supply chain from the perspective of a security pro, software engineer, developer, DevSecOps Pro, and then this part b of that is, is how are you attacking that specifically as Red Hat? >> Sure, so as a developer, it's managing vulnerabilities with updates. As an operations team, it's keeping all the cluster, because you have a bunch of different teams working in the same environment, let's say, from a security team. It's getting people to listen to you because there are a lot of things that need to be secured. And just communicating that and getting it actionable data to the people to make the decisions as hard from a C-suite. It's getting the buy-in because it's really hard to justify the dollars and cents of security when security is constantly having to have these conversations with developers. So for ACS, you know, we want to be able to give the developer those tools. We also want to build the dashboards and reporting so that people can see their vulnerabilities drop down over time. And also that they're able to respond to it quickly because really that's where the dollars and cents are made in the product. It's that a Log4Shell comes out. You get immediately notified when the feeds are updated and you have a policy in action that you can respond to it. So I can go to my CISOs and say, "Hey look, we're limiting vulnerabilities." And when this came out, the developers stopped it in production and we were able to update it with the next release. Right, like that's your bread and butter. That's the story that you want to tell. 
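For readers who want to see what a "very targeted" admission policy like the ones Michael describes can look like, here is an illustrative sketch. The request and response shapes follow the Kubernetes admission.k8s.io/v1 AdmissionReview format, but the policy itself (an assumed allowed-registry prefix and a ban on ":latest" tags) is a teaching example, not ACS.

```python
# Illustrative admission check: deny Pods that use a floating ":latest" tag or
# pull from outside an approved registry. Teaching example only.
ALLOWED_REGISTRIES = ("registry.internal.example.com/",)  # assumed value

def review(admission_review: dict) -> dict:
    """Take an AdmissionReview request for a Pod and return the response half."""
    request = admission_review["request"]
    pod = request["object"]
    violations = []
    for container in pod["spec"].get("containers", []):
        image = container["image"]
        if image.endswith(":latest"):
            violations.append(f"{image}: floating 'latest' tag")
        if not image.startswith(ALLOWED_REGISTRIES):
            violations.append(f"{image}: untrusted registry")
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request["uid"],
            "allowed": not violations,
            "status": {"message": "; ".join(violations)} if violations else {},
        },
    }
```

In practice a function like this would sit behind a ValidatingWebhookConfiguration and, as Michael suggests, would typically start in an inform or audit mode before it is allowed to actually deny workloads.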
Again, it's a harder story to tell, but it's easy when you have the information to be able to justify the money that you're spending on your security tools. Hopefully that answered your question. >> It does. That was awesome. I mean, you got data, you got communication, you got the people, obviously there's skillsets, you have of course, tooling and technology is a big part of that. Michael, really appreciate you coming on the program, sharing what's happening on the ground in Seattle and can't wait to have you back. >> Yeah. Awesome. Thanks again for having me. >> Yeah, our pleasure. All right. Thanks for watching our coverage of the Cloud Native Security Con. I'm Dave Vellante. I'm in our Boston studio. We're connecting to Palo Alto. We're connecting on the ground in Seattle. Keep it right there for more coverage. Be right back. (lively music)

Published Date : Feb 2 2023



Lee Klarich, Palo Alto Networks | Palo Alto Networks Ignite22


 

>>The cube presents Ignite 22, brought to you by Palo Alto Networks. >>Good morning. Live from the MGM Grand. It's the cube at Palo Alto Networks Ignite 2022. Lisa Martin here with Dave Valante, day two, Dave of our coverage, or last live day of the year, which I can't believe, lots of good news coming out from Palo Alto Networks. We're gonna sit down with its Chief product officer next and dissect all of that. >>Yeah. You know, oftentimes in, in events like this, day two is product day. And look, it's all about products and sales. Yeah, I mean those, that's the, the, the golden rule. Get the product right, get the sales right, and everything else will take care of itself. So let's talk product. >>Yeah, let's talk product. Lee Claridge joins us, the Chief Product Officer at Palo Alto Networks. Welcome Lee. Great to have >>You. Thank you so much. >>So we didn't get to see your keynote yesterday, but we heard one of the things, you know, we've been talking about the threat landscape, the challenges. We had Unit 42, Wendy on yesterday. We had Nash on and near talking about the massive challenges in the threat landscape. But we understand, despite that you are optimistic. I am. Talk about your optimism given the massive challenges that every organization is facing today. >>Look, cybersecurity's hard and often in cybersecurity in the industry, a lot of people get sort of really focused on what the threat actors are doing, why they're successful. We investigate breaches and we think of it, it just starts to feel somewhat overwhelming for a lot of folks. And I just happen to think a little bit differently. I, I look at it and I think it's actually a solvable problem. >>Talk about cyber resilience. How does Palo Alto Networks define that and how does it help customers achieve that? Cuz that's the, that's the holy grail these days. >>Yes. Look, the, the way I think about cyber resilience is basically in two pieces. One, it's all about how do we prevent the threat actors from actually being successful in the first place. Second, we also have to be prepared for what happens if they happen to find a way to get through, and how do we make sure that that happens? The blast radius is, is as narrowly contained as possible. And so the, the way that we approach this is, you know, I, I kind of think in terms of like threes three core principles. Number one, we have to have amazing technology and we have to constantly be, keep keeping up with and ideally ahead of what attackers are doing. It's a big part of my job as the chief product officer, right? Second is we, you know, one of the, the big transformations that's happened is the advent of, of AI and the opportunity, as long as we can do it, a great job of collecting great data, we can drive AI and machine learning models that can start to be used for our advantage as defenders, and then further use that to drive automation. >>So we take the human out of the response as much as possible. What that allows us to do is actually to start using AI and automation to disrupt attackers as it's happening. The third piece then becomes natively integrating these capabilities into a platform. And when we do that, what allows us to do is to make sure that we are consistently delivering cybersecurity everywhere that it needs to happen. That we don't have gaps. Yeah. So great tech AI and automation deliver natively integrated through platforms. This is how we achieve cyber resilience. >>So I like the positivity. 
In fact, Steven Schmidt, who's now the CSO of, of Amazon, you know, Steven, and it was the CSO at AWS at the time, the first reinforced, he stood up on stage and said, listen, this narrative that's all gloom and doom is not the right approach. We actually are doing a good job and we have the capability. So I was like, yeah, you know, okay. I'm, I'm down with that. Now when I, my question is around the, the portfolio. I, I was looking at, you know, some of your alternatives and options and the website. I mean, you got network security, cloud security, you got sassy, you got capp, you got endpoint, pretty much everything. You got cider security, which you just recently acquired for, you know, this whole shift left stuff, you know, nothing in there on identity yet. That's good. You partner for that, but, so could you describe sort of how you think about the portfolio from a product standpoint? How you continue to evolve it and what's the direction? Yes. >>So the, the, the cybersecurity industry has long had this, I'm gonna call it a major flaw. And the major flaw of the cybersecurity industry has been that every time there is a problem to be solved, there's another 10 or 20 startups that get funded to solve that problem. And so pretty soon what you have is you're, if you're a customer of this is you have 50, a hundred, the, the record is over 400 different cybersecurity products that as a customer you're trying to operationalize. >>It's not a good record to have. >>No, it's not a good record. No. This is, this is the opposite of Yes. Not a good personal best. So the, so the reason I start there in answering your question is the, the way that, so that's one end of the extreme, the other end of the extreme view to say, is there such a thing as a single platform that does everything? No, there's not. That would be nice. That was, that sounds nice. But the reality is that cybersecurity has to be much broader than any one single thing can do. And so the, the way that we approach this is, is three fundamental areas that, that we, Palo Alto Networks are going to be the best at. One is network security within network security. This includes hardware, NextGen, firewalls, software NextGen, firewalls, sassy, all the different security services that tie into that. All of that makes up our network security platforms. >>So everything to do with network security is integrated in that one place. Second is around cloud security. The shift to the cloud is happening is very real. That's where Prisma Cloud takes center stage. C a P is the industry acronym. If if five letters thrown together can be called an acronym. The, so cloud native application protection platform, right? So this is where we bring all of the different cloud security capabilities integrated together, delivered through one platform. And then security, security operations is the third for us. This is Cortex. And this is where we bring together endpoint security, edr, ndr, attack, surface management automation, all of this. And what we had, what we announced earlier this year is x Im, which is a Cortex product for actually integrating all of that together into one SOC transformation platform. So those are the three platforms, and that's how we deliver much, much, much greater levels of native integration of capabilities, but in a logical way where we're not trying to overdo it. >>And cider will fit into two or three >>Into Prisma cloud into the second cloud to two. Yeah. 
As part of the shift left strategy of how we secure makes sense applications in the cloud >>When you're in customer conversations. You mentioned the record of 400 different product. That's crazy. Nash was saying yesterday between 30 and 50 and we talked with him and near about what's realistic in terms of getting organizations to, to be able to consolidate. I'd love to understand what does cybersecurity transformation look like for the average organization that's running 30 to 50 point >>Solutions? Yeah, look, 30 to 50 is probably, maybe normal. A hundred is not unusual. Obviously 400 is the extreme example. But all of those are, those numbers are too big right now. I think, I think realistic is high. Single digits, low double digits is probably somewhat realistic for most organizations, the most complex organizations that might go a bit above that if we're really doing a good job. That's, that's what I think. Now second, I do really want to point out on, on the product guy. So, so maybe this is just my way of thinking, consolidation is an outcome of having more tightly and natively integrated capabilities. Got you. And the reason I flip that around is if I just went to you and say, Hey, would you like to consolidate? That just means maybe fewer vendors that that helps the procurement person. Yes. You know, have to negotiate with fewer companies. Yeah. Integration is actually a technology statement. It's delivering better outcomes because we've designed multiple capabilities to work together natively ourselves as the developers so that the customer doesn't have to figure out how to do it. It just happens that by, by doing that, the customer gets all this wonderful technical benefit. And then there's this outcome sitting there called, you've just consolidated your complexity. How >>Specialized is the customer? I think a data pipelines, and I think I have a data engineer, have a data scientists, a data analyst, but hyper specialized roles. If, if, let's say I have, you know, 30 or 40, and one of 'em is an SD wan, you know, security product. Yeah. I'm best of breed an SD wan. Okay, great. Palo Alto comes in as you, you pointed out, I'm gonna help you with your procurement side. Are there hyper specialized individuals that are aligned to that? And how that's kind of part A and B, how, assuming that's the case, how does that integration, you know, carry through to the business case? So >>Obviously there are specializations, this is the, and, and cybersecurity is really important. And so there, this is why there had, there's this tendency in the past to head toward, well I have this problem, so who's the best at solving this one problem? And if you only had one problem to solve, you would go find the specialist. The, the, the, the challenge becomes, well, what do you have a hundred problems to solve? I is the right answer, a hundred specialized solutions for your a hundred problems. And what what I think is missing in this approach is, is understanding that almost every problem that needs to be solved is interconnected with other problems to be solved. It's that interconnectedness of the problems where all of a sudden, so, so you mentioned SD wan. Okay, great. I have Estee wan, I need it. Well what are you connecting SD WAN to? >>Well, ideally our view is you would connect SD WAN and branch to the cloud. Well, would you run in the cloud? 
Well, in our case, we can take our SD wan, connect it to Prisma access, which is our cloud security solution, and we can natively integrate those two things together such that when you use 'em together, way easier. Right? All of a sudden we took what seemed like two separate problems. We said, no, actually these problems are related and we can deliver a solution where those, those things are actually brought together. And that's just one simple example, but you could, you could extend that across a lot of these other areas. And so that's the difference. And that's how the, the, the mindset shift that is happening. And, and I I was gonna say needs to happen, but it's starting to happen. I'm talking to customers where they're telling me this as opposed to me telling them. >>So when you walk around the floor here, there's a visual, it's called a day in the life of a fuel member. And basically what it has, it's got like, I dunno, six or seven different roles or personas, you know, one is management, one is a network engineer, one's a coder, and it gives you an X and an O. And it says, okay, put the X on things that you spend your time doing, put the o on things that you wanna spend your time doing a across all different sort of activities that a SecOps pro would do. There's Xs and O's in every one of 'em. You know, to your point, there's so much overlap going on. This was really difficult to discern, you know, any kind of consistent pattern because it, it, it, unlike the hyper specialization and data pipelines that I just described, it, it's, it's not, it, it, there's way more overlap between those, those specialization roles. >>And there's a, there's a second challenge that, that I've observed and that we are, we've, we've been trying to solve this and now I'd say we've become, started to become a lot more purposeful in, in, in trying to solve this, which is, I believe cybersecurity, in order for cyber security vendors to become partners, we actually have to start to become more opinionated. We actually have to start, guys >>Are pretty opinionated. >>Well, yes, but, but the industry large. So yes, we're opinionated. We build these products, but that have, that have our, I'll call our opinions built into it, and then we, we sell the, the product and then, and then what happens? Customer says, great, thank you for the product. I'm going to deploy it however I want to, which is fine. Obviously it's their choice at the end of the day, but we actually should start to exert an opinion to say, well, here's what we would recommend, here's why we would recommend that. Here's how we envisioned it providing the most value to you. And actually starting to build that into the products themselves so that they start to guide the customer toward these outcomes as opposed to just saying, here's a product, good luck. >>What's, what's the customer lifecycle, not lifecycle, but really kind of that, that collaboration, like it's one thing to, to have products that you're saying that have opinions to be able to inform customers how to deploy, how to use, but where is their feedback in this cycle of product development? >>Oh, look, my, this, this is, this is my life. I'm, this is, this is why I'm here. This is like, you know, all day long I'm meeting with customers and, and I share what we're doing. 
But, but it's, it's a, it's a 50 50, I'm half the time I'm listening as well to understand what they're trying to do, what they're trying to accomplish, and how, what they need us to do better in order to help them solve the problem. So the, the, and, and so my entire organization is oriented around not just telling customers, here's what we did, but listening and understanding and bringing that feedback in and constantly making the products better. That's, that's the, the main way in which we do this. Now there's a second way, which is we also allow our products to be customized. You know, I can say, here's our best practices, we see it, but then allowing our customer to, to customize that and tailor it to their environment, because there are going to be uniquenesses for different customers in parti, we need more complex environments. Explain >>Why fire firewalls won't go away >>From your perspective. Oh, Nikesh actually did a great job of explaining this yesterday, and although he gave me credit for it, so this is like a, a circular kind of reference here. But if you think about the firewalls slightly more abstract, and you basically say a NextGen firewalls job is to inspect every connection in order to make sure the connection should be allowed. And then if it is allowed to make sure that it's secure, >>Which that is the definition of an NextGen firewall, by the way, exactly what I just said. Now what you noticed is, I didn't describe it as a hardware device, right? It can be delivered in hardware because there are environments where you need super high throughput, low latency, guess what? Hardware is the best way of delivering that functionality. There's other use cases cloud where you can't, you, you can't ship hardware to a cloud provider and say, can you install this hardware in front of my cloud? No, no, no. You deployed in a software. So you take that same functionality, you instantly in a software, then you have other use cases, branch offices, remote workforce, et cetera, where you say, actually, I just want it delivered from the cloud. This is what sassy is. So when I, when I look at and say, the firewall's not going away, what, what, what I see is the functionality needed is not only not going away, it's actually expanding. But how we deliver it is going to be across these three form factors. And then the customer's going to decide how they need to intermix these form factors for their environment. >>We put forth this notion of super cloud a while about a year ago. And the idea being you're gonna leverage the hyperscale infrastructure and you're gonna build a, a, you're gonna solve a common problem across clouds and even on-prem, super cloud above the cloud. Not Superman, but super as in Latin. But it turned into this sort of, you know, superlative, which is fun. But the, my, my question to you is, is, is, is Palo Alto essentially building a common cross-cloud on-prem, presumably out to the edge consistent experience that we would call a super cloud? >>Yeah, I don't know that we've ever used the term surfer cloud to describe it. Oh, you don't have to, but yeah. But yes, based on how you describe it, absolutely. And it has three main benefits that I describe to customers all the time. The first is the end user experience. So imagine your employee, and you might work from the office, you might work from home, you might work while from, from traveling and hotels and conferences. And, and by the way, in one day you might actually work from all of those places. 
So the first part is that the end user experience becomes way better when it doesn't matter where they're working from — they always get the same experience. Huge benefit from a productivity perspective. The second benefit is security operations. Think about the people who are actually administering these policies and analyzing the security events. Imagine how much better it is for them when it's all common and consistent across everywhere that has to happen — cloud, on-prem, branch, remote workforce, et cetera. So there's an operational benefit that is super valuable. Third, a security benefit. Imagine if, in this platform-based approach, we come out with some new amazing innovation that is able to detect and block new types of attacks. Guess what — we can deliver that across hardware, software, and SASE uniformly and keep it all up to date. From a security perspective, that's way better than trying to figure out, okay, there's some new technology — does my hardware provider have that technology or not? Does my software provider? So it's bringing that into one place. >>From a developer perspective, is there a PaaS layer — forgive me, super PaaS — that allows the developers to have a common experience, irrespective of physical location, with the explicit purpose of serving the objective of your platform? >>Normally when I think in the context of developers, I'm thinking of the people who are building the applications that are being deployed. Those applications may be deployed in a data center, increasingly in private clouds, might be deployed into public cloud — it might even be hybrid in nature. And if you think about what the developer wants, the developer actually wants to not have to think about security, quite frankly. They want to think about how to develop the functionality they need as quickly as possible, with the highest quality possible. >>But they are being forced to think about it more and more. Well, but anyway, I didn't mean to interrupt you. >>No, it's a great point. What we're trying to do is enable our security capabilities to work in a way that actually enables what the developer wants — that allows them to develop faster and to focus on the things they want to focus on. And the way we do that is by surfacing the security information they need to know in the tools that they use, as opposed to trying to bring them to our tools. So think about this: our customer is a security customer, yet in the application development lifecycle, the developer is often the user. So we're providing a solution to security, and then we're enabling them to surface it in the developer tools. By doing this, we actually make life easier for the developers, such that they're not really thinking about security so much as they're just saying, oh, I pulled down the wrong open source package — it's outdated, it has vulnerabilities — and I was notified the second I did it and told which one I should pull down instead. So I pulled down the right one. Now, if you're a developer, do you think that's security getting in your way? Not at all. If you're a developer, you're thinking, thank goodness — you told me at a point where it was easy to fix, as opposed to waiting a week or two and then telling me when it's going to be really hard to fix. 
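The "you pulled down the wrong open source package and were told immediately" experience Lee describes can be approximated with a small pre-commit style check. This is a minimal sketch under stated assumptions — the advisory data here is hardcoded and hypothetical (the package name and CVE are placeholders), whereas real tooling would query a live vulnerability feed.

    # Toy audit: flag pinned dependencies that appear in a hypothetical advisory map.
    ADVISORIES = {
        ("examplelib", "1.2.0"): "CVE-XXXX-YYYY: known issue, upgrade to >=1.2.4",
    }

    def audit(requirements_path: str):
        findings = []
        with open(requirements_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "==" not in line:
                    continue
                name, version = line.split("==", 1)
                advisory = ADVISORIES.get((name.lower(), version))
                if advisory:
                    findings.append(f"{name}=={version}: {advisory}")
        return findings

    if __name__ == "__main__":
        for finding in audit("requirements.txt"):
            print("WARNING:", finding)  # surfaced at pull-down time, not weeks later

The value is in where the signal appears: inside the developer's own workflow, at the moment the dependency is added, rather than in a separate security console a week later.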
>>Nothing more than that. So maybe they're talking to Terraform or some other HashiCorp environment. I got it. Okay. >>Absolutely. >>We're 30 seconds out — we're almost out of time — but I'd love to get your snapshot. Here we are at the end of calendar 2022. We know you're optimistic in this threat landscape, and we're obviously going to see more dynamics next year. What kind of nuggets can you drop about what we might hear and see in '23? >>You're going to see, across everything we do, a lot more focus on the use of AI and machine learning to drive automated outcomes for our customers. That's going to be the big transformation. It'll be a multi-year transformation, but you're going to see significant progress in the next 12 months. >>All right, well, what will be the sign of that progress, if I had to make a prediction? >>Better security with less effort. >>Okay, great. I feel like we can measure that. >>I feel like that's a mic drop moment. Lee, it's been great having you on the program. Thank you for walking us through in such great detail what's going on in the organization, what you're doing for customers, and how you're meeting the developers where they are. We'll have to have you back — there's just too much to unpack. Thank you both so much. >>Our pleasure. >>For Lee Klarich and Dave Vellante, I'm Lisa Martin. You're watching theCUBE Live from Palo Alto Networks Ignite 22 — theCUBE, the leader in live, emerging and enterprise tech coverage.

Published Date : Dec 14 2022



Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22


 

(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen, we are here with theCUBE Live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thanks for having us. >> Thank you for having us. >> I'm excited that you're starting off the day because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, I mean, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and enterprise customers. And not everybody wants to be in the top 500; what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, you kind of look at the sweet spot between 8, 12, 16, 32 nodes — that's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I mean, I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had older technologies like ATM, SONET, FDDI, and pretty much everything has now converged toward Ethernet. I mean, there are still some technologies such as InfiniBand, Omni-Path, that are out there, but basically they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you see also that the fact that Ethernet is used in the rest of the enterprise, and is used in the cloud data centers, means it is very easy to integrate HPC-based systems into those environments. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge? What's shipping now and what's in the near future? You're with Broadcom, you guys design this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. So this is what is in production; it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. Like if you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced — we announced this in August. This is Tomahawk 5, and this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency; it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not?
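Pete's "more than double the efficiency" point comes from fabric math: a higher-bandwidth switch means a higher radix at a given port speed, so the same cluster needs fewer boxes, fewer hops, and fewer optics. A rough back-of-the-envelope sketch — assumed non-blocking two-tier leaf/spine, fixed 400G ports, and not Broadcom's exact analysis — gives a feel for it; the larger savings (well beyond 2x) come when the bigger radix also lets a three-tier design collapse to two tiers.

    import math

    def two_tier_switches(nodes: int, radix: int) -> int:
        """Switches needed for a non-blocking two-tier leaf/spine fabric.
        radix = ports per switch; half of each leaf's ports face the nodes."""
        down_per_leaf = radix // 2
        leaves = math.ceil(nodes / down_per_leaf)
        uplinks = leaves * (radix - down_per_leaf)
        spines = math.ceil(uplinks / radix)
        return leaves + spines

    # Tomahawk 4 class: 25.6 Tb/s -> 64 x 400G ports; Tomahawk 5: 51.2 Tb/s -> 128 x 400G.
    for radix in (64, 128):
        print(f"{radix}-port switches: {two_tier_switches(4096, radix)} switches for 4096 nodes")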
>> Well, what I want to know, please tell me that in your labs, you have a poster on the wall that says T5, with some like Terminator kind of character. (all laughs) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of, well, the NICs that are going in there — what speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs in the 200 gig Ethernet port speed. So that would be four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon, 800 gig in the future. But say state of the art right now, what we're seeing for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us, customers are coming to us and saying, hey, we want to see flexibility and choice — hey, let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom: we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it easy and simple to configure the network for, essentially, MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there are going to be some learning curves there. And so what we want to do is really simplify that, so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T6 comes out? Or do you just say, what would be cool is we'll put this in the T6? >> No, we've had a very long partnership, both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip — and we actually have three different product lines within the switching group within Broadcom — we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way, when it comes to market, Dell can take it and deliver the exact features that they have in the current generation to their customers, to have that continuity. And also they give us feedback on the next-gen features they'd like to see, again, in both the hardware and the software. >> So I'm fascinated by... I always like to know like what, yeah, exactly. 
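The port speeds Pete lists are just lane math: a port is some number of SerDes lanes at a given signaling rate. A quick illustration (the lane rates shown are the common ones he mentions, not an exhaustive list):

    def port_speed_gbps(lanes: int, gbps_per_lane: int) -> int:
        return lanes * gbps_per_lane

    print(port_speed_gbps(4, 50))    # 200G port: four lanes of 50G PAM-4
    print(port_speed_gbps(4, 100))   # 400G port: four lanes of 100G PAM-4
    print(port_speed_gbps(8, 100))   # 800G port: eight lanes of 100G PAM-4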
Look, you start talking about the largest supercomputers, the most powerful supercomputers that exist today, and you start looking at the specs, and there might be two million CPU cores, an exaflop of performance. What are the outward limits of T5 in switches, building out a fabric? What does that look like? What are the increments in terms of how many... And I know it's a depends answer, but how many nodes can you support in a scale-out cluster before you need another switch? What does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. Where we see the most common implementation based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T5, that thing right there, (all laughing) inside a sheet metal box — obviously you've got a bunch of ports coming out of that. So what does the form factor look like for where that T5 sits? Is there just one in a chassis, or... What does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry's moved away from chassis for these high-end systems, more towards pizza boxes. And you can have composable systems where, in the past, you would have line cards and either the fabric cards that the line cards are plugged into or interfaced to. These days what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card and one of them as the line card. >> David: Okay. >> So what we see as the most common form factor for this — I'd say for North America, most common would be a 2RU, with 64 OSFP ports. And often each of those OSFPs, which is an 800 gig E or 800 gig port, we've broken out into two 400 gig ports. >> So yeah, in 2RU, and this is all air cooled, in 2RU you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy 4RU, just so that way they have the faceplate density, so they can plug in 128, say, QSFP 112. But yeah, it really depends on which optics, if you want to have DAC connectivity combined with optics. But those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. But, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? So when you look at running, essentially, HPC workloads, you have the MPI protocol — the message passing interface, right? And so what you need to do is make sure that that MPI message passing interface runs efficiently on Ethernet. And so this is why we want to test and validate all these different things, to make sure that that protocol runs really, really fast on Ethernet. 
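Armando's point about making MPI run efficiently over the Ethernet fabric is typically checked with latency and bandwidth micro-benchmarks between two nodes. A minimal ping-pong sketch with mpi4py — purely illustrative, not Dell's validated-design test suite, and it assumes an MPI build that can use the RoCE-capable fabric:

    # Run as, e.g.: mpirun -np 2 python pingpong.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = np.zeros(1024, dtype=np.uint8)   # 1 KiB message
    iters = 10000

    comm.Barrier()
    start = MPI.Wtime()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        # Each iteration is a round trip, so halve it for one-way latency.
        print(f"~{elapsed / (2 * iters) * 1e6:.2f} microseconds one-way for 1 KiB messages")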
If you look at MPI, officially it was designed to run on InfiniBand, but now, with the great work Broadcom is doing, we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML. Where do you think we're going to be next year, or 10 years from now? >> You want to go first, or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So what I see with Ethernet, starting on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual, humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet: the ecosystem, the trajectory, the roadmap we've had. I mean, you don't see that in any other networking technology. >> David: More who? (all laughing) >> So I see that trajectory continuing as far as the switches doubling in bandwidth. I think the protocols are evolving — especially, again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on RDMA for the supercomputing, the AI/ML workloads. But we do see that as you have a mix of applications running on these end nodes — maybe they're interfacing to the CPUs for some processing — you might use a different mix of protocols. So I'd say it's going to be a doubling of bandwidth over time and an evolution of the protocols. I mean, I expect that RoCE is probably going to evolve over time depending on the AI/ML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like, one thing we've been focusing on is co-packaged optics. So right now, this chip — all the balls in the back here, there's electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now, all the SerDes, all the signals are coming out electrically, but we've actually shown — we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually an even smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see there's the bandwidth, there's radix increasing, protocols, different physical connectivity. So I think there's a lot of things throughout, and the protocol stack's also evolving. So a lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are. 
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. So basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet, I think you've highlighted some of the benefits of specifically running Ethernet moving forward as HPC which sort of just trails slightly behind super computing as we define it, becomes more pervasive AI/ML. What are some of the other things that maybe people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. I think, and one of the biggest things that Ethernet has again, is that the data centers, the networks within enterprises, within clouds right now are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is the drop in clusters that are connected with the same networking technology. So I think one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, you train your assist admins on two different network technologies. You need to have all the debug technology, all the interconnect for that. So here, the easiest thing is you can use Ethernet, it's going to give you the same performance and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than in InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing is bandwidth, right? So when you look at, train a model, okay? So when you go and train a model in AI, you need to have a lot of data in order to train that model, right? So what you do is essentially, you build a model, you choose whatever neural network you want to utilize. But if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes because you have to move that data set from the storage to the CPU. And essentially, if you're going to do it maybe on CPU only, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal, the bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's a benefit of speed, you want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that. 
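Armando's "bigger pipe, faster training" point is easy to put numbers on: the time just to stage a training dataset scales inversely with link speed. A back-of-the-envelope sketch (it ignores protocol overhead, storage throughput limits, and any overlap of transfer with compute):

    def transfer_seconds(dataset_tb: float, link_gbps: float) -> float:
        bits = dataset_tb * 8e12          # terabytes -> bits
        return bits / (link_gbps * 1e9)   # seconds at the given link rate

    for gbps in (100, 200, 400):
        print(f"{gbps}G link: {transfer_seconds(10, gbps):,.0f} s to move a 10 TB dataset")

At 100G that 10 TB dataset takes roughly 800 seconds to move; at 400G it is closer to 200 seconds, which is the "faster to insight" argument in concrete terms.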
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas — steaks — there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting out here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. So we have a Trident product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is kind of like the bigger and badder missile, so. >> Savannah: Love this. Yeah, I mean-- >> So you let your engineers name it? >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So it's not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet and HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)

Published Date : Nov 16 2022





Ameya Talwalker & Subbu Iyer, Cequence Security | AWS Startup Showcase S2 E4 | Cybersecurity


 

>>Hello, and welcome to theCUBE's presentation of the AWS Startup Showcase. This is season two, episode four, the ongoing series covering exciting startups from the AWS ecosystem, here to talk about cybersecurity. I'm your host, John Furrier. And today we're excited to be joined by Ameya Talwalker, CEO of Cequence Security, and Subbu Iyer, vice president of product management at Cequence Security. Gentlemen, thanks for joining us today on this showcase. >>Thank you, John. >>So the title of this session is continuous API protection lifecycle to discover, detect, and defend. APIs are part of it — they're hardened, everyone's using them, but they're a target for malicious behavior. This is the focus of this segment, and you guys are on the leading edge of it. What are the biggest challenges for organizations right now in assessing their security risks? Because you're seeing APIs all over the place in the news. Just this week, Twitter had a whistleblower come out from the security group, talking about their security plans and misleading the FTC on the bots and some of the malicious behavior inside the API interface of Twitter. This is really mainstream — the Washington Post is reporting on it, the New York Times, all the global outlets are talking about this story. This is the risk. I mean, this is what you guys do — protect against this. >>Yeah, this is absolutely top of mind for a lot of security folks today. So obviously, in the media, the type of attack that is being discussed with this whistleblower coming out is called reputation bombing. This is not new; this has been going on for at least eight to 10 years, where the bad actors are using bots or automation — and ultimately using APIs — on these large social media platforms, whether it's Facebook, Twitter, or some other social media platform, and messing with the reputation system of those large platforms. What I mean by that is they will do fake likes, fake commenting, fake retweeting in the case of Twitter. And what that means is that things that should not be very popular all of a sudden become popular. That way they're able to influence things like elections, shopping habits, personnel. We work with similar-profile companies and we see this all the time. We mostly work with some of the secondary platforms — dating, and other sorts of social media platforms around music sharing and video sharing — and we see this all the time. Bad actors are using bots, but ultimately it's an API problem. It's not just a bot problem. And that's what we've been trying to preach to the world: your bot problem is a subset of the API security challenges that you deal with as an organization. >>You know, Ameya, we talked about this in the past on a previous conversation, but this really is front and center, mainstream for the whole world to see, around the challenges all companies face. Every CSO, every CIO, every board member — organizations out there are looking at a security posture that spans not just information technology, but physical and now social engineering. You have all kinds of new payloads of malicious behavior being compromised through things like APIs. This is not just about the CSO, the chief information security officer; these are chief security officer issues. What's your reaction? >>Very much so. I think this is a security problem, but it's also a reputation problem. In some cases, it's a data governance problem. 
We work with several companies which have very restrictive data governance, data regulations, or data residency regulations, and they have to conform to those regulations. They have to look at that. It's not just a CSO problem anymore. In the case of the news of the day, this is a platform problem. This goes all the way to the then-CTO of Twitter, and now the CEO of Twitter, who was in charge of dealing with these problems. Just to give you an example, we work with a similar sort of social media platform that allows OAuth-based login to their platform using tokens. You can sign in with Facebook, sign in with Twitter, sign in with Google. These are API keys that are generated and trusted by these social media platforms. When we saw that Facebook leaked about 50 million of these login credentials or API keys, this was about three, four years ago, I wrote a blog about it. We saw a huge spike in those API keys being used to log in to other social media platforms. So although one social platform might be taking care of its, you know, API or bot problem, if something gets breached somewhere else, it has a cascading impact on a variety of platforms. >>You know, that's a really interesting dynamic. And if you think about just the token piece that you mentioned, that's kind of under the covers, that's a technology challenge, but you also get into the business logic. So let's go back and unpack that. Okay, they discontinue the tokens, now they're being reused here. In the case of Twitter, I was talking to an executive here in Silicon Valley and they said, yeah, it's a cautionary tale, for sure. Although Twitter's a unique situation, abstract out the business value and say, hey, they had an M&A deal on the table, and so if someone wants to unwind that deal, all I've got to say is, hey, there's a bot problem. And now you have essentially new kinds of risk in the business that have nothing to do with the technology itself. Okay, they got a security breach, but here with Twitter you have an M&A deal, an acquisition, that's being contested because of the APIs. So if you're in business, you've got to think to yourself, what am I risking with my APIs? So every organization should be assessing their security risks tied to their APIs. This is a huge awakening for them. Where should they start? And that's the core question. Okay, you got my attention, risks with the API. What do I do? >>So, as I talked about in my previous interview, the start is basically knowing what to protect. In most cases, when you see these breaches that are hitting the wire, and every now and then there is a major one, you'll find the APIs that are targeted are not just poorly protected, they're absolutely not protected at all, which means the security team, or any sort of team that is responsible for protecting these APIs, is just completely unaware of those APIs being there in the first place. And this is where we talk about the shadow IT or shadow API problem. Large enterprises have teams that are geo-distributed, and this problem has escalated even more after the pandemic, because now you have teams that are completely distributed. They do M&A, so they acquire new companies and have no visibility into their API or security practices. And so there are a lot of driving factors why these APIs are just not protected and just unknown, even more so to the security team.
So the first step has to be to discover your API attack surface, and then prioritize which APIs you want to target in terms of runtime protection. >>Yeah, I want to dig into that API attack surface management and runtime monitoring capability in a second, but I want to get you in here too, because we're talking about APIs, we're talking about attacks. What does an API attack look like? >>Yeah, that's a very good question, John. There are really two different forms of attacks on APIs. One type of attack exploits APIs that have known vulnerabilities or some form of vulnerability. For instance, APIs that may use a weak form of authentication, or are really built with no authentication at all, or have some sort of vulnerability that makes them very good targets for an attacker. And the second form of attack is a more subtle one. It's called business logic abuse. It's utilizing APIs in a completely legitimate manner, but exploiting those APIs to exfiltrate information or key sensitive information in a way that was probably not thought through by the developers or the designers of those APIs. And really, when we do API protection, we need to be able to handle both of those scenarios: protect against abuse of APIs, such as APIs with broken authentication or broken object level authorization problems, as well as protecting APIs from business logic abuse. And that's really how we, you know, differentiate against other vendors in this market. >>So what are those key differentiated ways to identify the malicious intents with APIs? Can you just summarize that real quick, the three ways? >>Sure, yeah, absolutely. There are three key ways that we differentiate against our competition. One is in the ability to actually detect such traffic. We have built out a very sophisticated threat intelligence network over the entire lifetime of the company, where we have very well curated information about malicious infrastructures and malicious operators around the world, including not just IP address ranges, but also which infrastructures they operate on and so forth. That actually helps a lot in many environments, especially B2C environments; that alone accounts for a lot of efficacy for us in detecting and weeding out bad traffic. The second aspect is in analyzing the requests that are coming in, the API traffic that is coming in, and from the request itself being able to tell if there is credential abuse or credential stuffing going on, or known patterns that the traffic is exhibiting that look like it is clearly trying to attack the API. >>And the third one is really more sophisticated. As you go farther and farther it gets more sophisticated, where Cequence actually has a lot of machine learning models built in, which profile the traffic that is coming in and separate, or learn, the legitimate traffic from the anomalous or suspicious traffic. So as the API requests are coming in, it automatically can tell that this traffic does not look like legitimate traffic, does not look like the traffic this API typically gets, and automatically uses that to figure out, okay, where is this traffic coming from, and automatically takes action to prevent that attack. >>You know, it's interesting, APIs have been part of the goodness of cloud and cloud scale.
And it reminds me of the old Andy Grove quote, one of the founders of Intel, you know, let chaos reign, then rein it in. It's APIs, you know: a lot of people have been creating them, and you've got a lot of different stakeholders involved in creating them. And so now you're securing them and now managing them. So a lot of creation, now you're starting to secure them, and now you've got to manage them. This is all now a big focus. As you pointed out, what are some of the dynamics that customers have to deal with on the product side and the organization side? Let chaos reign, and then rein in the chaos, as the saying goes. What do companies do? >>Yeah. Typically companies start off with, like Ameya talked about earlier, discovery. That is really the key thing to start with, figuring out what your API attack surface is and really getting your arms around that problem. And typically we are finding customers start that off from the security organization, the CSO organization, to really go after that problem. And in some cases, in some customers, we even find dedicated centers of excellence that are created for API security, which go after that problem, to be able to get their arms around the whole API attack surface and the API protection problem statement. So that's where usually that problem starts to get addressed. >>I mean, organizations and your customers have to stop the attacks. A lot of different techniques, you know, runtime, you mentioned that earlier, the surface area monitoring. What's the choice? Where is everybody? Is everyone in the boiling water, like the frog in boiling water, or do they know it's happening? Like, what do they do? What's their opportunity to get in position? >>Yeah. So I think let's take a step back a little bit, right? What has happened is, if you draw the cloud security market, if you will, right, which is the journey to the cloud, the security of these applications or APIs at a container level, in terms of vulnerabilities and other things, that market grew with the journey to the cloud, pretty much in lockstep. What has happened on the API side is the API security space has kind of lagged behind the growth and explosion in the API space. So what that means is APIs are getting published way faster than the security teams are able to control and secure them. APIs are getting published in environments that security is completely unaware of. We talked in the past about the perimeter; the perimeter, as we know it, doesn't exist anymore. It used to be the case that you hit a CDN, you terminate your SSL, you stop your layer three and four DDoS. >>And then you go into the application and do the business logic. That perimeter is just gone, because the application now could be living in a multi-cloud environment, it could be living in an on-prem environment that is still public-internet facing. And so for security teams that are used to protecting apps using a perimeter defense, that has changed, it's gone. You need to figure out where your perimeter is. And therefore we recommend an approach which is: have a uniform view across all your APIs, wherever they may be distributed, and have a single point of control across those with a solution like Cequence. And there are others also in this space. That gives you that uniform view, which is first giving you that, you know, outside-in view of what APIs to protect.
And then it lets you sort of take the journey of securing the API life cycle. >>So I would say that every company now, hear me out on this, indulge me for a second, every company in the world will be non-perimeter-based, except for maybe 5% because of maybe a unique reason, proprietary locked-down information, whatever. But for most companies, everyone will be in the cloud or some cloud-native, non-perimeter-based security posture. So the question is, how does your platform fit into that trajectory? And specifically, why are you guys in the position, in your mind, to help customers solve this API problem? Because again, APIs have been the greatest thing about the cloud, right? Yeah. So the goodness is there because of APIs. Now you've got to rein it in, rein in the chaos. What about your platform? Share what it is, why it wins, why customers should care about this. >>Absolutely. So if you think about it, you're right, the perimeter doesn't exist. People have APIs deployed in multiple environments, multi-cloud, hybrid, you name it. Cequence is uniquely positioned in a way that we can work with your environment, no matter what that environment is. We're the only player in this space that can protect your APIs purely as a SaaS solution or purely as an on-prem deployment. And that could be on a SaaS platform; it doesn't need to be rack-and-stack hardware, but we also support that, and we could be a hybrid deployment. We have some deployments which are on your premises and the rest of the solution is in our SaaS. If you think about it, customers have secured their APIs with Cequence within 15 minutes, you know, going live from zero to live and getting that protection instantaneously. We have customers that are processing a billion API calls per day, across a variety of different cloud environments and sort of six different brands. And so that scale, that flexibility of where we can plug into your infrastructure or be completely off of your infrastructure, is something unique to Cequence that we offer that nobody else is offering today. >>Okay. So I'll be a naysayer. Yeah, look, we have perfectly coded APIs. We are the best in the business. We're locked down. Our APIs are as tight as a drum. Why do I need you? >>So that goes back to Subbu's answer, of course. >>Everyone says that. That's great, but that's my argument. >>There are two types of API attacks. One is a tactical problem, which is exploiting a vulnerability in an API, right? So what you're saying is: my APIs are secure, they don't have any vulnerabilities, I've taken care of all the vulnerabilities. The second type of attack that targets APIs is the business logic use. The stuff in the news this week, which is the whistleblower problem: if you think about it, the APIs that Twitter is publishing for users are perfectly secure. They are taking care of all the vulnerabilities and patching them when they find new ones. But it's the business logic of, you know, retweeting, liking or commenting that the bots are targeting, which they have no defense against, right? And the same goes for the other social networks too. Yeah. So there are many examples. Uber wrote a program to impersonate users in different geo-locations to find Lyft's pricing, driver information and passenger information: a completely legitimate use of APIs for an illegitimate purpose, using bots. And you don't even need bots, by the way; don't make this about bots versus not. Yeah.
You can use APIs for purposes they're not designed for, sort of exploiting their business logic, either using a human, a human farm, interacting with those APIs, or a bot farm targeting those APIs, I think. But that's the problem: even when you've secured all your APIs, you still have to worry about these other challenges. >>I think that's the big one. I think the business logic one, certainly the Twitter case highlights that, and the Uber example is a good one. That is basically almost the backlash of having a simple API which people design to, right? Yeah. You know, as you point out, Twitter has a very simple API, hardened, very strong security, but they're using it to maliciously manipulate what's inside. So in a way that perimeter's dead too, right? So how do you stop that business logic abuse? What's the solution, what does the customer do about that? Because their goal is to create simple, scalable APIs. >>Yeah. I'll give you a little bit, and then I think Subbu should maybe go into a little bit of the depth of the problem, but I think the answer lies in what Subbu spoke about earlier, which is our ML and AI, which is good at profiling and splitting the API users: are these legitimate users, humans, versus bots? That's the first split we do. The second split we do is, even when these users are classified as bots, we will say there are some good bots that are necessary for the business, and bad bots. So we are able to split this across three types of users: legitimate humans, good bots and bad bots. And just to give you an example of good bots: in the financial world, there are aggregators that are scraping your data and aggregating it for end users to consume, right? These and other types of financial aggregators, FinTech companies like MX. These are good bots and you want to allow them to, you know, use your APIs, whereas you want to stop the bad bots from using your APIs. Subbu, if you want to add to that. >>So good bots versus bad bots, that's the focus. Go ahead, weigh in, weigh in with your thoughts on this. >>It really breaks down into three key areas that we talk about here at Cequence, right? One is you start by discovering all your APIs. How many APIs do I have in my environment? That'll immediately highlight and say, hey, you have, you know, 10,000 APIs. And that usually is an eye-opener to many customers, where they go, wow, I thought we had a tenth of that number. That usually is an eye-opener for them, to at least know where they're at. The second thing is to tell them detection information. So discover, detect, and defend: detect will tell them, hey, your APIs are getting traffic from such-and-such IP addresses, such-and-such infrastructure, such-and-such countries, and so on. That usually is another eye-opener for them. They then get to see where their API traffic is coming from. Let's say, if you're running a pizza delivery service out of California and your traffic is coming from Eastern Europe, you go, wait a minute, I don't deliver pizzas in Eastern Europe. Why am I getting traffic from that part of the world? So that sort of traffic immediately comes up, and it will tell you that it is hitting your unauthenticated API, it is hitting your API that is vulnerable to a broken object level authorization vulnerability, and so on. >>Yeah, I think, and >>Then comes the defend aspect. Yeah.
The defend aspect is where you can take action and say, I want to block certain types of traffic, or I want to rate limit certain types of traffic if you're seeing spikes there, or you could maybe insert a header so that it passes on to the end application, and the application team can use that bit to essentially take a conscious response. And so the platform is very flexible in allowing them to take an action that suits their needs. >>Yeah. And I think this is the big trend. This is why I like what you guys are doing. One, APIs were built for the goodness of cloud. They're now the plumbing, and you know, anytime you see plumbing involved, connection points, you know, that's pretty important. People are building it out, and it has made the cloud what it is. Now you've got a security challenge. You've got to add more intelligence, more smarts to it. This is where I think platform versus tools matters. Can you guys just quickly share your thoughts on that? Because a lot of your customers and future customers have dealt with the sprawl of all these different tools, right? I've got a tool for this, I've got a tool for that. People are gravitating towards platforms, but how many platforms can a customer have? So again, this brings up the point around how you guys are engaging with customers. Can you share your thoughts on tooling versus platforms? Your customers are constantly inundated with the same tsunami; it isn't a new thing. How should they look at this? >>Yeah, I mean, we don't want to add to that alert fatigue problem that affects much of the cybersecurity industry by generating a whole bunch of alerts and so on. So what we do is we actually integrate very well with SIEM or SOAR systems, and allow customers to take the information that we are detecting or mitigating and feed it into enterprise systems like a Splunk or a Datadog, where they may have sophisticated processes built in to monitor, you know, spikes in anomalous traffic or actions that are taken by Cequence. And that can be their dashboard where a whole bunch of alerting and reporting actually happens. So we play in the security ecosystem very well, by integrating with other products, and we integrate very tightly with them right out of the box. >>Okay. Ameya, this is the wrap-up now for the showcase. Really appreciate you guys sharing your awesome technology and very relevant product for your customers, and where we are right now in this, we call it Supercloud, or now multi-cloud or hybrid world of cloud. Share a little bit about the company, how people can get involved in your solution, how they can consume it, and things they should know about Cequence Security. >>Yeah, we've been on this journey, an exciting journey, for about eight years. We have very large Fortune 100, Global 500 customers that use our platform on a daily basis. We have some amazing logos, both in Europe and in the US. This is basically not a shelfware product; customers not only use it, but depend on Cequence. Several retailers, we are sitting in front of them handling, you know, Black Friday, Cyber Monday, Christmas shopping, or any sort of holiday seasonality shopping, and we have handled that. The journey starts by just simply looking at your API attack surface: just do a discovery call with Cequence, figure out where your APIs are hosted, work with you to prioritize how to protect them in a particular order, and take the whole life cycle with Cequence.
This is an exciting phase, an exciting sort of stage in the company's life. We just raised a very large Series C round of funding in December from Menlo Ventures. And we are excited to see, you know, what's next in the next 12 to 18 months. It certainly is one of the top two or three items on the CSO's, you know, budget list for next year. So we are extremely busy, but we are looking forward to what the next 12 to 18 months have in store for us. >>Well, congratulations on all the success. So where does the roadmap run? You know, APIs are the plumbing, if you will, you know, the connection points. You want to kind of keep them simple, as they say: keep the pipes dumb and put the intelligence around them. You seem to see more and more intelligence coming around, not just securing it. Where does this go in your mind? Where do we go beyond, once we secure everything and manage it properly? APIs aren't going away, they're only going to get better and smarter. Where's the intelligence coming in? Share a little bit. >>Absolutely. Yeah. I mean, there's not a dull moment in the space. As digital transformation happens to most enterprise systems, many applications are getting transformed. We are seeing an absolute explosion in the volume of APIs and the types of APIs as well. So the applications that were predominantly limited to data-center sorts of deployments are now splintered across multiple different cloud environments, are completely microservices-based APIs, deep inside a Kubernetes cluster, for instance, and so on. So very exciting stuff in terms of proliferation of volume of APIs, as well as types of APIs and the nature of APIs. And we are building very sophisticated machine learning models that can analyze traffic patterns of such APIs and automatically tell legitimate behavior from anomalous or suspicious behavior, and so on. So a very exciting sort of breadth of capabilities that we are looking at. >>Okay. I'll give you the final word, since you're the CEO, for the CSOs out there, the chief information security officers and the chief security officers. What do you want to tell them? If you could give them a quick shout out, what would you say to them? >>My shout out is: just do an assessment with Cequence. I think this is a repeating thing here, but really get to know your APIs first, before you decide what and where to protect. That's the one simple thing I can mention for the CSOs. >>Ameya, thank you so much for joining me today. Really appreciate it. >>Thank you. >>Thank you. Okay. That is the end of this segment of the AWS Startup Showcase, season two, episode four. I'm John Furrier, your host, and we're here with Cequence Security. Thanks for watching.
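
To make the request-level checks discussed in this interview a little more concrete, here is a minimal sketch of two of the heuristics mentioned: many distinct usernames from one source IP looks like credential stuffing, and unauthenticated traffic from a geography the business does not serve is worth flagging. This is illustrative only, not Cequence's actual detection logic; the `ApiRequest` fields, the country allow-list, and the threshold are assumptions made for the example.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApiRequest:
    source_ip: str
    country: str             # assumed to be resolved upstream by a geo-IP lookup
    endpoint: str
    authenticated: bool
    username: Optional[str]  # present only on login-style calls

# Assumed, illustrative inputs: where this API expects its traffic to come from,
# and how many distinct usernames from one IP we tolerate before flagging it.
EXPECTED_COUNTRIES = {"US"}
STUFFING_THRESHOLD = 20

def flag_suspicious(requests: list) -> dict:
    """Return a map of source IP -> reasons that traffic looks suspicious."""
    reasons = defaultdict(list)
    usernames_per_ip = defaultdict(set)

    for req in requests:
        # Unauthenticated calls from a geography the business does not serve.
        if not req.authenticated and req.country not in EXPECTED_COUNTRIES:
            reasons[req.source_ip].append(
                f"unauthenticated call to {req.endpoint} from {req.country}")
        if req.username:
            usernames_per_ip[req.source_ip].add(req.username)

    # Many distinct usernames from one IP is the classic credential-stuffing pattern.
    for ip, names in usernames_per_ip.items():
        if len(names) >= STUFFING_THRESHOLD:
            reasons[ip].append(f"{len(names)} distinct usernames attempted")

    return dict(reasons)

if __name__ == "__main__":
    traffic = [ApiRequest("203.0.113.9", "RO", "/v1/orders", False, None)]
    traffic += [ApiRequest("198.51.100.7", "US", "/v1/login", False, f"user{i}")
                for i in range(25)]
    for ip, why in flag_suspicious(traffic).items():
        print(ip, "->", "; ".join(why))
```

A real engine would combine signals like these with threat intelligence and ML-based traffic profiling, as described in the conversation above.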

Published Date : Sep 7 2022


Steve George, Weaveworks & Steve Waterworth, Weaveworks | AWS Startup Showcase S2 E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase Open Cloud Innovations. This is season two of the ongoing series. We're covering exciting start startups in the AWS ecosystem to talk about open source community stuff. I'm your host, Dave Nicholson. And I'm delighted today to have two guests from Weaveworks. Steve George, COO of Weaveworks, and Steve Waterworth, technical marketing engineer from Weaveworks. Welcome, gentlemen, how are you? >> Very well, thanks. >> Very well, thanks very much. >> So, Steve G., what's the relationship with AWS? This is the AWS Startup Showcase. How do Weaveworks and AWS interact? >> Yeah sure. So, AWS is a investor in Weaveworks. And we, actually, collaborate really closely around EKS and some specific EKS tooling. So, in the early days of Kubernetes when AWS was working on EKS, the Elastic Kubernetes Service, we started working on the command line interface for EKS itself. And due to that partnership, we've been working closely with the EKS team for a long period of time, helping them to build the CLI and make sure that users in the community find EKS really easy to use. And so that brought us together with the AWS team, working on GitOps and thinking about how to deploy applications and clusters using this GitOps approach. And we've built that into the EKS CLI, which is an open source tool, is a project on GitHub. So, everybody can get involved with that, use it, contribute to it. We love hearing user feedback about how to help teams take advantage of the elastic nature of Kubernetes as simply and easily as possible. >> Well, it's great to have you. Before we get into the specifics around what Weaveworks is doing in this area that we're about to discuss, let's talk about this concept of GitOps. Some of us may have gotten too deep into a Netflix series, and we didn't realize that we've moved on from the world of DevOps or DevSecOps and the like. Explain where GitOps fits into this evolution. >> Yeah, sure. So, really GitOps is an instantiation, a version of DevOps. And it fits within the idea that, particularly in the Kubernetes world, we have a model in Kubernetes, which tells us exactly what we want to deploy. And so what we're talking about is using Git as a way of recording what we want to be in the runtime environment, and then telling Kubernetes from the configuration that is stored in Git exactly what we want to deploy. So, in a sense, it's very much aligned with DevOps, because we know we want to bring teams together, help them to deploy their applications, their clusters, their environments. And really with GitOps, we have a specific set of tools that we can use. And obviously what's nice about Git is it's a very developer tool, or lots and lots of developers use it, the vast majority. And so what we're trying to do is bring those operational processes into the way that developers work. So, really bringing DevOps to that generation through that specific tooling. >> So Steve G., let's continue down this thread a little bit. Why is it necessary then this sort of added wrinkle? If right now in my organization we have developers, who consider themselves to be DevOps folks, and we give them Amazon gift cards each month. And we say, "Hey, it's a world of serverless, "no code, low code lights out data centers. "Go out and deploy your code. "Everything should be fine." What's the problem with that model, and how does GitOps come in and address that? >> Right. I think there's a couple of things. 
So, for individual developers, one of the big challenges is that, when you watch development teams, like deploying applications and running them, you watch them switching between all those different tabs, and services, and systems that they're using. So, GitOps has a real advantage to developers, because they're already sat in Git, they're already using their familiar tooling. And so by bringing operations within that developer tooling, you're giving them that familiarity. So, it's one advantage for developers. And then for operations staff, one of the things that it does is it centralizes where all of this configuration is kept. And then you can use things like templating and some other things that we're going to be talking about today to make sure that you automate and go quickly, but you also do that in a way which is reliable, and secure, and stable. So, it's really helping to bring that run fast, but don't break things kind of ethos to how we can deploy and run applications in the cloud. >> So, Steve W., let's start talking about where Weaveworks comes into the picture, and what's your perspective. >> So, yeah, Weaveworks has an engine, a set of software, that enables this to happen. So, think of it as a constant reconciliation engine. So, you've got your declared state, your desired state is declared in Git. So, this is where all your YAML for all your Kubernetes hangs out. And then you have an agent that's running inside Kubernetes, that's the Weaveworks GitOps agent. And it's constantly comparing the desired state in Git with the actual state, which is what's running in Kubernetes. So, then as a developer, you want to make a change, or an operator, you want to make a change. You push a change into Git. The reconciliation loop runs and says, "All right, what we've got in Git does not match "what we've got in Kubernetes. "Therefore, I will create story resource, whatever." But it also works the other way. So, if someone does directly access Kubernetes and make a change, then the next time that reconciliation loop runs, it's automatically reverted back to that single source of truth in Git. So, your Kubernetes cluster, you don't get any configuration drift. It's always configured as you desire it to be configured. And as Steve George has already said, from a developer or engineer point of view, it's easy to use. They're just using Git just as they always have done and continue to do. There's nothing new to learn. No change to working practices. I just push code into Git, magic happens. >> So, Steve W., little deeper dive on that. When we hear Ops, a lot of us start thinking about, specifically in terms of infrastructure, and especially since infrastructure when deployed and left out there, even though it's really idle, you're paying for it. So, anytime there's an Ops component to the discussion, cost and resource management come into play. You mentioned this idea of not letting things drift from a template. What are those templates based on? Are they based on... Is this primarily an infrastructure discussion, or are we talking about the code itself that is outside of the infrastructure discussion? >> It's predominantly around the infrastructure. So, what you're managing in Git, as far as Kubernetes is concerned, is always deployment files, and services, and horizontal pod autoscalers, all those Kubernetes entities. Typically, the source code for your application, be it in Java, Node.js, whatever it is you happen to be writing it in, that's, typically, in a separate repository. 
You, typically, don't combine the two. So, you've got one set of repository, basically, for building your containers, and your CLI will run off that, and ultimately push a container into a registry somewhere. Then you have a separate repo, which is your config. repo, which declares what version of the containers you're going to run, how many you're going to run, how the services are bound to those containers, et cetera. >> Yeah, that makes sense. Steve G., talk to us about this concept of trusted application delivery with GitOps, and frankly, it's what led to the sort of prior question. When you think about trusted application delivery, where is that intertwinement between what we think of as the application code versus the code that is creating the infrastructure? So, what is trusted application delivery? >> Sure, so, with GitOps, we have the ability to deploy the infrastructure components. And then we also define what the application containers are, that would go to be deployed into that environment. And so, this is a really interesting question, because some teams will associate all of the services that an application needs within an application team. And sometimes teams will deploy sort of horizontal infrastructure, which then all application teams services take advantage of. Either way, you can define that within your configuration, within your GitOps configuration. Now, when you start deploying speed, particularly when you have multiple different teams doing these sorts of deployments, one of the questions that starts to come up will be from the security team, or someone who's thinking about, well, what happens if we make a deployment, which is accidentally incorrect, or if there is a security issue in one of those dependencies, and we need to get a new version deployed as quickly as possible? And so, in the GitOps pipeline, one of the things that we can do is to put in various checkpoints to check that the policy is being followed correctly. So, are we deploying the right number of applications, the right configuration of an application? Does that application follow certain standards that the enterprise has set down? And that's what we talk about when we talk about trusted policy and trusted delivery. Because really what we're thinking about here is enabling the development teams to go as quickly as possible with their new deployments, but protecting them with automated guard rails. So, making sure that they can go fast, but they are not going to do anything which destroys the reliability of the application platform. >> Yeah, you've mentioned reliability and kind of alluded to scalability in the application environment. What about looking at this from the security perspective? There've been some recently, pretty well publicized breaches. Not a lot of senior executives in enterprises understand that a very high percentage of code that their businesses are running on is coming out of the open source community, where developers and maintainers are, to a certain degree, what they would consider to be volunteers. That can be a scary thing. So, talk about why an enterprise struggles today with security, policy, and governance. And I toss this out to Steve W. Or Steve George. Answer appropriately. >> I'll try that in a high level, and Steve W. can give more of the technical detail. I mean, I'll say that when I talk to enterprise customers, there's two areas of concern. 
One area of concern is that, we're in an environment with DevOps where we started this conversation of trying to help teams to go as quickly as possible. But there's many instances where teams accidentally do things, but, nonetheless, that is a security issue. They deploy something manually into an environment, they forget about it, and that's something which is wrong. So, helping with this kind of policy as code pipeline, ensuring that everything goes through a set of standards could really help teams. And that's why we call it developer guard rails, because this is about helping the development team by providing automation around the outside, that helps them to go faster and relieves them from that mental concern of have they made any mistakes or errors. So, that's one form. And then the other form is the form, where you are going, David, which is really around security dependencies within software, a whole supply chain of concern. And what we can do there, by, again, having a set of standard scanners and policy checking, which ensures that everything is checked before it goes into the environment. That really helps to make sure that there are no security issues in the runtime deployment. Steve W., anything that I missed there? >> Yeah, well, I'll just say, I'll just go a little deeper on the technology bit. So, essentially, we have a library of policies, which get you started. Of course, you can modify those policies, write your own. The library is there just to get you going. So, as a change is made, typically, via, say, a GitHub action, the policy engine then kicks in and checks all those deployment files, all those YAML for Kubernetes, and looks for things that then are outside of policy. And if that's the case, then the action will fail, and that'll show up on the pull request. So, things like, are your containers coming from trusted sources? You're not just pulling in some random container from a public registry. You're actually using a trusted registry. Things like, are containers running as route, or are they running in privileged mode, which, again, it could be a security? But it's not just about security, it can also be about coding standards. Are the containers correctly annotated? Is the deployment correctly annotated? Does it have the annotation fields that we require for our coding standards? And it can also be about reliability. Does the deployment script have the health checks defined? Does it have a suitable replica account? So, a rolling update. We'll actually do a rolling update. You can't do a rolling update with only one replica. So, you can have all these sorts of checks and guards in there. And then finally, there's an admission controller that runs inside Kubernetes. So, if someone does try and squeeze through, and do something a little naughty, and go directly to the cluster, it's not going to happen, 'cause that admission controller is going to say, "Hey, no, that's a policy violation. "I'm not letting that in." So, it really just stops. It stops developers making mistakes. I know, I know, I've done development, and I've deployed things into Kubernetes, and haven't got the conflict quite right, and then it falls flat on its face. And you're sitting there scratching your head. And with the policy checks, then that wouldn't happen. 'Cause you would try and put something in that has a slightly iffy configuration, and it would spit it straight back out at you. >> So, obviously you have some sort of policy engine that you're you're relying on. 
But what is the user experience like? I mean, is this a screen that is reminiscent of the matrix with non-readable characters streaming down that only another machine can understand? What does this look like to the operator? >> Yeah, sure, so, we have a console, a web console, where developers and operators can use a set of predefined policies. And so that's the starting point. And we have a set of recommendations there and policies that you can just attach to your deployments. So, set of recommendations about different AWS resources, deployment types, EKS deployment types, different sets of standards that your enterprise might be following along with. So, that's one way of doing it. And then you can take those policies and start customizing them to your needs. And by using GitOps, what we're aiming for here is to bring both the application configuration, the environment configuration. We talked about this earlier, all of this being within Git. We're adding these policies within Git as well. So, for advanced users, they'll have everything that they need together in a single unit of change, your application, your definitions of how you want to run this application service, and the policies that you want it to follow, all together in Git. And then when there is some sort of policy violation on the other end of the pipeline, people can see where this policy is being violated, how it was violated. And then for a set of those, we try and automate by showing a pull request for the user about how they can fix this policy violation. So, try and make it as simple as possible. Because in many of these sorts of violations, if you're a busy developer, there'll be minor configuration details going against the configuration, and you just want to fix those really quickly. >> So Steve W., is that what the Mega Leaks policy engine is? >> Yes, that's the Mega Leaks policy engine. So, yes, it's a SaaS-based service that holds the actual policy engine and your library of policies. So, when your GitHub action runs, it goes and essentially makes a call across with the configuration and does the check and spits out any violation errors, if there are any. >> So, folks in this community really like to try things before they deploy them. Is there an opportunity for people to get a demo of this, get their hands on it? what's the best way to do that? >> The best way to do it is have a play with it. As an engineer, I just love getting my hands dirty with these sorts of things. So, yeah, you can go to the Mega Leaks website and get a 30-day free trial. You can spin yourself up a little, test cluster, and have a play. >> So, what's coming next? We had DevOps, and then DevSecOps, and now GitOps. What's next? Are we going to go back to all infrastructure on premises all the time, back to waterfall? Back to waterfall, "Hot Tub Time Machine?" What's the prediction? >> Well, I think the thing that you set out right at the start, actually, is the prediction. The difference between infrastructure and applications is steadily going away, as we try and be more dynamic in the way that we deploy. And for us with GitOps, I think we're... When we talk about operations, there's a lots of depth to what we mean about operations. So, I think there's lots of areas to explore how to bring operations into developer tooling with GitOps. So, that's, I think, certainly where Weaveworks will be focusing. >> Well, as an old infrastructure guy myself, I see this as vindication. Because infrastructure still matters, kids. 
And we need sophisticated ways to make sure that the proper infrastructure is applied. People are shocked to learn that even serverless application environments involve servers. So, I tell my 14-year-old son this regularly, he doesn't believe it, but it is what it is. Steve W., any final thoughts on this whole move towards GitOps and, specifically, the Weaveworks secret sauce and superpower. >> Yeah. It's all about (indistinct)... It's all about going as quickly as possible, but without tripping up. Being able to run fast, but without tripping over your shoe laces, which you forgot to tie up. And that's what the automation brings. It allows you to go quickly, does lots of things for you, and yeah, we try and stop you shooting yourself in the foot as you're going. >> Well, it's been fantastic talking to both of you today. For the audience's sake, I'm in California, and we have a gentleman in France, and a gentlemen in the UK. It's just the wonders of modern technology never cease. Thanks, again, Steve Waterworth, Steve George from Weaveworks. Thanks for coming on theCUBE for the AWS Startup Showcase. And to the rest of us, keep it right here for more action on theCUBE, your leader in tech coverage. (upbeat music)
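
As a rough illustration of the reconciliation loop and policy guard rails described in this conversation, here is a minimal sketch. It is not the Weave GitOps agent or its policy engine; the manifest shape, the trusted-registry list, and the apply callback are simplified assumptions made for the example.

```python
import time
from typing import Callable, Dict, List

# Assumed allow-list of registries the organization trusts.
TRUSTED_REGISTRIES = ("registry.example.com/",)

def policy_violations(manifest: Dict) -> List[str]:
    """Guard-rail checks in the spirit of the ones described above:
    trusted registry, no root containers, enough replicas for a rolling update."""
    problems = []
    if not manifest["image"].startswith(TRUSTED_REGISTRIES):
        problems.append(f"image {manifest['image']} is not from a trusted registry")
    if manifest.get("run_as_root", False):
        problems.append("container runs as root")
    if manifest.get("replicas", 0) < 2:
        problems.append("fewer than 2 replicas; a rolling update cannot work")
    return problems

def reconcile_once(desired: Dict, actual: Dict,
                   apply: Callable[[str, Dict], None]) -> None:
    """Compare desired state (what Git says) with actual state (what the cluster
    reports) and apply whatever differs, so drift is always overwritten."""
    for name, manifest in desired.items():
        violations = policy_violations(manifest)
        if violations:
            # Admission-style guard rail: refuse to apply and surface the reasons.
            print(f"refusing {name}: {violations}")
            continue
        if actual.get(name) != manifest:
            apply(name, manifest)
            actual[name] = manifest

if __name__ == "__main__":
    cluster = {}  # stand-in for what a real agent would read back from the API server
    desired_state = {
        "checkout": {"image": "registry.example.com/shop/checkout:1.4.2",
                     "replicas": 3, "run_as_root": False},
        "debug-pod": {"image": "docker.io/random/tool:latest", "replicas": 1},
    }
    for _ in range(2):  # a real agent loops forever; two passes show the idea
        reconcile_once(desired_state, cluster, lambda name, _m: print("applying", name))
        time.sleep(0.1)
```

The point of the design, as Steve W. describes it, is that the loop runs constantly in both directions: a change pushed to Git gets applied, and a change made directly on the cluster gets reverted back to the declared state.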

Published Date : Jan 26 2022

Nick Barcet, Red Hat | KubeCon + CloudNativeCon NA 2021


 

(bright music) >> Welcome to this Kube Conversation. I'm Dave Nicholson. And today we have a very special guest from Red Hat, Nick Barcet. Nick is the Senior Director of Technology Strategy at Red Hat. Nick, welcome back to theCUBE. >> Thank you. It's always a pleasure to be visiting you here virtually. >> It's fantastic to have you here. I see new office surroundings at Red Hat. Have they taken on a kind of a nautical theme at the office there? Where are you joining us from? >> I'm joining from my boat now. I've been living on my boat for the past few years, and that's where you'll find me most of the time. >> So would you consider your boat to be on the Edge? >> It's certainly one form of Edge. You know, there are multiple forms of Edge, and a boat is one of those forms. >> Let's talk about Edge now. We're having this conversation in anticipation of KubeCon CloudNativeCon North America 2021, coming up in Los Angeles. Let's talk specifically about the Edge, where Edge computing and Kubernetes come together from a Red Hat perspective. Walk us through that, talk about some of the challenges that people are having at the Edge, why Kubernetes is something that would be considered at the Edge. Walk us through that. >> Let's start from the premise that people have been doing stuff at the Edge for ages. I mean, nobody has been waiting for Kubernetes or any other technology to start implementing some form of computing that is happening in their stores, in their factories, wherever. What's really new today, when we talk about Edge computing, is reusing the same technology we've been using to deploy inside of the data center and expanding that all the way to the Edge. And that's what, from my perspective, constitutes Edge computing, or the revolution it brings. So that means that the same GitOps, DevSecOps methodology that we were using in the data center is now extendable all the way to those devices that live in remote locations, and that we can reuse the same methodology, the same tooling, and that includes Kubernetes. And all the effort we've been doing over the past couple of years has been to make Kubernetes even more accessible for the various Edge topologies that we are encountering when discussing with our customers that have Edge projects. >> So typically when we think of a Kubernetes environment, you're talking about containers that are contained in pods, that live on physical clusters; despite all of the talk of no-code and serverless, we still live in a world where applications and microservices run on physical servers. Are there practical limitations in terms of just how small you can scale Kubernetes? How far, how close to the Edge can you get with a Kubernetes deployment? >> So in theory, there is really no limit, as the smallest devices are always bigger than Kubernetes itself. But the reality is you never use just Kubernetes; you use Kubernetes with a series of other projects that make it complete, for example, stuff that is going to be reporting telemetry, components that are going to help you automatically scale, et cetera. And the further you go toward the Edge, the fewer of these components you can afford. So you have to make trade-offs when you reduce the size of the device. Today, what Red Hat offers is really concentrated on where we can deliver a full OpenShift experience.
So the smallest environment on which we would recommend running OpenShift at the Edge is a single node with roughly 24 gigabytes of RAM, which is already a relatively big Edge device. And when you go a step lower than that, that's where we would recommend using a standard RHEL for Edge configuration or something similar, not Kubernetes anymore. >> So you said single node; let's double-click on that for a second. Is that a single physical node that is abstracted in a way to create some level of logical redundancy? When you say single node, walk us through that. We've got containers that are in pods, so what are we talking about? >> Based on your requirements, you can have different ways of addressing your compute needs at the Edge. You can have the smallest of clusters, and this would be three nodes that are delivered with the control plane and the worker nodes integrated into one. When you want to go a step further, you could use worker nodes that are controlled remotely via a central control plane that is at a central site. And when you want to go even one step further and deploy Kubernetes on a very small machine, one that remains fully functional even if disconnected, that's when you would use the thing that is not a cluster anymore, which is single-node Kubernetes, where you still have access to the full Kubernetes API, regardless of the connectivity of your site, whether it's active or not, whether you're at sea or in the air or not. And that's where we still offer some form of software high availability, because Kubernetes, even on a single node, will still detect if a container dies and restart it, and provide functionality like this, but it won't provide hardware availability since we are on a single node. >> And that makes sense. Yeah, that makes, yeah, it makes perfect sense. And I would suggest that we refer to that as a single node cluster, just because we like to mix it up with terminology in our business and sometimes confuse people with it. >> Technically, that was the choice we made, actually. We don't like to call it a cluster, because it's not a cluster. >> Exactly. No, I appreciate that. Absolutely. So let's be explicit about what the trade-offs are there. Let's say that I'm thinking of deploying something at the Edge, and I'm going to use Kubernetes to orchestrate my container environment, and pretend for a moment that space and cost aren't huge limiting factors. I could put a three node cluster in, but the idea of putting in a single node is very attractive. Where's the line drawn in terms of what you would recommend, you know, what are the trade-offs? What am I losing, going to the single node cluster? See, I just called it that. >> Well, in a nutshell, you're losing hardware high availability. Meaning if one of your servers fails, since you only have one server, you lose everything, and there is no way around that. That's the biggest trade-off. Then you also have a trade-off on the memory used by the control plane, which you won't be able to use for something else. So if I have a site with excellent connectivity, and the biggest loss of connectivity might be counted in hours, maybe a remote worker is a better solution, because this way I have a single central site that carries my control plane, and I can use all the RAM and all the CPUs on my local site to deploy my workloads, not to carry a control plane.
To give you an example of these trade-offs in the telco space: if you're deploying an antenna in a city, you have plenty of antennas covering that city, and therefore the loss of one antenna is not a big deal. So in that case, you will be tempted to use a remote worker, because you will be maximizing your use of the RAM on the site for the workload, which is, let's have people establish communication using their phones. But now, take another antenna that we are going to locate in a very remote location. There, if this antenna fails, everything fails. There's nobody able to make calls; even emergency vehicles cannot talk to each other. So in that case, it's a lot better to have an autonomous deployment, something where the control plane and the workload itself are being run in one box. And this one box in fact can be duplicated: there could be another box that is either sitting in a truck in case of emergency, or powered off but on the antenna site, so that in case of a major failure you have the possibility to restore it. So it really depends on what your set of constraints is, in terms of availability and in terms of efficiency of your RAM use, that is going to make you choose between one or the other of the deployment models. >> No, that's a great example. And so it sounds like it's not a one-size-fits-all world, obviously. Now, from the perspective of the marketplace, looking in at Red Hat, participating in this business, some think of Red Hat as the company that deployed Linux 20 years ago. Help us make that connection between Red Hat today, what you've been doing for the last 20 years, and this topic of Edge computing, 'cause some people don't automatically think of Red Hat and Edge computing. I do, I think they should, (chuckles) but help us understand that. >> Yeah, obviously a lot of people consider that Red Hat is Red Hat Linux, and that's it. Red Hat Enterprise Linux is what we've been known for since our beginnings 25 years ago, and what made our early success. But we consider ourselves more of an infrastructure company. We have been offering, for the past 20 years, the various components that you need to deploy servers, run and manage your workloads across data centers, and make sure that you can store your data and that you can automate your operations on top of this infrastructure. So we really consider ourselves much more a company that offers everything that enables you to run your servers and run your workloads on top of your servers. And that includes tools to do virtualization, that includes tools to do continuous deployment of containers. And that's where Kubernetes entered into play about 10 years ago. Well, first it was a PaaS, which then became Kubernetes and the OpenShift offering that we have today. >> Yeah. Thanks for that. So I've got a final question for you. It's a little bit off topic, but it's related; this is in the category of Nick predicts. So when does Nick predict that we will get to a point where we tip beyond the 50/50 point of cloud versus on-premises IT spending, if you accept today that we're still in the neighborhood of 75 to 80% on-premises? When will we hit the 50/50 mark? I'm not asking you for the hundred percent cloud date, but give us a date, give us a month and a year, for 50/50. >> Given the progression of cloud, if there were no Edge, we could have said two to three years from now we would be at this 50/50 mark.
But the funny thing is that at the same time, as the cloud progresses, people start realizing that they have needs that need to be solved locally. And this is why we are deploying Edge-based solutions, solutions which can reliably provide answers regardless of the connectivity to the cloud, regardless of the bandwidth. There are things that I would never want to do, like feeding the analysis of feeds from 4K cameras into my cloud environment; that won't scale, I won't have the bandwidth to do so. And therefore, maybe the answer to your question is that it's going to be asymptotic, and it's almost impossible to predict. >> So that is a much better answer than giving me an exact date and time, because (chuckles) it reveals exactly the reality that we're living in. Again, it's fit for function. It's not cloud for cloud's sake; compute resources and data resources have a place where they naturally belong, oftentimes. And oftentimes that is on the Edge, whether it's at the edge of the world in a sailboat, or out in a single server, not node, or, I keep wanting to say single node cluster, it's killing me, I dunno why, I think it's so funny, but a single node implementation of OpenShift where you can run Kubernetes on the Edge. It's a fascinating subject. Anything else that you want to share with us that we didn't get? >> I think one aspect that we never talk about enough is how you manage at the scale of the Edge. Because even though each Edge site is very small, you can have thousands, even hundreds of thousands of these single node somethings running all over the place. And I think that what you're seeing in Advanced Cluster Management for Kubernetes, and particularly the 2.4 version that we are going to be announcing this week and actually releasing in November, is a pretty good answer to that problem: how do I deploy these devices with zero touch? How do I update them, upgrade them? How do I deploy the workloads on top of that? How do I ensure I have the right tooling to deploy that at scale? And we've done the testing now of ACM with up to 2,000 clusters connected to a single ACM. And in the future we are planning on building federations of those, which really gives us the possibility to provide the tooling needed to manage at that scale. >> Excellent. Excellent. Yeah. Whenever we start talking about anything in the realm of containerization and Kubernetes, scale starts to become an issue. It's no longer a question of a human being managing 10 servers and 50 applications. We start talking about tens of thousands and hundreds of thousands of instances, where it's beyond human scale. So that's obviously something that's very, very important. Well, Nick, I want to thank you for becoming a CUBE veteran once again. Thanks for joining this CUBE Conversation. From Dave Nicholson, this has been a CUBE Conversation in anticipation of KubeCon and CloudNativeCon North America 2021. Thanks for tuning in. (bright music)
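As a companion to the cluster-management point above, here is a minimal sketch of the hub-side view of thousands of Edge clusters. The API group, resource name, and condition type below come from the Open Cluster Management project that ACM is based on and are stated here as assumptions, not as details given in the conversation.

```python
# Minimal sketch: ask an ACM / Open Cluster Management hub which managed
# clusters it knows about and whether it can still reach them. Assumes the
# ManagedCluster CRD is installed and your kubeconfig points at the hub.
from kubernetes import client, config

def list_managed_clusters() -> None:
    config.load_kube_config()  # credentials for the hub cluster
    api = client.CustomObjectsApi()
    result = api.list_cluster_custom_object(
        group="cluster.open-cluster-management.io",
        version="v1",
        plural="managedclusters",
    )
    for item in result.get("items", []):
        name = item["metadata"]["name"]
        conditions = item.get("status", {}).get("conditions", [])
        # The availability condition is the day-2 signal that matters most
        # when the fleet is made of remote, often single node, sites.
        available = any(
            c.get("type") == "ManagedClusterConditionAvailable" and c.get("status") == "True"
            for c in conditions
        )
        print(f"{name}: {'available' if available else 'unreachable or unknown'}")

if __name__ == "__main__":
    list_managed_clusters()
```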

Published Date : Oct 14 2021



Stijn Paul Fireside Chat Accessible Data | Data Citizens'21


 

>> Really excited about this year's Data Citizens, with so many of you together. I'm going to talk today about accessible data, because what good is data if you can get it into your hands and shop for it, but you can't understand it? And I'm here today with Paul; really thrilled to be here with Paul. Paul is an award-winning author on all topics data: I think 20 books, with the 21st on the way, over 300 articles, and he's been a frequent speaker. He's an expert in future trends. He's a VP of cognitive systems over at IBM, he teaches data at the business school, and he's a champion of diversity initiatives. Paul, thank you for being here; really looking forward to the session with you. >> Oh, thanks for having me. It's a privilege. >> So let's get started with our origins in data, Paul. And I'll start with a little story of my own. I trained as an engineer way back when, and one of the courses we got as engineers was about databases. So we got this thick book of SQL, and me being in it for the programming, I was like, well, who needs this stuff? And I wanted to do my part in terms of making data accessible, so essentially that was the only book that I sold on. Obviously I learned some hard lessons later on, as I did a master's in AI after that and then joined the database research lab at the university that Collibra spun off from. But hey, we all learn along the way. And Paul, I'm really curious: when did you first awaken to data, if you will? >> You know, it's really interesting, Stan, because I come from the opposite side: an undergrad in economics, with some information systems research at the higher level. And so I think I was always attuned to what data could do, but I didn't understand how to get at it and the kinds of nuances around it. So then I started this job at a database company like 27 years ago, and it started there, but I would say the awakening has never stopped, because the data game is always changing. Like, I look at these epochs that I've been through with data. It was real relational databases, thinking third normal form, and then NoSQL databases. And then I watched NoSQL be about "no, don't use SQL," then "wait a minute, not only SQL." And today, for the data citizens, it's really about "wait, no, I need SQL." So I think I'm always waking up in data, so I'll call it a continuum, if you will. But that was it: it was trying to figure out the technology behind driving the analytics that I took in school. >> Excellent. And I fully agree with you there. Every couple of years they seem to reinvent new stuff; the NoSQL models, let's see, I saw those come and go, obviously. And I think that's a challenge for most people, because in a way data is a very abstract concept until you get down in the weeds, and then it starts to become really, really messy, until you, you know, at the end manage to extract certain insights. And the next thing I want to talk about with you is that it is challenging for organizations. We're hearing a lot about data being valuable: data being the new oil, data being the new soil, the new gold; data as an asset is being used as a slogan all over. People have been investing a lot in data over multiple decades, and there are always new data technologies, but still, it seems that organizations fundamentally struggle with getting people access to data.
What do you think are some of the key challenges underlying the struggles that organizations seem to face when it comes to data? >> Yeah. Listen, Stan, I'll tell you, a lot of people I think are stuck on what I call their data acumen curves, and you know, data is like a gym membership: if you don't use it, you're not going to get any value out of it. And that's what I mean by acumen. And I like to use the analogy of mud; there are like three layers of mud that are holding a lot of organizations back. The first is just the amount of data. Now, I'm not going to give you some stat about how many times I can go to the moon and back with the data we generate, but I will give you one stat I found interesting: the average human being in their lifetime will generate a petabyte of data. How much data is that? If that was my Apple Music playlist, it would be about 2000 years of nonstop music. >> So that's some kind of playlist. And I think what's happening for the first layer of mud is, when I first started writing about data warehousing and analytics, I would be like, go find a needle in the haystack. But now it's really finding a needle in a stack of needles. So much data, so little time; that's level one of mud. I think the second thing is people are looking for some kind of magic solution, like Cinderella's glass slipper: you put it on her and she turns into a princess. That's for Disney movies, right? There's nothing magical about it. It is about skill and acumen and upskilling. And I think if you're familiar with Hadoop, you recall the Hadoop craze; that's exactly what happened, right? Like, people brought all their data together and everyone was going to be able to access it and get insights. >> And IT teams said it was pretty successful, but every line of business I ever talked to said it was a complete failure. And the third layer is governance. That's actually where you're going to find some magic. And the problem in governance is that every client I talk to is all about least effort to comply. They don't want to violate GDPR or the California Consumer Privacy Act or whatever governs where they do business. But when you treat governance not as least effort to comply and trying not to get fined, but as an accelerant to your analytics, that gets you out of that third layer of mud. So you start to invoke what I call the wisdom of the crowd. Now imagine taking all these different people with intelligence about the business and giving them access and acumen to hypothesize on thousands of ideas that turn into hundreds we test, and maybe dozens that go to production. So those are three layers that I think every organization is facing. >> Well, I definitely follow you on all of that, especially the part where people see governance as "oh, I have to comply with this," which always hurts me a little bit, honestly, because all good governance is about making things easier while also making sure that they're less risky. But I do want to touch on that Hadoop thing a little bit, because for me, in my decade or more over at Collibra, we saw it come as well as go, let's say around 2015 to 2020 or so. And it's still around; obviously, once you put your data in something, it's very hard to make it go away. But I've always felt that Hadoop, you know, it seemed like, oh, now we have a bunch of clusters and a bunch of network engineers. So what? >> Yeah. You know, Stan, I fell for it; I wrote the book Hadoop for Dummies, and it had such great promise.
I think the problem is there wasn't enough education on how to extract value out of it. And that's why I say IT thinks it's great, they liked the clusters and engineers that you just mentioned, but it didn't drive line-of-business value. >> Got it. So do you think that the whole paradigm with the clouds that we're now on is going to fundamentally change that, or is it just an architectural change? >> Yeah. You know, it's a great comment. What you're seeing today is the movement of the data lake, maybe away from repositories like Hadoop into cloud object stores, right? And then you look at SQL or other interfaces over that, which allows me to really scale compute and storage separately. But that's all the technical stuff. At the end of the day, whether you're on premises, hybrid cloud, in the cloud, or software as a service, if you don't have the acumen for your entire organization to know how to work with data and get value from data, this whole data citizen thing, you're not going to get the kind of value out of your investment, right? And I think that's the key thing that business leaders need to understand: it's not about analytics for science project's sake. It's about analytics to drive the business. >> Absolutely. We fully agree with that. And I want to touch on that point. You mentioned the wisdom of the crowds, a concept that I love, right, and your organization is a big crowd of what we call data citizens. Now, if I remember correctly from the book on the wisdom of the crowds, there are two points that you really have to take into account. One is, for the wisdom of the crowds to work, you have to have all the individuals enabled, for them to have access to the right information and to be able to share that information safely, kept from the bias of others; otherwise you're just biasing the outcome. And second, you need to be able to somehow aggregate that wisdom up to a certain decision. So, as Felix mentioned earlier, we are all united by data, and it's a data citizen topic.
So you need to think about what are the upskilling programs of where we can reach across to the technical and the non-technical, you know, lots and lots of businesses rely on Microsoft Excel. >>You have data citizens right there, but then there's other folks who are just flat out curious about stuff. And so now you have to open this up and invest in those people. Like, why are you paying people to think about your business without giving the data? It would be like hiring Tom Brady as a quarterback and telling him not to throw a pass. Right. And I see it all the time. So we kind of limit what we define as data citizen. And that's why I love what you said. You don't need the word data in your title and more so if you don't build the acumen, you don't know how to bring the data together, maybe how to wrangle it, but where did it come from? And where can you fixings? One company I worked with had 17 definitions for a sales individual, 17 definitions, and the talent team and HR couldn't drive to a single definition because they didn't have the data accurate. So when you start thinking of the data citizen, concept it about enabling everybody to shop for data much. Like I would look for a USB cable on Amazon, but also to attach to a business glossary for definition. So we have a common version of what a word means, the lineage of the data who owns it, who did it come from? What did it do? So bring that all together. And, uh, I will tell you companies that invest in the data, citizen concept, outperform companies that don't >>For all of that, I definitely fully agree that there's enough research out there that shows that the ones who are data-driven are capturing the most markets, but also capturing the most growth. So they're capturing the market even faster. And I love what you said, Paul, about, um, uh, the brains, right? You've already paid for the brains you've already invested in. So you may as well leverage them. Um, you may as well recognize and, and enable the data citizens, uh, to get access to the assets that they need to really do their job properly. That's what I want to touch on just a little bit, if, if you're capable, because for me, okay. Getting access to data is one thing, right? And I think you already touched on a few items there, but I'm shopping for data. Now I have it. I have a cul results set in my hands. Let's say, but I'm unable to read and write data. Right? I don't know how to analyze it. I don't know maybe about bias. Uh, maybe I, I, I don't know how to best visualize it. And maybe if I do, maybe I don't know how to craft a compelling persuasion narrative around it to change my bosses decisions. So from your viewpoint, do you think that it's wise for companies to continuously invest in data literacy to continuously upgrade that data citizens? If you will. >>Yeah, absolutely. Forest. I'm going to tell you right now, data literacy years are like dog years stage. So fast, new data types, new sources of data, new ways to get data like API APIs and microservices. But let me take it away from the technical concept for a bit. I want to talk to you about the movie. A star is born. I'm sure most of you have seen it or heard it Bradley Cooper, lady Gaga. So everyone knows the movie. What most people probably don't know is when lady Gaga teamed up with Bradley Cooper to do this movie, she demanded that he sing everything like nothing could be auto-tuned everything line. This is one of the leading actors of Hollywood. 
They filmed this remake in 42 days and Bradley Cooper spent 18 months on singing lessons. 18 months on a guitar lessons had a voice coach and it's so much and so forth. >>And so I think here's the point. If one of the best actors in the world has to invest three and a half years for 42 days to hit a movie out of the park. Why do we think we don't need a continuous investment in data literacy? Even once you've done your initial training, if you will, over the data, citizen, things are going to change. I don't, you don't. If I, you Stan, if you go to the gym and workout every day for three months, you'll never have to work out for the rest of your life. You would tell me I was ridiculous. So your data literacy is no different. And I will tell you, I have managed thousands of individuals, some of the most technical people around distinguished engineers, fellows, and data literacy comes from curiosity and a culture of never ending learning. That is the number one thing to success. >>And that curiosity, I hire people who are curious, I'll give you one more story. It's about Mozart. And this 21 year old comes to Mozart and he says, Mozart, can you teach me how to compose a symphony? And Mozart looks at this person that says, no, no, you're too young, too young. You compose your fourth symphony when you were 12 and Mozart looks at him and says, yeah, but I didn't go around asking people how to compose a symphony. Right? And so the notion of that story is curiosity. And those people who show up in always want to learn, they're your home run individuals. And they will bring data literacy across the organization. >>I love it. And I'm not going to try and be Mozart, but you know, three and a half years, I think you said two times, 18 months, uh, maybe there's hope for me yet in a singing, you'll be a good singer. Um, Duchy on the, on the, some of the sports references you've made, uh, Paul McGuire, we first connected, uh, I'm not gonna like disclose where you're from, but, uh, I saw he did come up and I know it all sorts of sports that drive to measure everything they can right on the field of the field. So let's imagine that you've done the best analysis, right? You're the most advanced data scientists schooled in the classics, as well as the modernist methods, the best tools you've made a beautiful analysis, beautiful dashboards. And now your coach just wants to put their favorite player on the game, despite what you're building to them. How do you deal with that kind of coaches? >>Yeah. Listen, this is a great question. I think for your data analytics strategy, but also for anyone listening and watching, who wants to just figure out how to drive a career forward? I would give the same advice. So the story you're talking about, indeed hockey, you can figure out where I'm from, but it's around the Ottawa senators, general manager. And he made a quote in an interview and he said, sometimes I want to punch my analytics, people in the head. Now I'm going to tell you, that's not a good culture for analytics. And he goes on to say, they tell me not to play this one player. This one player is very tough. You know, throws four or five hits a game. And he goes, I'd love my analytics people to get hit by bore a wacky and tell me how it feels. That's the player. >>Sure. I'm sure he hits hard, but here's the deal. When he's on the ice, the opposing team gets more shots on goal than the senators do on the opposing team. They score more goals, they lose. 
And so I think whenever you're trying to convince a movement forward, be it management, be it a project you're trying to fund. I always try to teach something that someone didn't previously know before and make them think, well, I never thought of it that way before. And I think the great opportunity right now, if you're trying to get moving in a data analytics strategy is around this post COVID era. You know, we've seen post COVID now really accelerate, or at least post COVID in certain parts of the world, but accelerate the appetite for digital transformation by about half a decade. Okay. And getting the data within your systems, as you digitize will give you all kinds of types of projects to make people think differently than the way they thought before. >>About data. I call this data exhaust. I'll give you a great example, Uber. I think we're all familiar with Uber. If we all remember back in the days when Uber would offer you search pricing. Okay? So basically you put Uber on your phone, they know everything about you, right? Who are your friends, where you going, uh, even how much batteries on your phone? Well, in a data science paper, I read a long time ago. They recognize that there was a 70% chance that you would accept a surge price. If you had less than 10% of your battery. So 10% of battery on your phone is an example of data exhaust all the lawns that you generate on your digital front end properties. Those are logs. You can take those together and maybe show executive management with data. We can understand why people abandoned their cart at the shipping phase, or what is the amount of shipping, which they abandoned it. When is the signal when our systems are about to go to go down. So, uh, I think that's a tremendous way. And if you look back to the sports, I mean the Atlanta Falcons NFL team, and they monitor their athletes, sleep performance, the Toronto Raptors basketball, they're running AI analytics on people's personalities and everything they tweet and every interview to see if the personality fits. So in sports, I think athletes are the most important commodity, if you will, or asset a yet all these teams are investing in analytics. So I think that's pretty telling, >>Okay, Paul, it looks like we're almost out of time. So in 30 seconds or less, what would you recommend to the data citizens out there? >>Okay. I'm going to give you a four tips in 30 seconds. Number one, remember learning never ends be curious forever. You'll drive your career. Number two, remember companies that invest in analytics and data, citizens outperform those that don't McKinsey says it's about 1.4 times across many KPIs. Number three, stop just collecting the dots and start connecting them with that. You need a strong governance strategy and that's going to help you for the future because the biggest thing in the future is not going to be about analytics, accuracy. It's going to be about analytics, explainability. So accuracy is no longer going to be enough. You're going to have to explain your decisions and finally stay positive and forever test negative. >>Love it. Thank you very much fall. Um, and for all the data seasons is out there. Um, when it comes down to access to data, it's more than just getting your hands on the data. It's also knowing what you can do with it, how you can do that and what you definitely shouldn't be doing with it. Uh, thank you everyone out there and enjoy your learning and interaction with the community. Stay healthy. Bye-bye.

Published Date : Jun 17 2021



Kaustubh Das and Vijay Venugopal 5 28


 

>> Narrator: From around the globe, it's theCUBE presenting Future Cloud, one event, a world of opportunities, brought to you by Cisco. >> Okay, we're here with Kaustubh Das, who is the Senior Vice President, General Manager of Cloud and Compute at Cisco, and Vijay Venugopal, who is the Senior Director for Product Management for Cloud Compute at Cisco. KD, VJ, good to see you guys, welcome. >> Great to see you, Dave. >> Great to be here. >> KD, let's talk about cloud. You and I, the last time we were face to face was in Barcelona, where we love talking about cloud. And I always say to people, look, Cisco is not a hyperscaler, but the big public cloud players, they're like giving you a gift. They spent actually over $100 billion last year on capex, the big four. So you can build on that infrastructure. Cisco is all about hybrid cloud. So help us understand the strategy, maybe how you can leverage that build-out, and importantly what customers are telling you they want out of hybrid cloud. >> Yeah, that's a perfect question to start with, Dave. So yes, the hyperscalers have invested heavily building out their assets. There's a great lot of innovation coming from that space. There's also a great set of innovation coming from open source, and that's another source of a gift, in fact, to the IT community. But when I look at my customers, they're saying, well, how do I, in the context of my business, implement a strategy that takes into consideration everything that I have to manage, in terms of my contemporary workloads, in terms of my legacy, in terms of everything my developer community wants to do on DevOps, and really harness that innovation that's built in the public cloud, that's built in open source, that's built internally to me? And that naturally leads them down the path of a hybrid cloud strategy. And Cisco's mission is to provide, for that imperative, the simplest, most powerful platform to deliver hybrid cloud. And that platform is Intersight. We've been investing in Intersight; it's a SaaS service. Intersight delivers to them that bridge between their estates of today, their workloads of today, the need for them to be guardians of enterprise-grade resiliency, with the agility that's needed for the future: the embracing of cloud native, of new paradigms of DevOps models, the embracing of innovation coming from public cloud and from open source. And bridging those two is what Intersight has been doing. That's kind of the crux of our strategy. Of course, we have the entire portfolio behind it to support any version of that, whether that is on prem, in the cloud, hybrid cloud, multi-cloud and so forth. >> But if I understand it correctly from what I heard earlier today, Intersight is really a linchpin of that strategy, is it not? >> It really is, and maybe take a second to totally familiarize those who don't know Intersight with what it is. We started building this platform quite a few years back, and we built it from the ground up to be an immensely scalable, SaaS, super simple hybrid cloud platform. It's a platform that provides a slew of services inherently, and then on top of that there are suites of services: suites of services that are tied to infrastructure automation, from
Cisco as well as Cisco partners; and suites of services that have nothing to do with Cisco products from a hardware perspective and have more to do with cloud orchestration and cloud native. And Intersight and its suite of services continue to increase in pace and velocity of delivery. Just over the last two quarters we've announced a whole number of things, and we'll go a little bit deeper into some of those, but they span everything from infrastructure automation, to Kubernetes and delivering Kubernetes as a service, to workload optimization and having visibility into your cloud estate and how much it's costing, into your on-premises estate, into your workloads and how they're performing. It's got integrations with other tooling, with both Cisco AppDynamics as well as non-Cisco assets. And then it's got a whole slew of capabilities around orchestration, because at the end of the day the job of IT is to deliver something that works, and works at scale, that you can monitor and make sure is resilient. And that includes a workflow ability to say, you know, do this and do this and do this, or it includes other ways of automation, like infrastructure as code and so forth, and it includes self-service to expand on that. But Intersight is the world's simplest hybrid cloud platform, rapidly evolving, rapidly delivering new services, and we'll talk about some more of those today. >> Great, thank you, KD. VJ, let's bring you into the discussion. You guys recently made an announcement with HashiCorp. I was stoked, because even though it seemed like a long time ago, pre-COVID, I mean, in my predictions post I said HashiCorp was a name to watch. Our data partners at ETR, you look at the survey data and they really have become mainstream. You know, particularly we think very important in the whole multi-cloud discussion. And as well, they're attractive to customers: they have open source offerings, you can very easily experiment, smaller organizations can take advantage, but if you want to upgrade to enterprise features like clustering or whatever, you can plug right in, not a big complicated migration. So a very, very compelling story there. Why is this important? Why is this partnership important to Cisco's customers? >> Absolutely. You were spot on with every single thing that you said. Let me just start by paraphrasing what our ambition statement is in the cloud and compute group, right? Our ambition statement is to enable a cloud operating model for hybrid cloud. And what we mean by that is the ability to have extreme amounts of automation, orchestration and observability across your hybrid cloud IT operations. Now, developers and application teams get a great amount of agility in public clouds, and we're on a mission to bring that kind of agility and automation to the private cloud and to the data centers. And Intersight is a key platform and linchpin to enable that kind of operations, cloud-like operations, in the private clouds. And the key, as you rightly said: HashiCorp, you know, they were the inventors of the concept of infrastructure as code, and in Terraform they have the world's number one infrastructure-as-code platform. So it became natural for Cisco to enter into a technology partnership with HashiCorp, to integrate Intersight with HashiCorp's Terraform to bring the benefits of infrastructure as code to hybrid cloud operations.
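The plan-and-apply loop that this kind of infrastructure-as-code integration automates can be pictured with a few lines of Python driving the Terraform CLI directly. This is a sketch under stated assumptions, a locally installed terraform binary and a directory of .tf files, not the Intersight integration itself.

```python
# Minimal sketch of an infrastructure-as-code pipeline step: init, plan, apply.
import subprocess

def terraform(workdir: str, *args: str) -> None:
    # -chdir keeps the working directory explicit; check=True stops on errors.
    subprocess.run(["terraform", f"-chdir={workdir}", *args], check=True)

def plan_and_apply(workdir: str = "./infra") -> None:
    terraform(workdir, "init", "-input=false")
    terraform(workdir, "plan", "-input=false", "-out=tfplan")
    # A real pipeline would put a review or policy gate between plan and apply.
    terraform(workdir, "apply", "-input=false", "tfplan")

if __name__ == "__main__":
    plan_and_apply()
```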
And we've entered into a very tight integration and uh partnership where we allow developers devops teams and infrastructure or administrators to allow the use of infrastructure as code in a SAS delivered manner for both public and private club. So it's a very unique partnership and a unique integration that allows the benefits of cloud managed. I see to be delivered to hybrid cloud operations and we've been very happy and proud to be partnering with Russia Carbonara. >>Yeah, telephone gets very high marks from customers. The a lot of value there, the inner side integration adds to that value. Let's stay on cloud Native for a minute. We all talk about cloud native cady was sort of mentioning before you got the the core apps uh you want to protect those, make sure their enterprise create but they gotta be cool as well for developers. You're connecting to other apps in the cloud or wherever. How are you guys thinking about this? Cloud native trend. What other movies are you making in this regard? >>I mean cloud Native is there is one of the paramount I. D. Trends of today and we're seeing massive amounts of adoption of cloud native architecture in all modern applications. Now. Cloud native has become synonymous with kubernetes these days and communities has emerged as a de facto cloud native platform for modern cloud native app development. Now, what Cisco has done is we have created a brand new SAs delivered kubernetes service that is integrated with inter site, we call it the inter site community service for a. Ks and this just gave a little over one month ago now, what interstate Kubernetes service does is it delivers a cloud managed and cloud delivered kubernetes service that can be deployed on any supportive target infrastructure. It could be a Cisco infrastructure, it could be a third party infrastructure or it could even be public club. But think of it as kubernetes anywhere delivered, as says, managed from inside. It's a very powerful capability that we've just released into inter site to enable the power of communities and cognitive to be used to be used anywhere. But today we made a very important aspect because we have today announced the brand new Cisco service mess manager. The Cisco service mesh manager, which is available as an extension to I K s are doing decide basically we see service measures as being the future of networking. Right in the past we had layer to networking and layer three networking and now with service measures, application networking and layer seven networking is the next frontier of of networking. But you need to think about networking for the application age very differently, how it is managed, how it is deployed, it needs to be ready, developer friendly and developer centric. And so what we have done is we've built out an application networking strategy and built out the service match manager as a very simple way to deliver application networking through the consumers, like like developers and application teams. This is built on an acquisition that Cisco made recently of Banzai Cloud. And we've taken the assets of Banzai Cloud and deliver the Cisco service mash Manager as an extension to KS. That brings the promise of future networking and modern networking to application and development gives >>God thank you BJ. And so Katie, let's let's let's wrap this up. I mean, there was a lot in this announcement today, a lot of themes around openness, heterogeneity and a lot of functionality and value. Give us your final thoughts. >>Absolutely. 
So couple of things to close on. First of all. Um, inner side is the simplest, most powerful hybrid cloud platform out there. It enables that that cloud operating model that VJ talked about but enables that across cloud. So it's sad, it's relatively easy to get into it and give it a spin so that I'd highly encouraged anybody who's not familiar with it to try it out and anybody who is familiar with it to look at it again, because they're probably services in there that you didn't notice or didn't know last time you looked at it because we're moving so fast. So that's the first thing, the second thing I close with is, um we've been talking about this bridge that's kind of bridging, bridging uh your your on prem your open source, your cloud estates. And it's so important to to make that mental leap because uh in past generation we used to talk about integrating technologies together and then with Public cloud, we started talking about move to public cloud, but it's really how do we integrate, how do we integrate all of that innovation that's coming from the hyper scale is everything they're doing to innovate superfast. All of that innovation is coming from open source, all of that innovation that's coming from from companies around the world including Cisco. How do we integrate that to deliver an outcome? Because at the end of the day, if you're a cloud of Steam, if you're an idea of Steam, your job is to deliver an outcome and our mission is to make it super simple for you to do that. That's the mission we're on and we're hoping that everybody that's excited as we are about how simple we made that. >>Great, thank you a lot in this announcement today, appreciate you guys coming back on and help us unpack you know, some of the details. Thanks so much. Great having you. >>Thank you. Dave. >>Thank you everyone for spending some time with us. This is Dave Volonte and you're watching the cube, the leader in tech event >>coverage. >>Mm mm.

Published Date : Jun 2 2021



CISCO FUTURE CLOUD FULL V3


 

>>mhm, mm. All right. Mhm. Mhm, mm mm. Mhm. Yeah, mm. Mhm. Yeah, yeah. Mhm, mm. Okay. Mm. Yeah, Yeah. >>Mhm. Mhm. Yeah. Welcome to future cloud made possible by Cisco. My name is Dave Volonte and I'm your host. You know, the cloud is evolving like the universe is expanding at an accelerated pace. No longer is the cloud. Just a remote set of services, you know, somewhere up there. No, the cloud, it's extending to on premises. Data centers are reaching into the cloud through adjacent locations. Clouds are being connected together to each other and eventually they're gonna stretch to the edge and the far edge workloads, location latency, local laws and economics will define the value customers can extract from this new cloud model which unifies the operating experience independent of location. Cloud is moving rapidly from a spare capacity slash infrastructure resource to a platform for application innovation. Now, the challenge is how to make this new cloud simple, secure, agile and programmable. Oh and it has to be cloud agnostic. Now, the real opportunity for customers is to tap into a layer across clouds and data centers that abstracts the underlying complexity of the respective clouds and locations. And it's got to accommodate both mission critical workloads as well as general purpose applications across the spectrum cost, effectively enabling simplicity with minimal labor costs requires infrastructure i. E. Hardware, software, tooling, machine intelligence, AI and partnerships within an ecosystem. It's kind of accommodate a variety of application deployment models like serverless and containers and support for traditional work on VMS. By the way, it also requires a roadmap that will take us well into the next decade because the next 10 years they will not be like the last So why are we here? Well, the cube is covering Cisco's announcements today that connect next generation compute shared memory, intelligent networking and storage resource pools, bringing automation, visibility, application assurance and security to this new decentralized cloud. Now, of course in today's world you wouldn't be considered modern without supporting containers ai and operational tooling that is demanded by forward thinking practitioners. So sit back and enjoy the cubes, special coverage of Cisco's future cloud >>From around the globe. It's the Cube presenting future cloud one event, a world of opportunities brought to you by Cisco. >>We're here with Dejoy Pandey, a VP of emerging tech and incubation at Cisco. V. Joy. Good to see you. Welcome. >>Good to see you as well. Thank you Dave and pleasure to be here. >>So in 2020 we kind of had to redefine the notion of agility when it came to digital business or you know organizations, they had to rethink their concept of agility and business resilience. What are you seeing in terms of how companies are thinking about their operations in this sort of new abnormal context? >>Yeah, I think that's a great question. I think what what we're seeing is that pretty much the application is the center of the universe. And if you think about it, the application is actually driving brand recognition and the brand experience and the brand value. So the example I like to give is think about a banking app uh recovered that did everything that you would expect it to do. But if you wanted to withdraw cash from your bank you would actually have to go to the ATM and punch in some numbers and then look at your screen and go through a process and then finally withdraw cash. 
Think about what that would have, what what that would do in a post pandemic era where people are trying to go contact less. And so in a situation like this, the digitization efforts that all of these companies are going through and and the modernization of the automation is what is driving brand recognition, brand trust and brand experience. >>Yeah. So I was gonna ask you when I heard you say that, I was gonna say well, but hasn't it always been about the application, but it's different now, isn't it? So I wonder if you talk more about how the application is experience is changing. Yes. As a result of this new digital mandate. But how should organizations think about optimizing those experiences in this new world? >>Absolutely. And I think, yes, it's always been about the application, but it's becoming the center of the universe right now because all interactions with customers and consumers and even businesses are happening through that application. So if the application is unreliable or if the application is not available is untrusted insecure, uh, there's a problem. There's a problem with the brand, with the company and the trust that consumers and customers have with our company. So if you think about an application developer, the weight he or she is carrying on their shoulders is tremendous because you're thinking about rolling features quickly to be competitive. That's the only way to be competitive in this world. You need to think about availability and resiliency. Like you pointed out and experience, you need to think about security and trust. Am I as a customer or consumer willing to put my data in that application? So velocity, availability, Security and trust and all of that depends on the developer. So the experience, the security, the trust, the feature, velocity is what is driving the brand experience now. >>So are those two tensions that say agility and trust, you know, Zero Trust used to be a buzzword now it's a mandate. But are those two vectors counter posed? Can they be merged into one and not affect each other? Does the question makes sense? Right? Security usually handcuffs my speed. But how do you address that? >>Yeah that's a great question. And I think if you think about it today that's the way things are. And if you think about this developer all they want to do is run fast because they want to build those features out and they're going to pick and choose a piece and services that matter to them and build up their app and they want the complexities of the infrastructure and security and trust to be handled by somebody else is not that they don't care about it but they want that abstraction so that is handled by somebody else. And typically within an organization we've seen in the past where this friction between Netapp Sec ops I. T. Tops and and the cloud platform Teams and the developer on one side and these these frictions and these meetings and toil actually take a toll on the developer and that's why companies and apps and developers are not as agile as they would like to be. So I think but it doesn't have to be that way. So I think if there was something that would allow a developer to pick and choose, discover the apis that they would like to use connect those api is in a very simple manner and then be able to scale them out and be able to secure them and in fact not just secure them during the run time when it's deployed. We're right off the back when the fire up that I'd and start developing the application. Wouldn't that be nice? 
And as you do that, there is a smooth transition between that discovery connectivity and ease of consumption and security with the idea cops. Netapp psych ops teams and see source to ensure that they are not doing something that the organization won't allow them to do in a very seamless manner. >>I want to go back and talk about security but I want to add another complexity before we do that. So for a lot of organizations in the public cloud became a staple of keeping the lights on during the pandemic but it brings new complexities and differences in terms of latency security, which I want to come back to deployment models etcetera. So what are some of the specific networking challenges that you've seen with the cloud native architecture is how are you addressing those? >>Yeah. In fact, if you think about cloud, to me that is a that is a different way of seeing a distributed system. And if you think about a distributed system, what is at the center of the distributed system is the network. So my my favorite comment here is that the network is the wrong time for all distribute systems and modern applications. And that is true because if you think about where things are today, like you said, there's there's cloud assets that a developer might use in the banking example that I gave earlier. I mean if you want to build a contact less app so that you get verified, a customer gets verified on the app. They walk over to the ATM and they were broadcast without touching that ATM. In that kind of an example, you're touching the mobile Rus, let's say U S A P is you're touching cloud API is where the back end might sit. You're touching on primary PS maybe it's an oracle database or a mainframe even where transactional data exists. You're touching branch pipes were the team actually exists and the need for consistency when you withdraw cash and you're carrying all of this and in fact there might be customer data sitting in salesforce somewhere. So it's cloud API is a song premise branch. It's ass is mobile and you need to bring all of these things together and over time you will see more and more of these API is coming from various as providers. So it's not just cloud providers but saas providers that the developer has to use. And so this complexity is very, very real. And this complexity is across the wide open internet. So the application is built across this wide open internet. So the problems of discovery ability, the problems of being able to simply connect these apis and manage the data flow across these apis. The problems of consistency of policy and consumption because all of these areas have their own nuances and what they mean, what the arguments mean and what the A. P. I. Actually means. How do you make it consistent and easy for the developer? That is the networking problem. And that is a problem of building out this network, making traffic engineering easy, making policy easy, making scale out, scale down easy, all of that our networking problems. And so we are solving those problems uh Francisco. >>Yeah the internet is the new private network but it's not so private. So I want to go back to security. I often say that the security model of building a moat, you dig the moat, you get the hardened castle that's just outdated now that the queen is left her castle, I always say it's dangerous out there. And the point is you touched on this, it's it's a huge decentralized system and with distributed apps and data, that notion of perimeter security, it's just no longer valid. 
So I wonder if you could talk more about how you're thinking about this problem and you definitely address some of that in your earlier comments. But what are you specifically doing to address this and how do you see it evolving? >>Yeah, I mean, that's that's a very important point. I mean, I think if you think about again the wide open internet being the wrong time for all modern applications, what is perimeter security in this uh in this new world? I mean, it's to me it boils down to securing an API because again, going with that running example of this contact lists cash withdrawal feature for a bank, the ap wherever it's it's entre branch SAs cloud, IOS android doesn't matter that FBI is your new security perimeter. And the data object that is trying to access is also the new security perimeter. So if you can secure ap to ap communication and P two data object communication, you should be good. So that is the new frontier. But guess what software is buggy? Everybody's software not saying Cisco software, everybody's Softwares buggy. Uh software is buggy, humans are not reliable and so things mature, things change, things evolve over time. So there needs to be defense in depth. So you need to secure at the API layer had the data object layer, but you also need to secure at every layer below it so that you have good defense and depth if any layer in between is not working out properly. So for us that means ensuring ap to ap communication, not just during long time when the app has been deployed and is running, but during deployment and also during the development life cycle. So as soon as the developer launches an ID, they should be able to figure out that this api is security uses reputable, it has compliant, it is compliant to my to my organization's needs because it is hosted, let's say from Germany and my organization wants appears to be used only if they are being hosted out of Germany so compliance needs and and security needs and reputation. Is it available all the time? Is it secure? And being able to provide that feedback all the time between the security teams and the developer teams in a very seamless real time manner. Yes, again, that's something that we're trying to solve through some of the services that we're trying to produce in san Francisco. >>Yeah, I mean those that layered approach that you're talking about is critical because every layer has, you know, some vulnerability. And so you you've got to protect that with some depth in terms of thinking about security, how should we think about where where Cisco's primary value add is, I mean as parts of the interview has a great security business is growing business, Is it your intention to to to to add value across the entire value chain? I mean obviously you can't do everything so you've got a partner but so has the we think about Cisco's role over the next I'm thinking longer term over the over the next decade. >>Yeah, I mean I think so, we do come in with good strength from the runtime side of the house. So if you think about the security aspects that we haven't played today, uh there's a significant set of assets that we have around user security around around uh with with do and password less. We have significant assets in runtime security. I mean, the entire portfolio that Cisco brings to the table is around one time security, the secure X aspects around posture and policy that will bring to the table. And as you see, Cisco evolve over time, you will see us shifting left. 
I mean, I know it's an overused term, but that is where security is moving towards, and so that is where API security and data security are moving towards. So learning from what we have during runtime, because runtime is where you learn what's available and where you can apply all of the ML and AI models to figure out what works and what doesn't, taking those learnings, taking those catalogs, taking that reputation database and moving it into the deployment and development life cycle, and making sure that's part of that entire develop-to-deploy-to-runtime chain, is what you will see Cisco do over time. >>That's a fantastic, phenomenal perspective, Vijoy. Thanks for coming on theCUBE. Great to have you, and look forward to having you again. >>Absolutely. Thank you. >>In a moment we'll talk hybrid cloud applications, operations and potential gaps that need to be addressed, with Kaustubh Das and VJ Venugopal. You're watching theCUBE, the global leader in high tech coverage. >>Your cloud. It isn't just a cloud. It's everything flowing through it. It's alive, connecting users, applications, data and devices. And whether it's cloud native, hybrid or multi cloud, it's more distributed than ever. One company takes you inside, giving you the visibility and the insight you need to take action. One company has the vision to understand it all, the experience to securely connect it all, on any platform, in any environment, so you can work wherever work takes you. In a cloud-first world, between your cloud and being cloud smart, there's a bridge. Cisco, the bridge to possible. >>Okay. We're here with Kaustubh Das, who is the Senior Vice President, General Manager of Cloud and Compute at Cisco, and VJ Venugopal, who is the Senior Director for Product Management for Cloud Compute at Cisco. KD, VJ, good to see you guys, welcome. >>Great to see you, Dave. Good to be here. >>KD, let's talk about cloud. You and I, the last time we were face to face was in Barcelona, where we love talking about cloud, and I always say to people, look, Cisco is not a hyperscaler, but the big public cloud players, they're like giving you a gift. They spent actually over $100 billion last year on capex, the big four. So you can build on that infrastructure. Cisco is all about hybrid cloud. So help us understand the strategy, maybe how you can leverage that build-out, and importantly what customers are telling you they want out of hybrid cloud. >>Yeah, that's a perfect question to start with, Dave. So yes, the hyperscalers have invested heavily in building out their assets, and there's a great deal of innovation coming from that space. There's also a great set of innovation coming from open source, and that's another gift, in fact, to the IT community. But when I look at my customers, they're saying, well, how do I, in the context of my business, implement a strategy that takes into consideration everything that I have to manage, in terms of my contemporary workloads, in terms of my legacy, in terms of everything my developer community wants to do on DevOps, and really harness that innovation that's built in the public cloud, that's built in open source, that's built internally to me? And that naturally leads them down the path of a hybrid cloud strategy. And Cisco's mission is to provide, for that imperative, the simplest, most powerful platform to deliver hybrid cloud, and that platform is Intersight, which we've been investing in.
Intersight is a SaaS service, and Intersight delivers to them that bridge between their estates of today, their workloads of today, the need for them to be guardians of enterprise-grade resiliency, and the agility that's needed for the future: the embracing of cloud native, of new paradigms, of DevOps models, the embracing of innovation coming from public cloud and open source. Bridging those two is what Intersight has been doing. That's kind of the crux of our strategy. Of course we have the entire portfolio behind it to support any version of that, whether that is on prem, in the cloud, hybrid cloud, multi cloud and so forth. >>But if I understand it correctly from what I heard earlier today, Intersight is really a linchpin of that strategy, is it not? >>It really is, and let me take a second to familiarize those who don't know Intersight with what it is. We started building this platform quite a few years back, and we built it from the ground up to be an immensely scalable, SaaS, super simple hybrid cloud platform. It's a platform that provides a slew of services inherently, and then on top of that there are suites of services: the suites of services that are tied to infrastructure automation, from Cisco as well as Cisco partners; and the suites of services that have nothing to do with Cisco products from a hardware perspective, which have to do more with cloud orchestration and cloud native. And Intersight and its suites of services continue to increase in pace and velocity of delivery. Just over the last two quarters we've announced a whole number of things. We'll go a little bit deeper into some of those, but they span everything from infrastructure automation, to Kubernetes and delivering Kubernetes as a service, to workload optimization and having visibility into your cloud estate, how much it's costing, into your on-premise estate, into your workloads and how they're performing. It's got integrations with other tooling, with both Cisco AppDynamics as well as non-Cisco assets, and then it's got a whole slew of capabilities around orchestration, because at the end of the day the job of IT is to deliver something that works, and works at scale, that you can monitor and make sure is resilient. And that includes a workflow engine and the ability to say, do this and do this and do this, or other means of automation like infrastructure as code, and it includes self-service and so on. But Intersight is the world's simplest hybrid cloud platform, rapidly evolving, rapidly delivering new services, and we'll talk about some more of those today. >>Great, thank you, KD. VJ, let's bring you into the discussion. You guys recently made an announcement with HashiCorp. I was stoked, because even though it seemed like a long time ago, pre-COVID, in my predictions post I said HashiCorp was a name to watch. Our data partners at ETR, you look at the survey data, and they really have become mainstream, particularly, we think, very important in the whole multi cloud discussion. And as well, they're attractive to customers: they have open source offerings, you can very easily experiment, smaller organizations can take advantage, but if you want to upgrade to enterprise features like clustering or whatever, you can plug right in, not a big complicated migration. So a very, very compelling story there. Why is this important? Why is this partnership important to Cisco's customers?
>>Absolutely, and spot on with every single thing that you said. Let me just start by paraphrasing what our ambition statement is in the cloud and compute group. The ambition statement is to enable a cloud operating model for hybrid cloud, and what we mean by that is the ability to have extreme amounts of automation, orchestration and observability across your hybrid cloud IT operations. Developers and application teams get a great amount of agility in public clouds, and we're on a mission to bring that kind of agility and automation to the private cloud and to the data centers, and Intersight is the key platform and linchpin to enable that kind of operations, cloud-like operations, in the private clouds. And the key, as you rightly said: HashiCorp were the inventors of the concept of infrastructure as code, and in Terraform they have the world's number one infrastructure-as-code platform. So it became a natural partnership for Cisco to enter into a technology partnership with HashiCorp, to integrate Intersight with HashiCorp's Terraform and bring the benefits of infrastructure as code to hybrid cloud operations. We've entered into a very tight integration and partnership where we allow developers, DevOps teams and infrastructure administrators to use infrastructure as code in a SaaS-delivered manner for both public and private cloud. So it's a very unique partnership and a unique integration that allows the benefits of cloud-managed IaC to be delivered to hybrid cloud operations, and we've been very happy and proud to be partnering with HashiCorp on this. >>Yeah, Terraform gets very high marks from customers, a lot of value there, and the Intersight integration adds to that value. Let's stay on cloud native for a minute. We all talk about cloud native. KD was sort of mentioning before, you've got the core apps, you want to protect those, make sure they're enterprise grade, but they've got to be cool as well for developers, connecting to other apps in the cloud or wherever. How are you guys thinking about this cloud native trend? What other moves are you making in this regard? >>Cloud native is one of the paramount IT trends of today, and we're seeing massive amounts of adoption of cloud native architecture in all modern applications. Now, cloud native has become synonymous with Kubernetes these days, and Kubernetes has emerged as the de facto cloud native platform for modern cloud native app development. What Cisco has done is create a brand new SaaS-delivered Kubernetes service that is integrated with Intersight. We call it the Intersight Kubernetes Service, or IKS, and it became generally available a little over one month ago. What Intersight Kubernetes Service does is deliver a cloud-managed and cloud-delivered Kubernetes service that can be deployed on any supported target infrastructure. It could be Cisco infrastructure, it could be third-party infrastructure, or it could even be public cloud. Think of it as Kubernetes anywhere, delivered as SaaS, managed from Intersight. It's a very powerful capability that we've just released into Intersight, to enable the power of Kubernetes and cloud native to be used anywhere.
But today we added a very important aspect, because we also announced the brand new Cisco Service Mesh Manager, which is available as an extension to IKS within Intersight. Basically, we see service meshes as being the future of networking. In the past we had layer two networking and layer three networking, and now with service meshes, application networking and layer seven networking is the next frontier of networking. But you need to think about networking for the application age very differently, how it is managed, how it is deployed. It needs to be developer friendly and developer centric. And so what we've done is build out an application networking strategy, and build out the Service Mesh Manager as a very simple way to deliver application networking to its consumers, like developers and application teams. This is built on an acquisition that Cisco made recently, of Banzai Cloud, and we've taken the assets of Banzai Cloud and delivered the Cisco Service Mesh Manager as an extension to IKS that brings the promise of future networking and modern networking to application and development teams. >>Got it, thank you, VJ. And so KD, let's wrap this up. There was a lot in this announcement today, a lot of themes around openness, heterogeneity, and a lot of functionality and value. Give us your final thoughts. >>Absolutely. So, a couple of things to close on. First of all, Intersight is the simplest, most powerful hybrid cloud platform out there. It enables that cloud operating model that VJ talked about, but enables it across clouds. It's SaaS, so it's relatively easy to get into it and give it a spin, and I'd highly encourage anybody who's not familiar with it to try it out, and anybody who is familiar with it to look at it again, because there are probably services in there that you didn't notice or didn't know the last time you looked, because we're moving so fast. So that's the first thing. The second thing I'll close with is this bridge we've been talking about, bridging your on-prem, your open source and your cloud estates. It's so important to make that mental leap, because in a past generation we used to talk about integrating technologies together, and then with public cloud we started talking about moving to public cloud. But it's really about how we integrate all of that innovation that's coming from the hyperscalers, everything they're doing to innovate super fast, all of that innovation coming from open source, all of that innovation coming from companies around the world, including Cisco. How do we integrate that to deliver an outcome? Because at the end of the day, if you're a cloud team, if you're an IT team, your job is to deliver an outcome, and our mission is to make it super simple for you to do that. That's the mission we're on, and we're hoping everybody is as excited as we are about how simple we've made that. >>Great, thank you. A lot in this announcement today. Appreciate you guys coming back on and helping us unpack some of the details. Thanks so much. Great having you. >>Thank you. >>Dave, in a moment we're gonna come back and talk about disruptive technologies and futures in the age of hybrid cloud with Vikas Ratna and James Leach. You're watching theCUBE, the global leader in high tech coverage. >>What if your server box wasn't a box at all?
What if it could do anything run anything? >>Be any box you >>need with massive scale precision and intelligence managed and optimized from the cloud integrated with all your clouds, private, public or hybrid. So you can build whatever you need today and tomorrow. The potential of this box is unlimited. Unstoppable unseen ever before. Unbox the future with Cisco UCS X series powered by inter site >>Cisco. >>The bridge to possible. Yeah >>we're here with Vegas Rattana who's the director of product management for Pcs at Cisco. And James Leach is the director of business development for U. C. S. At the Cisco as well. We're gonna talk about computing in the age of hybrid cloud. Welcome gentlemen. Great to see you. >>Thank you. >>Thank you because let's start with you and talk about a little bit about computing architectures. We know that they're evolving. They're supporting new data intensive and other workloads especially as high performance workload requirements. What's this guy's point of view on all this? I mean specifically interested in your thoughts on fabrics. I mean it's kind of your wheelhouse, you've got accelerators. What are the workloads that are driving these evolving technologies and how how is it impacting customers? What are you seeing? >>Sure. First of all, very excited to be here today. You're absolutely right. The pace of innovation and foundational platform ingredients have just been phenomenal in recent years. The fabric that's writers that drives the processing power, the Golden city all have been evolving just an amazing place and the peace will only pick up further. But ultimately it is all about applications and the way applications leverage those innovations. And we do see applications evolving quite rapidly. The new classes of applications are evolving to absorb those innovations and deliver much better business values. Very, very exciting time step. We're talking about the impact on the customers. Well, these innovations have helped them very positively. We do see significant challenges in the data center with the point product based approach of delivering these platforms, innovations to the applications. What has happened is uh, these innovations today are being packaged as point point products to meet the needs of a specific application and as you know, the different applications have no different needs. Some applications need more to abuse, others need more memory, yet others need, you know, more course, something different kinds of fabrics. As a result, if you walk into a data center today, it is very common to see many different point products in the data center. This creates a manageability challenge. Imagine the aspect of managing, you know, several different form factors want you to you purpose built servers. The variety of, you know, a blade form factor, you know, this reminds me of the situation we had before smartphones arrived. You remember the days when you when we used to have a GPS device for navigation system, a cool music device for listening to the music. A phone device for making a call camera for taking the photos right? And we were all excited about it. It's when a smart phones the right that we realized all those cool innovations could be delivered in a much simpler, much convenient and easy to consume through one device. And you know, I could uh, that could completely transform our experience. 
So we see the customers were benefiting from these innovations to have a way to consume those things in a much more simplistic way than they are able to go to that. >>And I like to look, it's always been about the applications. But to your point, the applications are now moving in a much faster pace. The the customer experience is expectation is way escalated. And when you combine all these, I love your analogy there because because when you combine all these capabilities, it allows us to develop new Applications, new capabilities, new customer experiences. So that's that I always say the next 10 years, they ain't gonna be like the last James Public Cloud obviously is heavily influencing compute design and and and customer operating models. You know, it's funny when the public cloud first hit the market, everyone we were swooning about low cost standard off the shelf servers in storage devices, but it quickly became obvious that customers needed more. So I wonder if you could comment on this. How are the trends that we've seen from the hyper scale, Is how are they filtering into on prem infrastructure and maybe, you know, maybe there's some differences there as well that you could address. >>Absolutely. So I'd say, first of all, quite frankly, you know, public cloud has completely changed the expectations of how our customers want to consume, compute, right? So customers, especially in a public cloud environment, they've gotten used to or, you know, come to accept that they should consume from the application out, right? They want a very application focused view, a services focused view of the world. They don't want to think about infrastructure, right? They want to think about their application, they wanna move outward, Right? So this means that the infrastructure basically has to meet the application where it lives. So what that means for us is that, you know, we're taking a different approach. We're we've decided that we're not going to chase this single pane of glass view of the world, which, frankly, our customers don't want, they don't want a single pane of glass. What they want is a single operating model. They want an operating model that's similar to what they can get at the public with the public cloud, but they wanted across all of their cloud options they wanted across private cloud across hybrid cloud options as well. So what that means is they don't want to just consume infrastructure services. They want all of their cloud services from this operating model. So that means that they may want to consume infrastructure services for automation Orchestration, but they also need kubernetes services. They also need virtualization services, They may need terror form workload optimization. All of these services have to be available, um, from within the operating model, a consistent operating model. Right? So it doesn't matter whether you're talking about private cloud, hybrid cloud anywhere where the application lives. It doesn't matter what matters is that we have a consistent model that we think about it from the application out. And frankly, I'd say this has been the stumbling block for private cloud. Private cloud is hard, right. This is why it hasn't been really solved yet. This is why we had to take a brand new approach. And frankly, it's why we're super excited about X series and inter site as that operating model that fits the hybrid cloud better than anything else we've seen >>is acute. 
First, first time technology vendor has ever said it's not about a single pane of glass because I've been hearing for decades, we're gonna deliver a single pane of glass is going to be seamless and it never happens. It's like a single version of the truth. It's aspirational and, and it's just not reality. So can we stay in the X series for a minute James? Uh, maybe in this context, but in the launch that we saw today was like a fire hose of announcements. So how does the X series fit into the strategy with inter site and hybrid cloud and this operating model that you're talking about? >>Right. So I think it goes hand in hand, right. Um the two pieces go together very well. So we have uh, you know, this idea of a single operating model that is definitely something that our customers demand, right? It's what we have to have, but at the same time we need to solve the problems of the cost was talking about before we need a single infrastructure to go along with that single operating model. So no longer do we need to have silos within the infrastructure that give us different operating models are different sets of benefits when you want infrastructure that can kind of do all of those configurations, all those applications. And then, you know, the operating model is very important because that's where we abstract the complexity that could come with just throwing all that technology at the infrastructure so that, you know, this is, you know, the way that we think about is the data center is not centered right? It's no longer centered applications live everywhere. Infrastructure lives everywhere. And you know, we need to have that consistent operating model but we need to do things within the infrastructure as well to take full advantage. Right? So we want all the sas benefits um, of a Ci CD model of, you know, the inter site can bring, we want all that that proactive recommendation engine with the power of A I behind it. We want the connected support experience went all of that. They want to do it across the single infrastructure and we think that that's how they tie together, that's why one or the other doesn't really solve the problem. But both together, that's why we're here. That's why we're super excited. >>So Vegas, I make you laugh a little bit when I was an analyst at I D C, I was deep in infrastructure and then when I left I was doing, I was working with application development heads and like you said, uh infrastructure, it was just a, you know, roadblock but but so the target speakers with Cisco announced UCS a decade ago, I totally missed it. I didn't understand it. I thought it was Cisco getting into the traditional server business and it wasn't until I dug in then I realized that your vision was really to transform infrastructure, deployment and management and change them all. I was like, okay, I got that wrong uh but but so let's talk about the the ecosystem and the joint development efforts that are going on there, X series, how does it fit into this, this converged infrastructure business that you've, you've built and grown with partners, you got storage partners like Netapp and Pure, you've got i SV partners in the ecosystem. We see cohesive, he has been a while since we we hung out with all these companies at the Cisco live hopefully next year, but tell us what's happening in that regard. >>Absolutely, I'm looking forward to seeing you in the Cisco live next year. You know, they have absolutely you brought up a very good point. 
You see this is about the ecosystem that it brings together, it's about making our customers bring up the entire infrastructure from the core foundational hardware all the way to the application level so that they can, you know, go off and running pretty quick. The converse infrastructure has been one of the corners 2.5 hour of the strategy, as you pointed out in the last decade. And and and I'm I'm very glad to share that converse infrastructure continues to be a very popular architecture for several enterprise applications. Seven today, in fact, it is the preferred architecture for mission critical applications where performance resiliency latency are the critical requirements there almost a de facto standards for large scale deployments of virtualized and business critical data bases and so forth with X series with our partnerships with our Stories partners. Those architectures will absolutely continue and will get better. But in addition as a hybrid cloud world, so we are now bringing in the benefits of canvas in infrastructure uh to the world of hybrid cloud will be supporting the hybrid cloud applications now with the CIA infrastructure that we have built together with our strong partnership with the Stories partners to deliver the same benefits to the new ways applications as well. >>Yeah, that's what customers want. They want that cloud operating model. Right, go ahead please. >>I was going to say, you know, the CIA model will continue to thrive. It will transition uh it will expand the use cases now for the new use cases that were beginning to, you know, say they've absolutely >>great thank you for that. And James uh have said earlier today, we heard this huge announcement, um a lot of lot of parts to it and we heard Katie talk about this initiative is it's really computing built for the next decade. I mean I like that because it shows some vision and you've got a road map that you've thought through the coming changes in workloads and infrastructure management and and some of the technology that you can take advantage of beyond just uh, you know, one or two product cycles. So, but I want to understand what you've done here specifically that you feel differentiates you from other competitive architectures in the industry. >>Sure. You know that's a great question. Number one. Number two, um I'm frankly a little bit concerned at times for for customers in general for our customers customers in general because if you look at what's in the market, right, these rinse and repeat systems that were effectively just rehashes of the same old design, right? That we've seen since before 2000 and nine when we brought you C. S to market these are what we're seeing over and over and over again. That's that's not really going to work anymore frankly. And I think that people are getting lulled into a false sense of security by seeing those things continually put in the market. We rethought this from the ground up because frankly future proofing starts now, right? If you're not doing it right today, future proofing isn't even on your radar because you're not even you're not even today proved. So we re thought the entire chassis, the entire architecture from the ground up. Okay. If you look at other vendors, if you look at other solutions in the market, what you'll see is things like management inside the chassis. That's a great example, daisy chaining them together >>like who >>needs that? Who wants that? Like that kind of complexity is first of all, it's ridiculous. 
Um, second of all, um, if you want to manage across clouds, you have to do it from the cloud, right. It's just common sense. You have to move management where it can have the scale and the scope that it needs to impact your entire domain, your world, which is much larger now than it was before. We're talking about true hybrid cloud here. Right. So we had to solve certain problems that existed in the traditional architecture. You know, I can't tell you how many times I heard you talk about the mid plane is a great example. You know, the mid plane and a chastity is a limiting factor. It limits us on how much we can connect or how much bandwidth we have available to the chassis. It limits us on air flow and other things. So how do you solve that problem? Simple. Just get rid of it. Like we just we took it out, right. It's not no longer a problem. We designed an architecture that doesn't need it. It doesn't rely on it. No forklift upgrades. So, as we start moving down the path of needing liquid cooling or maybe we need to take advantage of some new, high performance, low latency fabrics. We can do that with almost. No problem at all. Right, So, we don't have any forklift upgrades. Park your forklift on the side. You won't need it anymore because you can upgrade gradually. You can move along as technologies come into existence that maybe don't even exist. They they may not even be on our radar today to take advantage of. But I like to think of these technologies, they're really important to our customers. These are, you know, we can call them disruptive technologies. The reality is that we don't want to disrupt our customers with these technologies. We want to give them these technologies so they can go out and be disruptive themselves. Right? And this is the way that we've designed this from the ground up to be easy to consume and to take advantage of what we know about today and what's coming in the future that we may not even know about. So we think this is a way to give our customers that ultimate capability flexibility and and future proofing. >>I like I like that phrase True hybrid cloud. It's one that we've used for years and but to me this is all about that horizontal infrastructure that can support that vision of what true hybrid cloud is. You can support the mission critical applications. You can you can develop on the system and you can support a variety of workload. You're not locked into one narrow stovepipe and that does have legs, Vegas and James. Thanks so much for coming on the program. Great to see you. >>Yeah. Thank you. Thank you. >>When we return shortly thomas Shiva who leads Cisco's data center group will be here and thomas has some thoughts about the transformation of networking I. T. Teams. You don't wanna miss what he has to say. You're watching the cube. The global leader in high tech company. Okay, >>mm. Mhm, mm. Okay. Mhm. Yeah. Mhm. Yeah. >>Mhm. Yes. Yeah. Okay. We're here with thomas Shiva who is the Vice president of Product Management, A K A VP of all things data center, networking STN cloud. You name it in that category. Welcome thomas. Good to see you again. >>Hey Sam. Yes. Thanks for having me on. >>Yeah, it's our pleasure. Okay, let's get right into observe ability. When you think about observe ability, visibility, infrastructure monitoring problem resolution across the network. How does cloud change things? In other words, what are the challenges that networking teams are currently facing as they're moving to the cloud and trying to implement hybrid cloud? 
>>Yeah, visibility, as always, is very, very important. And quite frankly, it's not just the networking team, it's actually the application team too, right? And as you pointed out, the underlying impetus to what's going on here is that the data center is where the data is. I think we said this a couple of years back, and really what happens is that the applications are going to be deployed in different locations, whether it's in a public cloud, whether it's on-prem, and they are built differently, built as microservices, and the same application might actually be distributed as well. And so what that really means is that as an operator, as well as a user, you need better visibility: where are my pieces? And you need to be able to correlate between where the app is and what the underlying network is in these different locations, so you actually have good knowledge of whether the app is running fantastic, or sometimes not. So I think that's really the problem statement, what we're trying to go after with observability. >>Okay, let's double click on that. A lot of customers tell me that you've got to stare at log files until your eyes bleed, and you've got to bring in guys with lab coats who have PhDs to figure all this stuff out. So you just described that it's getting more complex, but at the same time you have to simplify things. So how are you doing that? >>Correct. So what we basically have done is build this fantastic product that is called ThousandEyes. What it does is basically, as the name says, which I think is a fantastic name, you have these sensors everywhere, and you can have a good correlation on links: if I run from a site to a site, from a site to a cloud, from a cloud to a cloud, you basically can measure what the performance of these links is. And so what we're doing here is actually extending the footprint of these ThousandEyes agents. Instead of just having them in virtual machines and clouds, we are now embedding them in the Cisco network devices. We announced this with the Catalyst 9000, and we're extending it now to our Catalyst 8000 product line for the SD-WAN products, as well as to the data center products, the Nexus line. And so what you see is, as I half-jokingly say, you have a thousand eyes, you get a million insights, and you get a billion dollars of improvements for how your applications run. And this is really the power of tying together the footprint of where the network is with the visibility of what is going on, so you actually know the application behavior that is attached to this network. >>I see. Okay, so as the cloud evolves and expands and connects, you're actually enabling ThousandEyes to go further, not just confined within a single data center location, but out to the network, across clouds, et cetera. >>Correct. Wherever the network is, you're going to have a ThousandEyes sensor, and you can bring this together. You can quite frankly pick, if you want to say, hey, I have my application in public cloud provider A, domain one, and I have another one in domain two, I can monitor that link. I can also have a user at a campus or branch location, put an agent there, and then monitor the connectivity from that branch location all the way to, let's say, the corporation's data center, our headquarters, or to the cloud.
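Reduced to a few lines, the agent idea looks roughly like this: each vantage point, a branch, a VM or a network device, repeatedly measures the paths it cares about and ships samples to a central collector for correlation. This is a conceptual sketch only; it does not use the ThousandEyes product or its API, and the target URLs and payload shape are invented for illustration.

```python
# Toy illustration of distributed probes reporting to a central collector.
# Not the ThousandEyes API: endpoints and payload shape are made up.
import json
import socket
import time
import urllib.error
import urllib.request

TARGETS = [
    "https://app.internal.example.com/health",   # on-prem app (assumed URL)
    "https://service.example-cloud.com/health",  # cloud-hosted backend (assumed URL)
]
COLLECTOR = "https://collector.example.com/metrics"  # assumed central endpoint

def probe(url: str) -> dict:
    """Measure one round trip to a target and classify the outcome."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
    except OSError as exc:  # URLError, timeouts, connection errors
        return {"target": url, "ok": False, "error": str(exc)}
    return {"target": url, "ok": True, "status": status,
            "latency_ms": round((time.monotonic() - start) * 1000, 1)}

def report(samples: list) -> None:
    """Ship one batch of samples, tagged with this agent's hostname."""
    body = json.dumps({"agent": socket.gethostname(), "samples": samples}).encode()
    req = urllib.request.Request(COLLECTOR, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    while True:
        report([probe(t) for t in TARGETS])
        time.sleep(60)   # one sample per minute from this vantage point
```

The value in the real product comes from running such measurements from many vantage points at once and correlating them centrally with the application topology, which is what the embedded agents on switches and routers enable.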
And I can have these probes, and we have the visibility to say, hey, if there's a performance issue, I know where the issue is, and then I obviously can use all the other tools that we have to address it. >>All right, let's talk about the cloud operating model. Everybody tells us it's really the change in the model that drives big numbers in terms of ROI, and I want you to maybe address how you're bringing automation and DevOps to this world of hybrid. Specifically, how is Cisco enabling IT organizations to move to a cloud operating model as that cloud definition expands? >>Yeah, that's another interesting topic beyond the observability. Really what we're seeing, and this has been going on for, I want to say, a couple of years now, is this transition to operating infrastructure, as a networking team, more like a service, like what you would expect from a cloud provider. It's really around the network team offering services the way a cloud provider does, and that's really the meaning of the cloud operating model. But this is infrastructure running in your own data center, where that infrastructure, as well as whatever runs in the public cloud, is operated like a cloud service. And we have been on this journey for a while. One of the examples is moving some of the controller software assets that customers can deploy on-prem to an instance that they can deploy with a cloud provider, and just instantiate things there and run it that way. The latest example of this is our Identity Services Engine, which is now in limited availability on AWS and will become available in the middle of this year on other public clouds as well, as a service. You can just go to the marketplace, load it there, and now you can start running your policy control in a cloud, managing your access infrastructure in your data center, in your campus, wherever you want to do it. And so that's just one example of how we see our customers' network operations teams taking advantage of a cloud operating model, basically deploying their tools where they need them and when they need them. >>So what's the scope of, I hope I'm saying it right, ISE, right? I think it's called ISE. What's the scope of that? For instance, can it, in effect, simplify my security approach? >>Absolutely. That's now coming to the beauty of the product itself. A lot of people are asking, how do I get to a zero trust approach to networking? How do I get to a much more dynamic, flexible segmentation in my infrastructure, whether this is in the campus access or in the data center? And ISE helps today: you can use it as the point where you define your policies and then connect from there. In this particular case, we would instantiate ISE in the cloud as a software load. You can then connect and say, hey, I want to manage and program my network infrastructure in my data center and on my campus, going through the respective controller, whether that is DNA Center for the campus or the ACI policy controller. And so what you get as an effect is a very elegant way to automatically manage, in one place, what my policy is, and then drive the right segmentation in your network infrastructure. >>Zero trust, you know, pre-pandemic it was kind of a buzzword. Now it's become a mandate.
I wonder if we could talk about right. I mean I wonder if you talk about cloud native apps, you got all these developers that are working inside organizations. They're maintaining legacy apps. They're connecting their data to systems in the cloud there, sharing that data. I need these developers, they're rapidly advancing their skill sets. How is Cisco enabling its infrastructure to support this world of cloud? Native making infrastructure more responsive and agile for application developers? >>Yeah. So, you know, we're going to the top of his visibility, we talked about the operating model, how how our network operators actually want to use tools going forward. Now, the next step to this is it's not just the operator. How do they actually, where do they want to put these tools, how they, how they interact with these tools as well as quite frankly as how, let's say, a devops team on application team or Oclock team also wants to take advantage of the program ability of the underlying network. And this is where we're moving into this whole cloud native discussion, right? Which is really two angles, that is the cloud native way, how applications are being built. And then there is the cloud native way, how you interact with infrastructure. Right? And so what we have done is we're a putting in place the on ramps between clouds and then on top of it we're exposing for all these tools, a P I S that can be used in leverage by standard uh cloud tools or uh cloud native tools. Right. And one example or two examples we always have and again, we're on this journey for a while is both answerable uh script capabilities that exist from red hat as well as uh Ashitaka from capabilities that you can orchestrate across infrastructure to drive infrastructure, automation and what what really stands behind it is what either the networking operations team wants to do or even the ap team. They want to be able to describe the application as a code and then drive automatically or programmatically in situation of infrastructure needed for that application. And so what you see us doing is providing all these capability as an interface for all our network tools. Right. Whether it's this ice that I just mentioned, whether this is our D. C. And controllers in the data center, uh whether these are the controllers in the in the campus for all of those, we have cloud native interfaces. So operator or uh devops team can actually interact directly with that infrastructure the way they would do today with everything that lives in the cloud, with everything how they brought the application. >>This is key. You can't even have the conversation of op cloud operating model that includes and comprises on prem without programmable infrastructure. So that's that's very important. Last question, thomas our customers actually using this, they made the announcement today. There are there are there any examples of customers out there doing this? >>We do have a lot of customers out there that are moving down the past and using the D. D. Cisco high performance infrastructure, but also on the compute side as well as on an exercise one of the customers. Uh and this is like an interesting case. It's Rakuten uh record and is a large tackle provider, a mobile five G. Operator uh in Japan and expanding and is in different countries. Uh and so people something oh, cloud, you must be talking about the public cloud provider, the big the big three or four. 
But if you look at it, there's a lot of the tackle service providers are actually cloud providers as well and expanding very rapidly. And so we're actually very proud to work together with with Rakuten and help them building a high performance uh, data and infrastructure based on hard gig and actually phone a gig uh to drive their deployment to. It's a five G mobile cloud infrastructure, which is which is uh where the whole the whole world where traffic is going. And so it's really exciting to see this development and see the power of automation visibility uh together with the high performance infrastructure becoming reality and delivering actually services, >>you have some great points you're making there. Yes, you have the big four clouds, your enormous, but then you have a lot of actually quite large clouds. Telcos that are either approximate to those clouds or they're in places where those hyper scholars may not have a presence and building out their own infrastructure. So so that's a great case study uh thomas, hey, great having you on. Thanks so much for spending some time with us. >>Yeah, same here. I appreciate it. Thanks a lot. >>I'd like to thank Cisco and our guests today V Joy, Katie VJ, viscous James and thomas for all your insights into this evolving world of hybrid cloud, as we said at the top of the next decade will be defined by an entirely new set of rules. And it's quite possible things will evolve more quickly because the cloud is maturing and has paved the way for a new operating model where everything is delivered as a service, automation has become a mandate because we just can't keep throwing it labor at the problem anymore. And with a I so much more as possible in terms of driving operational efficiencies, simplicity and support of the workloads that are driving the digital transformation that we talk about all the time. This is Dave Volonte and I hope you've enjoyed today's program. Stay Safe, be well and we'll see you next time.

Published Date : May 27 2021


Ricardo Rocha, CERN | KubeCon + CloudNativeCon Europe 2021 - Virtual


 

>>From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. Hello, welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 2021, part of the CNCF's continuing CUBE partnership. We're virtual here because we're not in person; soon we'll be out of the pandemic and hopefully in person for the next event. I'm John Furrier, your host of theCUBE. We're here with Ricardo Rocha, computing engineer at CERN. Great to see you, Ricardo. Thanks for remoting in all the way across the world. Thanks for coming in. >>Hello, pleasure. Happy to be here. >>I saw your talk with Priyanka on LinkedIn and all around the web. Great stuff as always, you guys do great work over there at CERN. Talk about what's going on with you and the two speaking sessions you have at KubeCon. Pretty exciting news and exciting sessions happening here. So take us through the sessions. >>Yeah. So actually the two sessions are kind of showing the two types of things we do with Kubernetes. We have a lot of services moving to Kubernetes, but the first one is more on the services we have in house. CERN is known for having a lot of data and requests, requiring a lot of computing capacity to analyze all this data, but actually we also have a very large community, and a lot of users and people interested in the stuff we do. So the first session will actually show how we've been migrating our web infrastructure into Kubernetes, in this case actually OpenShift. And the challenge there is to run a very large number of websites on Kubernetes. We run more than 1000 websites, and there will be a demonstration of how we do all the management of the website life cycle, including upgrading and deploying new websites, and of an operator that was developed for this purpose. And then, on the other side, I will give, with a colleague, a talk about machine learning. Machine learning has been a big topic for us; a lot of our workloads are migrating to accelerators and can benefit a lot from machine learning. So we're giving a talk about a new service that we've deployed on top of Kubernetes where we try to manage the lifecycle of machine learning workloads, from data preparation all the way to serving the models, also exploring the Kubernetes features and integrating accelerators, a lot of accelerators. >>So one part, the one session, is the large scale deployment, Kubernetes key to that, and then the machine learning is essentially a service for other people to use, right? Take me through the first one, the large scale deployment. What's the key innovation there, in your opinion? >>Yeah, I think compared to the infrastructure we had before, it is this notion that we can develop an operator that will manage a resource, in this case a website. And this is something that is not always obvious when people start with Kubernetes: it's not just an orchestrator, it's really the API and the capability of managing a huge amount of resources, including custom resources. So the possibility to develop this operator and then manage the lifecycle of something that was defined in house and that fits our needs. There are challenges there, because we have a large number of websites and they can be pretty active.
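The pattern Ricardo is describing, a website defined as a custom resource with an operator reconciling it, can be sketched with the Kubernetes Python client. This is a minimal illustration only, not CERN's actual operator: the group, kind and spec fields below are hypothetical stand-ins for whatever CRD the team defined in house.

```python
# Minimal sketch of the operator pattern: a hypothetical "WebSite" custom
# resource plus a naive reconcile loop. Group/kind/fields are illustrative only.
from kubernetes import client, config, watch

GROUP, VERSION, PLURAL = "webservices.example.ch", "v1alpha1", "websites"

def create_site(api: client.CustomObjectsApi, name: str, namespace: str = "web"):
    """Declare the desired state of one site as a single custom object."""
    site = {
        "apiVersion": f"{GROUP}/{VERSION}",
        "kind": "WebSite",
        "metadata": {"name": name},
        "spec": {"template": "drupal", "replicas": 2, "tls": True},
    }
    return api.create_namespaced_custom_object(
        group=GROUP, version=VERSION, namespace=namespace, plural=PLURAL, body=site
    )

def reconcile_forever(namespace: str = "web"):
    """Watch WebSite objects and converge the cluster towards their spec."""
    config.load_kube_config()                       # or load_incluster_config()
    api = client.CustomObjectsApi()
    w = watch.Watch()
    for event in w.stream(api.list_namespaced_custom_object,
                          group=GROUP, version=VERSION,
                          namespace=namespace, plural=PLURAL):
        obj, kind = event["object"], event["type"]  # ADDED / MODIFIED / DELETED
        name = obj["metadata"]["name"]
        if kind in ("ADDED", "MODIFIED"):
            # A real operator would create or patch Deployments, Services,
            # Ingresses, certificates, etc. to match obj["spec"].
            print(f"reconciling site {name} -> {obj['spec']}")
        elif kind == "DELETED":
            print(f"cleaning up resources for {name}")

if __name__ == "__main__":
    reconcile_forever()
```

In practice a framework such as Kopf or the Operator SDK would handle the watch, retry and status machinery, but the shape is the same: one small custom object per website, and a controller that owns everything underneath it.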
We also have some scaling issues on the storage that serves these websites, and we'll give some details during the talk as well. >>So Kubernetes storage, this is all kind of under the covers, making this easier. And the machine learning plays nicely into that. Take us through the machine learning use case. What's going on there, what was the discovery, how did you guys put that together? What are the key elements? >>Right. So the main challenge there has been that machine learning is quite popular, but it's quite spread out as well. We have multiple groups focusing on this, but there was no obvious way to centralize not only the resource usage, to make it more efficient, but also the knowledge of how these procedures can be done. So what we are trying to do is offer a service to all our users where we help them with the infrastructure, so that they don't have to focus on that and can focus just on their workloads. We do everything from exposing the data systems that we have in house, so that they can access the data and do data preparation, then doing some iteration using notebooks, then doing distributed training with a potentially large amount of GPUs and storage, and serving the models. And all of this is managed with the Kubernetes cluster underneath. We had a lot of knowledge of how to handle Kubernetes and all the features that everyone likes: scalability, reliability. Autoscaling is very important for this type of workload. This is key. >>Yeah, it's interesting to see how Kubernetes is maturing. Congratulations on the projects; they're probably going to continue to scale. This reminds me of when I was coming into the business in the late eighties and early nineties, with TCP/IP and the OSI model: you saw the standards evolve and get settled in, and then boom, innovation everywhere. And that took a while to digest and scale up; it's happening much faster now with Kubernetes. I have to ask you, what's your experience with the question that people are looking to get answered, which is, as Kubernetes goes to the next generation, the next step, people want to integrate. So how is Kubernetes exposing APIs as integration points for tools and other things? Can you share your experience on where this is going, what's happening now and where it goes? Because we know there's no debate, people like the Kubernetes aspect of it, but now integration is the conversation. Can you share your thoughts on that? >>I can try. I would say it's a moving target, but the fact that there's such a rich ecosystem around Kubernetes, with all the cloud native projects, is real proof of the popularity of the API. And this is also something that, after we had the first step of deploying and understanding Kubernetes, we started seeing the potential of: it's not reaching only the infrastructure itself, it's reaching all the layers, all the stack that we support in house, on premises. And it's also opening up doors to easily scale into external resources as well. So what we've been trying to tell our users is to rely on these integrations as much as possible.
So this means the application lifecycle being managed with things like Helm and GitOps, but also the monitoring being managed with Prometheus, and once you're happy with your deployment in house, we have ways to scale out to external resources, including public clouds. And this is really proof that all these APIs are not only popular but incredibly useful, because there's such a rich ecosystem around them. >>So talk about the role of data in this. Obviously machine learning is something that everyone is interested in, as you get infrastructure as code and DevOps and DevSecOps, as everything's shifting left. I love that narrative. This is all proving maturation. Data is critical, right? So now you get real-time information, real-time data, and the expectation is that the apps integrate the data. What's your view on how this is progressing from your standpoint? Because machine learning, and you mentioned acceleration, are becoming part of the system. Caching has always done that, and so have databases, right? Now databases are getting slower, caches are getting faster, it's all changing. So what are your thoughts on this next-level data equation in Kubernetes? Because stateless is cool, but now you've got state issues. >>Yeah, so we've always had huge needs for the data we store, and I think we have over half an exabyte of data available on premises, but we kind of have our own storage systems, which are external, and that's for the physics data, the raw data. One particularity that we had with our workloads until recently is that we call them embarrassingly parallel, in the sense that they don't really need very tight connectivity between the different workloads. So when people submit tens of thousands of jobs to do some analysis, those jobs are actually quite independent; they will produce a lot more data, but we can store it independently. Machine learning is posing a challenge in the sense that the training tends to be a lot more interconnected, so it can benefit from systems that we are not so familiar with. So for us it's maybe not so much the caching layers themselves, it's really understanding how our infrastructure needs to evolve on premises to support this kind of workload. We had some smallish, more high-performance-computing clusters with things like InfiniBand for low latency, but this is not the bulk of our workloads, and not what we are experts on these days. This is the transition we are making towards supporting these machine learning workloads. >>Just as a reference for the folks watching, you mentioned embarrassingly parallel, and that's a quote I read on your CERN tech blog. So if you go to techblog.web.cern.ch, or just search for the CERN tech blog, you'll see the post there, good stuff there. And in there you lay out a bunch of other things too, where you start to see deployment services and custom resource definitions being part of this. Is it going to get to the point where automation is a bigger part of the cluster management, setting stuff up quicker?
As you look at some of the innovations you're doing with machine learning and Kubernetes, databases, and the thousands of other things you're working on there, I know you've got a lot going on, and it's in the post, but we don't want the problem of it being so hard to stand up and manage; that is what people want to make simpler. How do you answer that when people say they want to make it easier? >> Yeah. So for us it's really automate everything. Up to now it has been automating the deployments in the Kubernetes clusters; right now we are looking at automating the Kubernetes clusters themselves. There are some really interesting projects here. People are used to using things like Terraform to manage the deployment of clusters, but there are projects like Crossplane, for example, that allow us to have the clusters themselves be resources within Kubernetes. This is something we are exploring quite a bit. It allows us to abstract the Kubernetes clusters themselves as Kubernetes resources, so there is this idea of having a central cluster that manages a much larger infrastructure. The GitOps part is really key for us too; it eases the transition for people who are already used to managing large-scale systems but are not necessarily experts on Kubernetes. They see that there's an easier path if they can be introduced slowly through the centralized configuration. >> You know, you mentioned Crossplane. I had someone on earlier, awesome guy, and I was smiling because I still have flashbacks from the Hadoop world, when the technology was so promising but it was just so hard to stand up and manage; you had to be a real expert to do that. You mentioned Crossplane, and this comes up in the whole operator notion of operating clusters, right? It comes back down to provisioning and managing the infrastructure, which we all know is key. But when you start getting into multi-cloud and multiple environments, that's where it becomes challenging. I like what they're doing. Is that something on your mind around hybrid and multi-cloud? Can you share your thoughts on that whole trajectory? >> Absolutely. I actually gave an internal seminar just last week describing what we've been playing with in this area, and I showed a demo of using Crossplane to manage clusters on premises but also clusters running on public clouds: AWS, Google Cloud, and Azure. That's really the goal there. There are many reasons we want to explore external resources. We are kind of used to this because we have a lot of sites around the world that collaborate with us, but specifically for public clouds there are some motivations. The first one is this idea that we have periodic load spikes: we know that around international conferences the number of analyses and job requests goes up quite a bit, so we need to be able to scale on demand for short periods instead of over-provisioning in house. The second one, coming back to machine learning, is this idea of accelerators. We have a lot of CPUs and a lot fewer GPUs, so it would be nice to go fishing for those in the public clouds.
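The "clusters as Kubernetes resources" idea described above can be illustrated with a short sketch against a central management cluster. This is not Crossplane's actual schema: the API group, kind, spec fields, kubeconfig context, and namespace are placeholders; real Crossplane providers define their own resource types per cloud.

```python
# Minimal sketch: declare a cluster as a custom resource in a central management
# cluster, in the spirit of the Crossplane approach described above. The group,
# version, kind, and spec fields are illustrative placeholders.
from kubernetes import client, config

def declare_cluster(name: str, cloud: str, region: str, nodes: int):
    config.load_kube_config(context="management-cluster")  # hypothetical context name
    body = {
        "apiVersion": "example.org/v1alpha1",  # placeholder API group/version
        "kind": "ManagedCluster",              # placeholder kind
        "metadata": {"name": name},
        "spec": {"cloud": cloud, "region": region, "nodeCount": nodes},
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="example.org",
        version="v1alpha1",
        namespace="fleet",
        plural="managedclusters",
        body=body,
    )

if __name__ == "__main__":
    # The same declarative call works whether the target runs on premises or in a public cloud.
    declare_cluster("analysis-burst-1", cloud="gcp", region="europe-west1", nodes=20)
```

The appeal of this pattern is exactly what the interview highlights: the cluster itself becomes an object a GitOps workflow can reconcile, rather than something provisioned out of band.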
And then there are other accelerators that are quite interesting, like TPUs and IPUs, that will definitely play a role and that we may never have on premises; we will only be able to use them externally. In that respect, coming back to your previous question, this idea of storage becomes quite important. So what we've been playing with is not only managing these external clusters centrally, but managing the whole infrastructure from a central place. That means making all the clusters, wherever they are, look very much the same, including the monitoring and the aggregation of that monitoring centrally. And then, as we talked about storage, there is this idea of having local storage that allows us to do really quick software distribution but also access to the data. >> What you guys are doing is, as we say, cool and relevant projects. You've got the large-scale deployments and the machine learning to really accelerate, which will drive a lot of adoption in terms of automation, and that kicks in once the foundational work is done. I see that clearly as the right trajectory. It reminds me, Ricardo, not to do a little history lesson here, but back when network protocols were moving from proprietary, SNA for IBM and DECnet for Digital, the OSI Open Systems Interconnection standard stack was evolving, and when TCP/IP came around it really opened up interoperability. Sam and I were talking about this kind of cross-cloud connection, or inter-clouding, as Lou Tucker and I discussed at OpenStack in 2013: inter-networking, interconnections. It's about integration and interoperability, and it's the next-gen conversation Kubernetes is having. As you scale up, which is happening very fast, and as machine learning handles data and enables modern applications, it's really about connecting networks and connecting systems together. This is a huge architectural innovation direction. Could you share your reaction to that? >> Yeah. So we are starting the easy way, I would say, with the workloads that are loosely coupled, where we don't necessarily need tight interconnectivity between the different deployments. That already gives us a lot, because the bulk of our workloads are this kind of batch, embarrassingly parallel work. We also do co-location: when we have large workloads that need close interconnectivity, we co-locate them in the same deployment, same cloud and region. I think what you describe, cross-cloud interconnectivity, will be a huge topic; it already is. We started investigating a lot of service mesh options to try to learn what we can gain from them. There is clearly a benefit for managing services, but there will definitely also be potential to let us more easily scale out across regions. We've seen this by using the public cloud.
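The "aggregate the monitoring centrally" point above is easy to picture with a small sketch against a single Prometheus server that collects metrics from every cluster, each tagged with a `cluster` label. The endpoint, label, and metric name are assumptions for illustration: `DCGM_FI_DEV_GPU_UTIL` is the per-GPU utilization metric exposed by NVIDIA's DCGM exporter, and your monitoring stack may expose something different.

```python
# Minimal sketch: ask a central Prometheus for average GPU utilization per cluster.
# The server URL, the "cluster" label, and the metric name are illustrative assumptions.
import requests

PROMETHEUS_URL = "http://prometheus.central.example:9090"  # hypothetical central server

def gpu_utilization_by_cluster() -> dict:
    query = "avg by (cluster) (DCGM_FI_DEV_GPU_UTIL)"
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": query}, timeout=10)
    resp.raise_for_status()
    series = resp.json()["data"]["result"]
    return {s["metric"].get("cluster", "unknown"): float(s["value"][1]) for s in series}

if __name__ == "__main__":
    for cluster, util in sorted(gpu_utilization_by_cluster().items()):
        print(f"{cluster:>20}: {util:5.1f}% GPU utilization")
```

A view like this is what makes the burst-to-public-cloud decision tractable: scarce accelerators on premises show up as sustained high utilization, while an external cluster with headroom shows up right next to it.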
Some things we found: for example, this idea of infinite capacity. It sometimes feels like that, even at the scale we have, for CPUs. But when you start using accelerators, you start negotiating, maybe using multiple regions because there's not enough capacity in a single region, and you start having to talk to the cloud providers to negotiate this. That makes the deployments more complicated, of course. So this interconnectivity between regions and clouds will be a big thing. >> And again, the low-hanging fruit is the existing market, but the vision I'm throwing out there is mainly what we're seeing, which is that the world is one distributed computer. If you have the standards, good things happen. Open systems, innovating in the open, really could make a big difference; it's going to be the difference between real value for a global society and ending up in a siloed world. I think the choice is the industry's, and CERN, the CNCF, the Linux Foundation, and all the companies investing in open are at a key inflection point right now. So congratulations, and thanks for coming on theCUBE. >> Yeah, appreciate it. Thank you. >> Okay, Ricardo Rocha, computing engineer at CERN, here in theCUBE's coverage of CNCF's KubeCon + CloudNativeCon Europe. I'm John Furrier, your host on theCUBE. Thanks for watching.

Published Date : May 5 2021

Chris Lynch, AtScale | CUBE Conversation, March 2021


 

>> Hello, and welcome to this CUBE Conversation. I'm John Furrier with theCUBE, here in Palo Alto, California, actually coming out of the pandemic this year; hopefully we'll be back to real life soon. It's March, almost April, spring 2021. I've got a great guest, Chris Lynch, executive chairman and CEO of AtScale, who took over at the helm of the company about two and a half years ago. Lots going on, Chris. Great to see you, remotely, in Boston; we're here in Palo Alto. Great to see you. >> Great to see you as well, and I hope to see you in person this spring. >> Yeah, I've got to say people are really missing real life, and I'm starting to see events coming back with vaccines out there, but there's a lot going on. Dave Vellante and I were just talking about how, when we first met you, the big data world was kicking ass and taking names. A lot has changed: Hadoop went the way it went; Vertica, which you led, did extremely well, was sold to HP, and continues to be a crown jewel for HPE. Now the world has changed in data, and with COVID more than ever you're starting to see more and more people really doubling down. You can see who the winners and losers are, you're starting to see the mega trends, and now you've got the edge and other things. So I want to get your take. AtScale took advantage of that pivot, and you've been in charge. Give us the update: what's the current strategy at AtScale? >> Sure. Well, when I took the company over about two and a half years ago, it was very focused on accelerating Hadoop instances. And as you mentioned earlier, Hadoop has sort of plateaued, but the ability to take that semantic layer and deliver it in the cloud is actually even more relevant with the advent of Snowflake and Databricks and the emergence of Google BigQuery and Azure as analytic platforms, in addition to Amazon, which obviously was the first mover in the space. So I would say that while people present big data as sort of a passing concept, I think it's been refined and matured, and companies are now digitizing their environments to take advantage of being able to deliver all of this big data in a way that gives them actionable insights, which I don't think was the case through the early stages of the development of big data concepts. >> Yeah, Chris, we've always followed your career. You've been a strong operator, but you also see things a little bit early, get on the wave, and help companies turn around; you've also gone public. A great career. I've got to ask you, because you can make sense of this for customers and make sure they see the value proposition: in this new world of the semantic layer, you mentioned Snowflake, Amazon, and cloud scale is huge. Why is the semantic layer important? What is it, and why is it important for customers? What are they really buying with this? >> Well, they're buying a few things. They're buying freedom and choice, because we're multi-cloud. They're buying the ability to evolve their environments, evolution versus revolution, when they think about how they move forward to the next generation of their enterprise architecture. And the reason you need the semantic layer, particularly in the cloud, is that we separate the source from the actual presentation of the data.
So we allow data to stay where it is, but we create one logical view. That was important for legacy data workloads, but it's even more important in a world of hybrid compute models and multi-vendor cloud models. So you get one source of truth, consistent access, secure access, and actionable insights as well. We deliver this with no code, and we allow you to turbocharge the stacks of Azure, Amazon Redshift, and Google BigQuery while being able to use the data you've created in your enterprise. So there's a demand for big data, and big data means being able to access all your data in one logical form: not pockets of data in the cloud or behind the firewall, constrained by vendor lock-in, but open access to all of the data to make the best decisions. >> So if I'm an enterprise and I'm used to on-premises data warehouses and data management, whether that's playing with Hadoop clusters or whatever, and I see Snowflake and I see the cloud scale, how do I get my teams modernized? Most companies actually have a hard time doing that; they've got to turn their existing IT into cloud powerhouses. That's what they want to do. So how do you get them there? What's the secret, in your opinion, to taking a team and a company that's used to doing it on premises to the cloud? >> Sure, it's a great question. As I mentioned before, it's the difference between evolution and revolution. Today, without AtScale, doing what you're suggesting is a revolution, and it's very difficult to perform heart surgery on the patient while he's running the Boston Marathon. That's the analogy I would give you for trying to digitize your environment without this semantic layer, which allows you to first create a logical layer, placing this information in a logical mapping so that you can gradually move data to the appropriate place. Without us, you're asked to go from one spot to another and do that while you're running your business. That's what discourages companies, or creates tremendous risk, when digitizing their environment or moving to cloud. They have to be able to do it in a way that's non-disruptive to their business and seamless with respect to their current workflows. >> Chris, I've got to ask you, and I know you're probably not expecting this question, but most people don't know that you're also an investor; before you were a CEO you were an angel investor as well. You did an angel investment in a company called DataRobot, which has had a good outcome. So you've seen the wave, you've seen how things progress, and you mentioned Snowflake earlier. As you look at how those kinds of deals have evolved, and as you see this acceleration with data science, what's your take? Because those companies you invested in have become successful or been acquired, and now you're operating AtScale as a company and have to steer it in the right direction. Where are you taking this thing? >> Sure, it's a great question. With respect to AI and ML and the investment I made almost ten years ago in DataRobot, I believed then, and I believe now more than ever, that AI is going to be the next step function in industrial productivity. And I think it's going to change the composition of our lives.
And I think I've been around long enough to have seen the web commercialized and the internet, and the impact that has had on the world. I think that impact pales in comparison to what AI, the application of AI to all walks of life, is going to do. I think that within the next 24 months, companies that don't have an AI strategy will be shorted on Wall Street. Every vertical, every function in the marketplace is going to be impacted by AI, and we're just seeing the infancy of mass adoption and application. When it comes to AtScale, I think we're going to be right in the middle of that. We're about the democratization of those AI and machine learning models. One of the interesting things we've developed is this MLOps product, where, working with your current BI tool, we're able to take machine learning models, feed the legacy BI data into those models, producing better, more accurate, and more precise models, and then republish that data back out to the BI tool of your choice, whether it be Tableau, Microsoft Power BI, or Excel; we don't care.
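The score-and-republish loop described above can be sketched in a few lines. This is not AtScale's product code; the file paths, column names, and model artifact are hypothetical placeholders, and it assumes a classifier trained elsewhere and saved with joblib.

```python
# Minimal sketch: enrich a BI extract with model predictions and publish the
# result back as a table the BI tool can read. Paths, columns, and the model
# artifact are hypothetical placeholders.
import joblib
import pandas as pd

def score_and_republish(extract_path="bi_extract.csv",
                        model_path="churn_model.joblib",
                        output_path="bi_extract_scored.csv"):
    df = pd.read_csv(extract_path)               # data exported from the BI layer
    model = joblib.load(model_path)              # previously trained ML model
    features = df.drop(columns=["customer_id"])  # assume an ID column plus feature columns
    df["churn_score"] = model.predict_proba(features)[:, 1]
    df.to_csv(output_path, index=False)          # Tableau / Power BI / Excel can read this back
    return output_path

if __name__ == "__main__":
    print("published:", score_and_republish())
```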
You're gonna start to see pattern of people doubling down on certain projects. Um, at scales, a company has a new trajectory for folks that kind of new the old company, or might not have the update. What is at scale all about what are what's the bumper sticker? What's the value proposition what's working that you're doubling down on. >>We want to deliver advanced multi-dimensional analytics to customers in the cloud. And we want to do that by delivering, not compromising on the complexity of analytics, um, and to do that, you have to deliver it, um, in a seamless and easy to use way. And we figure out a way to do that by delivering it through the applications that they know and love today, whether it be their Salesforce or QuickBooks or you name, the SAS picked that application, we're going to turbocharge them with big data and machine learning in a way that's going to enhance their operations without, uh, increase the complexity. So it's about delivering analytics in a way that customers can absorb big customers and small customers alike. >>While I got you here, one final final question, because you're such an expert at turnarounds, as well as growing companies that have a growth opportunity. There's three classes of companies that we see emerging from this new cloud scale model where data's involved or whatever new things out there, but mainly data and cloud scale. One is use companies that are either rejuvenating their business model or pivoting. Okay. So they're looking at cost optimization, things of that nature, uh, class number two innovation strategy, where they're using technology and data to build new use cases or changed existing use cases for kind of new capabilities and finally pioneers, pioneering new net, new paradigms or categories. So each one has its own kind of profile. All, all are winning with data as a former investor and now angel investor and someone who's seen turnarounds and growing companies that are on the innovation wave. What's your takeaway from this because it's pretty miraculous. If you think about what could happen in each one of those cases, there's an opportunity for all three categories with cloud and data. What's your personal take on that? >>So I think if you look at, um, ways we've seen in the past, you know, particularly the, you know, the internet, it created a level of disruption that croup that delivered basically a renewed, um, playing field so that the winners and losers really could be reset and be based on their ability to absorb and leverage the new technology. I think the same as an AI and ML. So I think it creates an opportunity for businesses that were laggerts to catch, operate, or even supersede the competitors. Um, I think it has that kind of an impact. So from my, my view, you're going to see as big data and analytics and artificial intelligence, you know, mature and coalesce, um, vertical integration. So you're going to see companies that are full stack businesses that are delivered through AI and cloud, um, that are completely new and created or read juvenile based on leveraging these new fundamentals. >>So I think you're going to see a set of new businesses and business models that are created by this ubiquitous access to analytics and data. And you're going to see some laggerts catch up that you're going to see some of the people that say, Hey, if it isn't broke, don't fix it. And they're going to go by the wayside and it's going to happen very, very quickly. 
When we started this business, John, the cycle of innovation was five it's now, you know, under a year, maybe, maybe even five months. So it's like the difference between college for some professional sports, same football game, the speed of the game is completely different. And the speed of the game is accelerating. >>That's why the startup actions hot, and that's why startups are going from zero to 60, if you will, uh, very quickly, um, highly accelerated great stuff. Chris Lynch veteran the industry executive chairman CEO of scale here on the cube conversation with John furrier, the host. Thank you for watching Chris. Great to see you. Thanks for coming on. >>Great to see you, John, take care. Hope to see you soon. >>Okay. Let's keep conversation. Thanks for watching.

Published Date : Mar 24 2021


Pradeep Sindhu, Fungible | theCUBE on Cloud 2021


 

>> From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather, it's the combination of data, applying machine intelligence, and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of the last decade we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you. >> Thank you, Dave, and thank you for having me. >> You're very welcome. So, okay, my first question is: don't CPUs and GPUs process data already? Why do we need a DPU?
So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating point arithmetic operations much, much better than CPL. Maybe 2030 40 times better. Well, GPS have now been around for probably 15, 20 years, mostly addressing graphics computations. But recently, in the last decade or so, they have been used heavily for AI and analytics computations. So now the question is, why do you need another specialized engine called the DPU? Well, I started down this journey about almost eight years ago, and I recognize I was still at Juniper Networks, which is another company that I found it. I recognize that in the data center, um, as the workload changes due to addressing Mawr and Mawr, larger and larger corpus is of data number one. And as people use scale out as the standard technique for building applications, what happens is that the amount of East West traffic increases greatly. And what happens is that you now have a new type off workload which is coming, and today probably 30% off the workload in a data center is what we call data centric. I want to give you some examples of what is the data centric E? >>Well, I wonder if I could interrupt you for a second, because Because I want you to. I want those examples, and I want you to tie it into the cloud because that's kind of the topic that we're talking about today and how you see that evolving. It's a key question that we're trying to answer in this program. Of course, Early Cloud was about infrastructure, a little compute storage, networking. And now we have to get to your point all this data in the cloud and we're seeing, by the way, the definition of cloud expand into this distributed or I think the term you use is disaggregated network of computers. So you're a technology visionary, And I wonder, you know how you see that evolving and then please work in your examples of that critical workload that data centric workload >>absolutely happy to do that. So, you know, if you look at the architectural off cloud data centers, um, the single most important invention was scale out scale out off identical or near identical servers, all connected to a standard i p Internet network. That's that's the architectural. Now, the building blocks of this architecture er is, uh, Internet switches, which make up the network i p Internet switches. And then the servers all built using general purpose X 86 CPUs with D ram with SSD with hard drives all connected, uh, inside the CPU. Now, the fact that you scale these, uh, server nodes as they're called out, um, was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose computer? But this architectures, Dave, is it compute centric architectures and the reason it's a compute centric architectures. If you open this a server node, what you see is a connection to the network, typically with a simple network interface card. And then you have CP use, which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the aisle workload, what we call data centric workload. And so when you connect SSD and hard drives and GPU that everything to the CPU, um, as well as to the network, you can now imagine that the CPUs is doing to functions it z running the applications, but it's also playing traffic cop for the I O. 
So every Io has to go to the CPU and you're executing instructions typically in the operating system, and you're interrupting the CPU many, many millions of times a second now. General Purpose CPUs and the architecture of the CPS was never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's. It's critical that in this new architecture, where there's a lot of data, a lot of East West traffic, the percentage of work clothes, which is data centric, has gone from maybe 1 to 2% to 30 to 40%. I'll give you some numbers, which are absolutely stunning if you go back to, say, 1987 and which is, which is the year in which I bought my first personal computer. Um, the network was some 30 times slower. Then the CPI. The CPI was running at 50 megahertz. The network was running at three megabits per second. Well, today the network runs at 100 gigabits per second and the CPU clock speed off. A single core is about 3 to 2.3 gigahertz. So you've seen that there is a 600 x change in the ratio off I'll to compute just the raw clock speed. Now you can tell me that. Hey, um, typical CPUs have lots of lots, of course, but even when you factor that in, there's bean close toe two orders of magnitude change in the amount of ill to compute. There is no way toe address that without changing the architectures on this is where the DPU comes in on the DPU actually solves two fundamental problems in cloud data centers on these air. Fundamental. There's no escaping it, no amount off. Clever marketing is going to get around these problems. Problem number one is that in a compute centric cloud architectures the interactions between server notes are very inefficient. Okay, that's number one problem number one. Problem number two is that these data center computations and I'll give you those four examples the network stack, the storage stack, the virtualization stack and the security stack. Those four examples are executed very inefficiently by CBS. Needless to say that that if you try to execute these on GPS, you'll run into the same problem, probably even worse because GPS are not good at executing these data centric computations. So when U. S o What we were looking to do it fungible is to solve these two basic problems and you don't solve them by by just using taking older architectures off the shelf and applying them to these problems because this is what people have been doing for the for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from ground doctor is a clean sheet design and it solve those two problems. Fundamental. >>So I want to get into that. But I just want to stop you for a second and just ask you a basic question, which is so if I understand it correctly, if I just took the traditional scale out, If I scale out compute and storage, you're saying I'm gonna hit a diminishing returns, It z Not only is it not going to scale linear linearly, I'm gonna get inefficiencies. And that's really the problem that you're solving. Is that correct? >>That is correct. And you know this problem uh, the workloads that we have today are very data heavy. You take a I, for example, you take analytics, for example. It's well known that for a I training, the larger the corpus of data relevant data that you're training on, the better the result. 
So you can imagine where this is going to go, especially when people have figured out a formula that, hey, the more data I collect, I can use those insights to make money. >>Yeah, this is why this is why I wanted to talk to you, because the last 10 years we've been collecting all this data. Now I want to bring in some other data that you actually shared with me beforehand. Some market trends that you guys cited in your research and the first thing people said is they want to improve their infrastructure on. They want to do that by moving to the cloud, and they also there was a security angle there as well. That's a whole nother topic. We could discuss the other staff that jumped out at me. There's 80% of the customers that you surveyed said they'll be augmenting their X 86 CPUs with alternative processing technology. So that's sort of, you know, I know it's self serving, but z right on the conversation we're having. So I >>want to >>understand the architecture. Er, aan den, how you've approached this, You've you've said you've clearly laid out the X 86 is not going to solve this problem. And even GP use are not going to solve this problem. So help us understand the architecture and how you do solve this problem. >>I'll be I'll be very happy to remember I use this term traffic cough. Andi, I use this term very specifically because, uh, first let me define what I mean by a data centric computation because that's the essence off the problem resolved. Remember, I said two problems. One is we execute data centric work clothes, at least in order of magnitude, more efficiently than CPUs or GPS, probably 30 times more efficiently on. The second thing is that we allow notes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first, let's look at the data centric piece, the data centric piece, um, for for workload to qualify as being data centric. Four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything new. Secondly, uh, this workload is heavily multiplex in that there are many, many, many computations that are happening concurrently. Thousands of them. Yeah, that's number two. So a lot of multiplexing number three is that this workload is state fel. In other words, you have to you can't process back. It's out of order. You have to do them in order because you're terminating network sessions on the last one Is that when you look at the actual computation, the ratio off I Oto arithmetic is medium to high. When you put all four of them together, you actually have a data centric workout, right? And this workload is terrible for general purpose, C p s not only the general purpose, C p is not executed properly. The application that is running on the CPU also suffers because data center workloads are interfering workloads. So unless you designed specifically to them, you're going to be in trouble. So what did we do? Well, what we did was our architecture consists off very, very heavily multi threaded, general purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some some of those accelerators, um, de Emma accelerators, then radio coding accelerators, compression accelerators, crypto accelerators, um, compression accelerators, thes air, just something. And then look up accelerators. These air functions that if you do not specialized, you're not going to execute them efficiently. 
But you cannot just put accelerators in there. These accelerators have to be multi threaded to handle. You know, we have something like 1000 different threads inside our DPU toe address. These many, many, many computations that are happening concurrently but handle them efficiently. Now, the thing that that is very important to understand is that given the paucity off transistors, I know that we have hundreds of billions of transistors on a chip. But the problem is that those transistors are used very inefficiently today. If the architecture, the architecture of the CPU or GPU, what we have done is we've improved the efficiency of those transistors by 30 times. Yeah, so you can use >>the real estate. You can use their real estate more effectively, >>much more effectively because we were not trying to solve a general purpose computing problem. Because if you do that, you know, we're gonna end up in the same bucket where General Focus CPS are today. We were trying to solve the specific problem off data centric computations on off improving the note to note efficiency. So let me go to Point number two, because that's equally important, because in a scale out architecture, the whole idea is that I have many, many notes and they're connected over a high performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. Question is why? Well, the reason is that if I tried to run them faster than that, you start to get back. It drops because there are some fundamental problems caused by congestion on the network, which are unsolved as we speak today. There only one solution, which is to use DCP well. DCP is a well known is part of the D. C. P I. P. Suite. DCP was never designed to handle the agencies and speeds inside data center. It's a wonderful protocol, but it was invented 42 year 43 years ago, now >>very reliable and tested and proven. It's got a good track record, but you're a >>very good track record, unfortunately, eats a lot off CPU cycles. So if you take the idea behind TCP and you say, Okay, what's the essence of TCP? How would you apply to the data center? That's what we've done with what we call F C P, which is a fabric control protocol which we intend toe open way. Intend to publish standards on make it open. And when you do that and you you embed F c p in hardware on top of his standard I P Internet network, you end up with the ability to run at very large scale networks where the utilization of the network is 90 to 95% not 20 to 25% on you end up with solving problems of congestion at the same time. Now, why is this important today that zall geek speak so far? But the reason this stuff is important is that it such a network allows you to disaggregate pool and then virtualized, the most important and expensive resource is in the data center. What are those? It's computer on one side, storage on the other side. And increasingly even things like the Ram wants to be disaggregated in food. Well, if I put everything inside a general purpose server, the problem is that those resource is get stranded because they're they're stuck behind the CPI. Well, once you disaggregate those resources and we're saying hyper disaggregate, the meaning, the hyper and the hyper disaggregate simply means that you can disaggregate almost all the resources >>and then you're gonna re aggregate them, right? I mean, that's >>obviously exactly and the network is the key helping. 
So the reason the company is called fungible is because we are able to disaggregate virtualized and then pull those resources and you can get, you know, four uh, eso scale out cos you know the large aws Google, etcetera. They have been doing this aggregation and pulling for some time, but because they've been using a compute centric architecture, er that this aggregation is not nearly as efficient as we could make on their off by about a factor of three. When you look at enterprise companies, they're off by any other factor of four. Because the utilization of enterprises typically around 8% off overall infrastructure, the utilization the cloud for A W S and G, C, P and Microsoft is closer to 35 to 40%. So there is a factor off almost, uh, 4 to 8, which you can gain by disaggregated and pulling. >>Okay, so I wanna interrupt again. So thes hyper scaler zehr smart. A lot of engineers and we've seen them. Yeah, you're right. They're using ah, lot of general purpose. But we've seen them, uh, move Make moves toward GP use and and embrace things like arm eso I know, I know you can't name names but you would think that this is with all the data that's in the cloud again Our topic today you would think the hyper scaler zehr all over this >>all the hyper scale is recognized it that the problems that we have articulated are important ones on they're trying to solve them. Uh, with the resource is that they have on all the clever people that they have. So these air recognized problems. However, please note that each of these hyper scale er's has their own legacy now they've been around for 10, 15 years, and so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some >>point. Have technical debt. You mean they >>have? I'm not going to say they have technical debt, but they have a certain way of doing things on. They are in love with the compute centric way of doing things. And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you heard the term smart neck, and all your listeners must have heard that term. Well, a smart thing is not a deep you what a smart Nick is. It's simply taking general purpose arm cores put in the network interface on a PC interface and integrating them all in the same chip and separating them from the CPI. So this does solve the problem. It solves the problem off the data centric workload, interfering with the application work, work. Good job. But it does not address the architectural problem. How to execute data centric workloads efficiently. >>Yeah, it reminds me. It reminds me of you I I understand what you're saying. I was gonna ask you about smart. Next. It does. It's almost like a bridge or a Band Aid. It's always reminds me of >>funny >>of throwing, you know, a flash storage on Ah, a disc system that was designed for spinning disk gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >>Yeah, this analogy is close because, you know. Okay, so let's let's take hyper scaler X. Okay, one name names. Um, you find that, you know, half my CPUs are twiddling their thumbs because they're executing this data centric workload. Well, what are you going to do? All your code is written in, uh, C c plus plus, um, on x 86. Well, the easiest thing to do is to separate the cores that run this workload. 
Put it on a different Let's say we use arm simply because you know x 86 licenses are not available to people to build their own CPUs. So arm was available, so they put a bunch of encores. Let's stick a PC. I express and network interface on you. Port that quote from X 86 Tow arm. Not difficult to do, but it does yield you results on, By the way, if, for example, um, this hyper scaler X shall we call them if they're able to remove 20% of the workload from general purpose CPUs? That's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation other than toe for quote from one place to another place. >>That's what that's what. But that's what I'm saying. I mean, I would think again. The hyper scale is why Why can't they just, you know, do some work and do some engineering and and then give you a call and say, Okay, we're gonna We're gonna attack these workloads together. You know, that's similar to how they brought in GP use. And you're right. It's it's worth billions of dollars. You could see when when the hyper scale is Microsoft and and Azure, uh, and and AWS both announced, I think they depreciated servers now instead of four years. It's five years, and it dropped, like a billion dollars to their bottom line. But why not just work directly with you guys. I mean, Z the logical play. >>Some of them are working with us. So it's not to say that they're not working with us. So you know, all of the hyper scale is they recognize that the technology that we're building is a fundamental that we have something really special, and moreover, it's fully programmable. So you know, the whole trick is you can actually build a lump of hardware that is fixed function. But the difficulty is that in the place where the DPU would sit, which is on the boundary off a server, and the network is literally on that boundary, that place the functionality needs to be programmable. And so the whole trick is how do you come up with an architectural where the functionality is programmable? But it is also very high speed for this particular set of applications. So the analogy with GPS is nearly perfect because GP use, and particularly in video that's implemented or they invented coulda, which is a programming language for GPS on it made them easy to use mirror fully programmable without compromising performance. Well, this is what we're doing with DP use. We've invented a new architectures. We've made them very easy to program. And they're these workloads or not, Workload. The computation that I talked about, which is security virtualization storage and then network. Those four are quintessential examples off data centric, foreclosed on. They're not going away. In fact, they're becoming more and more and more important over time. >>I'm very excited for you guys, I think, and really appreciate deep we're gonna have you back because I really want to get into some of the secret sauce you talked about these accelerators, Erasure coding, crypto accelerators. I want to understand that. I know there's envy me in here. There's a lot of hardware and software and intellectual property, but we're seeing this notion of programmable infrastructure extending now, uh, into this domain, this build out of this I like this term dis aggregated, massive disaggregated network s so hyper disaggregated. Even better. And I would say this on way. I gotta go. But what got us here the last decade is not the same is what's gonna take us through the next decade. Pretty Thanks. 
Thanks so much for coming on the cube. It's a great company. >>You have it It's really a pleasure to speak with you and get the message of fungible out there. >>E promise. Well, I promise we'll have you back and keep it right there. Everybody, we got more great content coming your way on the Cube on Cloud, This is David. Won't stay right there.

Published Date : Jan 22 2021


Pradeep Sindhu CLEAN


 

>> As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather it's the combination of data, applying machine intelligence and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and more recently machine learning. But in the middle of last decade we saw the early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you. >> Thank-you, Dave and thank-you for having me. >> You're very welcome. So okay, my first question is, don't CPUs and GPUs process data already? Why do we need a DPU? >> That is a natural question to ask. CPUs have been around in one form or another for almost 55, maybe 60 years. And this is when general purpose computing was invented, and essentially all CPUs went to the x86 architecture by and large; other architectures are of course used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of a general purpose CPU has been refined heavily by some of the smartest people on the planet. And for the longest time, improvements, you refer to Moore's law, which is really the improvement of the price performance of silicon over time, that combined with architectural improvements was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much, you're not going to squeeze more blood out of that stone from the general purpose computer architecture. What has also happened over the last decade is that Moore's law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10, 20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from two, two and a half years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well-recognized. And we have to understand that these limits apply not just to general purpose CPUs but they also apply to GPUs. Now, general purpose CPUs do one kind of computation, they're really general and they can do lots and lots of different things. It is actually a very, very powerful engine. But the problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of a processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than a CPU, maybe 20, 30, 40 times better.
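As a rough illustration of the slowdown described above, the sketch below compounds per-generation silicon gains over a decade. The cadences and percentages are the ones quoted in the conversation; everything else is illustrative, not vendor data.

```python
# Rough back-of-the-envelope comparison of the old and new silicon cadences
# described above. The figures are illustrative, not vendor data.

def cumulative_gain(per_generation_gain: float, years_per_generation: float,
                    horizon_years: float = 10.0) -> float:
    """Compound per-generation improvement over a time horizon."""
    generations = horizon_years / years_per_generation
    return per_generation_gain ** generations

# Classic Moore's law era: ~2x every 18 months.
classic = cumulative_gain(2.0, 1.5)

# The slowdown described above: ~10-20% per generation, every 3-4 years.
slowed_low  = cumulative_gain(1.10, 4.0)
slowed_high = cumulative_gain(1.20, 3.0)

print(f"~{classic:.0f}x over a decade at the classic cadence")          # roughly 100x
print(f"~{slowed_low:.1f}x to ~{slowed_high:.1f}x at today's cadence")  # roughly 1.3x to 1.8x
```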
Well, GPUs have now been around for probably 15, 20 years mostly addressing graphics computations, but recently in the last decade or so they have been used heavily for AI and analytics computations. So now the question is, well, why do you need another specialized engine called the DPU? Well, I started down this journey about almost eight years ago and I recognize I was still at Juniper Networks which is another company that I founded. I recognize that in the data center as the workload changes to addressing more and more, larger and larger corpuses of data, number one and as people use scale-out as these standard technique for building applications, what happens is that the amount of east-west traffic increases greatly. And what happens is that you now have a new type of workload which is coming. And today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what is a data-centric workload. >> Well, I wonder if I could interrupt you for a second. >> Of course. >> Because I want those examples and I want you to tie it into the cloud 'cause that's kind of the topic that we're talking about today and how you see that evolving. I mean, it's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure, little compute, little storage, little networking and now we have to get to your point all this data in the cloud. And we're seeing, by the way the definition of cloud expand into this distributed or I think a term you use is disaggregated network of computers. So you're a technology visionary and I wonder how you see that evolving and then please work in your examples of that critical workload, that data-centric workload. >> Absolutely happy to do that. So if you look at the architecture of our cloud data centers the single most important invention was scale-out of identical or near identical servers all connected to a standard IP ethernet network. That's the architecture. Now, the building blocks of this architecture is ethernet switches which make up the network, IP ethernet switches. And then the server is all built using general purpose x86 CPUs with DRAM, with SSD, with hard drives all connected to inside the CPU. Now, the fact that you scale these server nodes as they're called out was very, very important in addressing the problem of how do you build very large scale infrastructure using general purpose compute. But this architecture did is it compute centric architecture and the reason it's a compute centric architecture is if you open this server node what you see is a connection to the network typically with a simple network interface card. And then you have CPUs which are in the middle of the action. Not only are the CPUs processing the application workload but they're processing all of the IO workload, what we call data-centric workload. And so when you connect SSDs, and hard drives, and GPUs, and everything to the CPU, as well as to the network you can now imagine the CPUs is doing two functions. It's running the applications but it's also playing traffic cop for the IO. So every IO has to go through the CPU and you're executing instructions typically in the operating system and you're interrupting the CPU many, many millions of times a second. Now, general purpose CPUs and the architecture CPUs was never designed to play traffic cop because the traffic cop function is a function that requires you to be interrupted very, very frequently. 
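To put a number on the "traffic cop" overhead described above, here is a back-of-the-envelope sketch. The packet size and per-packet cycle cost are assumptions made for the example, not measurements from any real system.

```python
# Illustrative estimate of the "traffic cop" cost described above: how much of a
# CPU core per-packet I/O handling can consume. All inputs are assumptions for
# the sake of the sketch, not measurements from any specific system.

line_rate_bps   = 100e9     # 100 Gb/s network interface
avg_packet_bits = 1000 * 8  # assume ~1000-byte average packets
cycles_per_pkt  = 1500      # assumed kernel/interrupt cost per packet
core_hz         = 2.3e9     # single core at 2.3 GHz, as quoted in the conversation

packets_per_sec = line_rate_bps / avg_packet_bits
cycles_needed   = packets_per_sec * cycles_per_pkt
cores_consumed  = cycles_needed / core_hz

print(f"{packets_per_sec/1e6:.1f} M packets/s")
print(f"~{cores_consumed:.1f} cores just to shepherd the I/O")
```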
So it's critical that in this new architecture, where there's a lot of data and a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe one to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to say 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 15 megahertz, the network was running at three megabits per second. Today the network runs at 100 gigabits per second and the CPU clock speed of a single core is about 2.3 gigahertz. So you've seen that there's a 600X change in the ratio of IO to compute, just on the raw clock speed. Now, you can tell me that, hey, typical CPUs have lots and lots of cores, but even when you factor that in there's been close to two orders of magnitude change in the amount of IO to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. And the DPU actually solves two fundamental problems in cloud data centers. And these are fundamental, there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute centric cloud architecture the interactions between server nodes are very inefficient. That's number one, problem number one. Problem number two is that these data-centric computations, and I'll give you those four examples, the network stack, the storage stack, the virtualization stack, and the security stack, those four examples are executed very inefficiently by CPUs. Needless to say that if you try to execute these on GPUs you will run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems. And you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was we created this new microprocessor that we call the DPU from the ground up. It's a clean sheet design and it solves those two problems fundamentally. >> So I want to get into that. And I just want to stop you for a second and just ask you a basic question, which is, if I understand it correctly, if I just took the traditional scale out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct. And the workloads that we have today are very data-heavy. You take AI for example, you take analytics for example, it's well known that for AI training the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go. >> Right. >> Especially when people have figured out a formula that, hey, the more data I collect I can use those insights to make money- >> Yeah, this is why I wanted to talk to you because the last 10 years we've been collecting all this data. Now, I want to bring in some other data that you actually shared with me beforehand. Some market trends that you guys cited in your research. And the first thing people said is they want to improve their infrastructure and they want to do that by moving to the cloud. And they also, there was a security angle there as well.
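The ratio arithmetic above can be reproduced directly. This is a single-core view using the figures quoted in the passage; the 600X figure in the conversation depends on additional assumptions such as core counts and effective throughput, so treat the multiplier below as an order-of-magnitude illustration rather than an exact reproduction.

```python
# Reproducing the network-vs-CPU ratio arithmetic from the passage above,
# using the figures quoted there (1987: 15 MHz CPU, 3 Mb/s network;
# today: ~2.3 GHz per core, 100 Gb/s). The point is the order-of-magnitude
# shift, not the exact multiplier.

cpu_1987_hz, net_1987_bps = 15e6, 3e6
cpu_now_hz,  net_now_bps  = 2.3e9, 100e9

ratio_1987 = net_1987_bps / cpu_1987_hz   # bits of I/O per CPU cycle, 1987
ratio_now  = net_now_bps / cpu_now_hz     # bits of I/O per CPU cycle, today

print(f"1987: {ratio_1987:.2f} network bits per CPU cycle")
print(f"now : {ratio_now:.2f} network bits per CPU cycle")
print(f"shift: ~{ratio_now / ratio_1987:.0f}x more I/O per cycle of compute")
```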
That's a whole other topic we could discuss. The other stat that jumped out at me, 80% of the customers that you surveyed said they'll be augmenting their x86 CPUs with alternative processing technology. So that's sort of, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture. >> Sure. >> And how you've approached this. You've clearly laid out this x86 is not going to solve this problem. And even GPUs are not going to solve the problem. >> They're not going to solve the problem. >> So help us understand the architecture and how you do solve this problem. >> I'll be very happy to. Remember I use this term traffic cop. I use this term very specifically because, first let me define what I mean by a data-centric computation because that's the essence of the problem we're solving. Remember I said two problems. One is we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently. And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads so I'm not saying anything. Secondly, this workload is heavily multiplexed in that there are many, many, many computations that are happening concurrently, thousands of them, okay? That's number two. So a lot of multiplexing. Number three is that this workload is stateful. In other words you can't process packets out of order. You have to do them in order because you're terminating network sessions. And the last one is that when you look at the actual computation the ratio of IO to arithmetic is medium to high. When you put all four of them together you actually have a data-centric workload, right? And this workload is terrible for general purpose CPUs. Not only does the general purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them you're going to be in trouble. So what did we do? Well, what we did was our architecture consists of very, very heavily multi-threaded general purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some of those accelerators, DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, these are just some, and then lookup accelerators. These are functions that if you do not specialize you're not going to execute them efficiently. But you cannot just put accelerators in there, these accelerators have to be multi-threaded to handle it. We have something like 1,000 different threads inside our DPU to address these many, many, many computations that are happening concurrently, but handle them efficiently. Now, the thing that is very important to understand is that even though we have hundreds of billions of transistors on a chip, the problem is that those transistors are used very inefficiently today in the architecture of a CPU or a GPU. What we have done is we've improved the efficiency of those transistors by 30 times, okay?
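The four-part test for a data-centric workload can be written down as a simple checklist. The sketch below is purely illustrative; the field names and thresholds are assumptions made for the example and are not Fungible's actual classification logic.

```python
# A purely illustrative encoding of the four-part "data-centric workload" test
# described above. The field names and thresholds are assumptions for the
# sketch; they are not Fungible's actual classification logic.

from dataclasses import dataclass

@dataclass
class Workload:
    arrives_as_packets: bool       # 1. work arrives over the network as packets
    concurrent_flows: int          # 2. heavily multiplexed (thousands of contexts)
    stateful_in_order: bool        # 3. packets must be processed in order per session
    io_to_arithmetic_ratio: float  # 4. ratio of I/O to arithmetic (medium to high)

def is_data_centric(w: Workload) -> bool:
    return (w.arrives_as_packets
            and w.concurrent_flows >= 1000
            and w.stateful_in_order
            and w.io_to_arithmetic_ratio >= 0.5)

storage_stack = Workload(True, 10_000, True, 0.8)
dense_matmul  = Workload(False, 8, False, 0.05)

print(is_data_centric(storage_stack))  # True  -> candidate for a DPU
print(is_data_centric(dense_matmul))   # False -> better left on a CPU or GPU
```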
>> So you can use the real estate much more effectively? >> Much more effectively because we were not trying to solve a general purpose computing problem. Because if you do that we're going to end up in the same bucket where general purpose CPUs are today. We were trying to solve a specific problem of data-centric computations and of improving the node to node efficiency. So let me go to point number two because that's equally important. Because in a scale-out architecture the whole idea is that I have many, many nodes and they're connected over a high performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. Question is why? Well, the reason is that if I tried to run them faster than that you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today. There is only one solution, which is to use TCP. Well, TCP is a well-known protocol, it's part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol but it was invented 43 years ago now. >> Yeah, very reliable and tested and proven. It's got a good track record but you're right. >> Very good track record, unfortunately it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP? How would you apply it to the data center? That's what we've done with what we call FCP, which is a fabric control protocol, which we intend to open. We intend to publish the standards and make it open. And when you do that and you embed FCP in hardware on top of this standard IP ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%. >> Wow, okay. >> And you end up solving problems of congestion at the same time. Now, why is this important today? That's all geek speak so far. The reason this stuff is important is that such a network allows you to disaggregate, pool and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side. And increasingly even things like DRAM want to be disaggregated. Well, if I put everything inside a general purpose server the problem is that those resources get stranded because they're stuck behind a CPU. Well, once you disaggregate those resources, and we're saying hyper disaggregate, meaning that you can disaggregate almost all the resources. >> And then you're going to reaggregate them, right? I mean, that's obviously- >> Exactly, and the network is the key in helping do that. >> Okay. >> So the reason the company is called Fungible is because we are able to disaggregate, virtualize and then pool those resources. And the scale-out companies, the large AWS, Google, et cetera, they have been doing this disaggregation and pooling for some time, but because they've been using a compute centric architecture their disaggregation is not nearly as efficient as we can make it. And they're off by about a factor of three. When you look at enterprise companies they are off by another factor of four, because the utilization of enterprise is typically around 8% of overall infrastructure. The utilization in the cloud for AWS, and GCP, and Microsoft is closer to 35 to 40%.
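A quick back-of-the-envelope view of the utilization argument: the same demand served at higher utilization strands proportionally fewer resources. The 8% and 35-40% figures come from the conversation above; the "disaggregated" figure and the demand units are hypothetical, included only for comparison.

```python
# Back-of-the-envelope view of the utilization argument above: the same amount
# of useful work delivered at higher utilization needs proportionally fewer
# stranded resources. The 8% and 35-40% figures are quoted in the conversation;
# everything else is illustrative.

def servers_needed(total_demand_units: float, capacity_per_server: float,
                   utilization: float) -> float:
    """How many servers it takes to serve the demand at a given utilization."""
    return total_demand_units / (capacity_per_server * utilization)

demand, per_server = 1_000.0, 10.0   # arbitrary units

enterprise = servers_needed(demand, per_server, 0.08)   # ~8% utilization
cloud      = servers_needed(demand, per_server, 0.375)  # ~35-40% utilization
pooled     = servers_needed(demand, per_server, 0.75)   # hypothetical disaggregated pool

print(f"enterprise today : {enterprise:.0f} servers")
print(f"hyperscale cloud : {cloud:.0f} servers (~{enterprise/cloud:.1f}x better)")
print(f"disaggregated    : {pooled:.0f} servers (~{enterprise/pooled:.1f}x better)")
```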
So there is a factor of almost four to eight which you can gain by dis-aggregating and pulling. >> Okay, so I want to interrupt you again. So these hyperscalers are smart. They have a lot of engineers and we've seen them. Yeah, you're right they're using a lot of general purpose but we've seen them make moves toward GPUs and embrace things like Arm. So I know you can't name names, but you would think that this is with all the data that's in the cloud, again, our topic today. You would think the hyperscalers are all over this. >> Well, the hyperscalers recognized here that the problems that we have articulated are important ones and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years. And so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean? (laughs) >> I'm not going to say they have technical debt, but they have a certain way of doing things and they are in love with the compute centric way of doing things. And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term SmartNIC. >> Yeah, right. >> Or your listeners must've heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general purpose ARM cores, putting the network interface and a PCI interface and integrating them all on the same chip and separating them from the CPU. So this does solve a problem. It solves the problem of the data center workload interfering with the application workload, good job, but it does not address the architectural problem of how to execute data center workloads efficiently. >> Yeah, so it reminds me of, I understand what you're saying I was going to ask you about SmartNICs. It's almost like a bridge or a band-aid. >> Band-aid? >> It almost reminds me of throwing a high flash storage on a disc system that was designed for spinning disc. Gave you something but it doesn't solve the fundamental problem. I don't know if it's a valid analogy but we've seen this in computing for a longtime. >> Yeah, this analogy is close because okay, so let's take a hyperscaler X, okay? We won't name names. You find that half my CPUs are crippling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C++ on x86. Well, the easiest thing to do is to separate the cores that run this workload. Put it on a different let's say we use Arm simply because x86 licenses are not available to people to build their own CPUs so Arm was available. So they put a bunch of Arm cores, they stick a PCI express and a network interface and you bought that code from x86 to Arm. Not difficult to do but and it does you results. And by the way if for example this hyperscaler X, shall we called them, if they're able to remove 20% of the workload from general purpose CPUs that's worth billions of dollars. So of course, you're going to do that. It requires relatively little innovation other than to port code from one place to another place. >> Pradeep, that's what I'm saying. I mean, I would think again, the hyperscalers why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together. 
That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see when the hyperscalers, Microsoft, Azure, and AWS, both announced, I think, that they now depreciate servers over five years instead of four years, and it dropped like a billion dollars to their bottom line. But why not just work directly with you guys? I mean, let's see the logical play. >> Some of them are working with us. So that's not to say that they're not working with us. All of the hyperscalers recognize that the technology that we're building is fundamental, that we have something really special, and moreover it's fully programmable. So the whole trick is you can actually build a lump of hardware that is fixed function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, the functionality needs to be programmable. And so the whole trick is how do you come up with an architecture where the functionality is programmable but it is also very high speed for this particular set of applications. So the analogy with GPUs is nearly perfect, because GPUs, and particularly Nvidia, implemented, or they invented, CUDA, which is the programming language for GPUs. And it made them easy to use, made it fully programmable without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, we've made them very easy to program. And these workloads, not workloads, computations that I talked about, which is security, virtualization, storage and then network, those four are quintessential examples of data center workloads and they're not going away. In fact, they're becoming more, and more, and more important over time. >> I'm very excited for you guys, I think, and really appreciate it Pradeep. We'll have to have you back because I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding and crypto accelerators. But I want to understand that. I know there's NVMe in here, there's a lot of hardware and software and intellectual property, but we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of this, I like this term disaggregated, massive disaggregated network. >> Hyper disaggregated. >> Hyper disaggregated, even better. And I would say this and then I've got to go. But what got us here the last decade is not the same as what's going to take us through the next decade. >> That's correct. >> Pradeep, thanks so much for coming on theCUBE. It's a great conversation. >> Thank-you for having me, it's really a pleasure to speak with you and get the message of Fungible out there. >> Yeah, I promise we'll have you back. And keep it right there everybody, we've got more great content coming your way on theCUBE on Cloud. This is Dave Vellante. Stay right there. >> Thank-you, Dave.

Published Date : Jan 4 2021

SUMMARY :

Dave Vellante talks with Pradeep Sindhu, co-founder and CEO of Fungible, about the DPU and the next era of cloud. Sindhu explains how the slowdown of Moore's Law and the shift to data-centric, scale-out workloads have changed the ratio of IO to compute, why compute centric architectures execute the network, storage, virtualization and security stacks inefficiently, and how Fungible's DPU combines heavily multi-threaded cores with specialized accelerators to address both problems. He also covers FCP, a fabric control protocol intended to lift network utilization from 20-25% toward 90-95%, the opportunity to hyper disaggregate and pool data center resources, and why SmartNICs don't solve the underlying architectural problem.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Dave | PERSON | 0.99+
90 | QUANTITY | 0.99+
Pradeep | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
20% | QUANTITY | 0.99+
15 megahertz | QUANTITY | 0.99+
30 times | QUANTITY | 0.99+
30% | QUANTITY | 0.99+
four years | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
20 | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
30 | QUANTITY | 0.99+
Juniper Networks | ORGANIZATION | 0.99+
Pradeep Sindhu | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
two problems | QUANTITY | 0.99+
Nvidia | ORGANIZATION | 0.99+
600X | QUANTITY | 0.99+
1987 | DATE | 0.99+
three | QUANTITY | 0.99+
two | QUANTITY | 0.99+
first question | QUANTITY | 0.99+
two problems | QUANTITY | 0.99+
1,000 different treads | QUANTITY | 0.99+
one | QUANTITY | 0.99+
30 times | QUANTITY | 0.99+
60 years | QUANTITY | 0.99+
next decade | DATE | 0.99+
each | QUANTITY | 0.99+
second thing | QUANTITY | 0.99+
2.3 gigahertz | QUANTITY | 0.99+
2% | QUANTITY | 0.99+
One | QUANTITY | 0.99+
First | QUANTITY | 0.99+
first | QUANTITY | 0.99+
40% | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
two functions | QUANTITY | 0.98+
25% | QUANTITY | 0.98+
today | DATE | 0.98+
third element | QUANTITY | 0.98+
Fungible | ORGANIZATION | 0.98+
95% | QUANTITY | 0.98+
40 times | QUANTITY | 0.98+
two orders | QUANTITY | 0.98+
single | QUANTITY | 0.98+
Secondly | QUANTITY | 0.98+
last decade | DATE | 0.98+
two things | QUANTITY | 0.98+
two basic problems | QUANTITY | 0.97+
10, 20% | QUANTITY | 0.97+
a second | QUANTITY | 0.97+
around 8% | QUANTITY | 0.97+
one solution | QUANTITY | 0.97+
43 years ago | DATE | 0.97+
four | QUANTITY | 0.97+
four examples | QUANTITY | 0.96+
eight | QUANTITY | 0.96+
billions of dollars | QUANTITY | 0.96+
100 gigabits per second | QUANTITY | 0.96+
one side | QUANTITY | 0.95+
35 | QUANTITY | 0.94+
three megabits per second | QUANTITY | 0.94+
GCP | ORGANIZATION | 0.93+
Azure | ORGANIZATION | 0.92+
two fundamental problems | QUANTITY | 0.91+
hundreds of billions of transistors | QUANTITY | 0.91+
two and a half years | QUANTITY | 0.91+
Problem number two | QUANTITY | 0.9+

Power Panel with Tim Crawford & Sarbjeet Johal | AWS re:Invent 2020


 

>>from around the globe. It's the Cube with digital coverage of AWS reinvent 2020 sponsored by Intel, AWS and our community partners. >>Hello and welcome back to the cubes Virtual coverage of AWS reinvent 2020. Um, John for your host with a cube virtual were not there in person, but we're gonna do it our job with the best remote we possibly can. Where? Wall to wall coverage on the AWS reinvent site as well as on demand on the Cube. Three new 3 65 platform. We got some great power panel analysts here to dig in and discuss Partner Day for a W S what it means for the customer. What it means for the enterprise, the buyer, the people trying to figure out who to buy from and possibly new partners. How can they re engineer and reinvent their company to partner better with Amazon, take advantage of the benefits, but ultimately get more sales? We got Tim Crawford, star Beat Joel and Day Volonte, Friends of the Cube. We all know him on Twitter, You guys, the posse, the Cube policy. Thanks for coming on. I'm sure it's good guys entertaining and we're >>hanging out drinking beer. Oh, my God. That'd be awesome. You guys. >>Great to have you on. I wanted to bring you on because it's unique. Cross section of perspectives. And this isn't This is from the end user perspective. And, Tim, you've been talking about the c x o s for years. You expert in this? Sorry. You're taking more from a cloud perspective. You've seen the under the hood. What's happening? Let's all put it together. If your partner Okay, first question to the group. I'm a partner. Do I win with Amazon, or do I lose with Amazon? First question. >>Yeah, I'll jump in. I'll say, you know, regardless you win, you win with Amazon. I think there's a lot of opportunity for partners with Amazon. Um, you have to pick your battles, though. You have to find the right places where you can carve out a space that isn't too congested but also isn't really kind of fettered with a number of incumbents. And so if you're looking at the enterprise space, I think that there is a ton of potential because, let's face it, >>Amazon >>doesn't have all of the services packaged in a way that the enterprise can consume. And I think that leaves a lot of fertile ground for s eyes and I SVS to jump in and be able to connect those dots so I'd say it's win, win >>start be if you're like a so cohesively onstage. Jackson's coming out talking about China, the chips and data. If you're like a vendor and I s V you're a startup or your company trying to reinvent How do you see Amazon as a partner? >>Yeah, I see Amazon as a big market for me. You know, it increased my sort of tam, if you will. Uh, the one big sort off trend is that the lines between technology providers and service providers are blurred. Actually, it's flipping. I believe it will flip at some time. We will put consume technology from service providers, and they are becoming technology providers. Actually, they're not just being pipe and power kind of cloud. They are purely software, very high sort of highly constructed machinery, if you will. Behind the scenes with software. >>That's >>what Amazon is, uh, big machine. If you are, and you can leverage that and then you can help your customers achieve their business called as a partner. I think's the women and the roll off. Actually, Assize is changing, I believe a size. Well, I thought they were getting slow, sidetracked by the service providers. 
But now they have to actually change their old the way they they used to get these, you know, shrink wrap software, and then install and configure and all that stuff. Now it's in a cloud >>on >>they have to focus a little more on services, and and some of the s eyes are building tools for multi cloud consumption and all that. So things are changing under under this whole big shift to go out. >>I mean, I think if you're in S I and you're lifting and shifting, you make a few bucks and helping people do that deal with the tech. But I think we're the rial. Money is the business transformation, and you find the technology is there, it's it's another tool in the bag. But if you can change your operating model, that's gonna drive telephone numbers to the bottom line. That's a boardroom discussion, and that's where the real dollars are for s eyes. That's like that's why guys like Accent you're leading leading into the cloud Big time >>e think I think you're absolutely right, David. I think that's that's one aspect that we have to kind of call out is you can be one of those partners that is focused on the transaction and you'll be successful doing that. But you're absolutely right. If you focus on the long game. I think that is just like I said, completely fertile ground. And there are a lot of opportunities because historically Amazon was ah was a Lego parts, uh, type of cloud provider, right? They provided you with the basic building blocks, which is great for Web scale and startups not so good for enterprise. And so now Amazon is starting to put together in package part, so it's more consumable by enterprises. But you still need that help. And as Sarpy just mentioned, you also have to consider that Amazon is not the only aspect that you're gonna be using. You're gonna be using other providers to. And so I think this again is where partners they pick a primary, and then they also bring in the others where appropriate. >>All right, I want to get into this whole riff. I have a cherry chin on day one. Hey, came on the special fireside chat with me and we talked about, um, cloud errors before cloud Amazon. And now I'll call postcode because we're seeing this kind of whole new, you know, in the cloud kind of generation. And so he said, OK, this pre cloud you had Amazon generation, whereas lift and shift. Ah, lot of hybrid And you have everything is in the cloud like a snowflake kind of thing. And he kind of call it the reptiles versus the amphibians you're on. See your inland, your hybrid, and then you're you're in the water. I mean, so So he kind of went on, Took that another level, meaning that. Okay, this is always gonna be hybrid. But there's a unique differentiation for being all in the cloud. You're seeing different patterns. Amazon certainly has an advantage. See, Dev Ops guru, that's just mining the data of their entire platform and saying Okay, Yeah, do this. There's advantages for being in the cloud that aren't available. Hybrid. So amphibian on land and sea hybrid. And then in the cloud. How do you guys see that if you're a partner. You wanna be on the new generation. What's the opportunity to capture value? He has hybrid certainly coexist. But in the new era, >>remember Scott McNealy used to talk about car makers and car dealers. And of course, Sun's gone. But he used to say, We want to be a carmaker. Car dealers. They got big houses and big boats, but we're gonna be a carmaker. Oh, I think it's some similarities here. I mean, there's a lot of money to be made as a as a car dealer. 
But you see, companies like Dell, H P E. You know, they want to be carmakers. Obviously Google Microsoft. But there are gonna be a lot of successful really big carmakers in this game. >>Yeah, I believe I believe I always call it Amazon Is the makers cloud right, So they are very developer friendly. They were very developer friendly for startups. Uh, a stem said earlier, but now they are very developer, friendly and operations friendly. Now, actually, in a way for enterprises, I believe, and that the that well, the jerry tend to sort of Are you all all in cloud are sitting just in the dry land. Right now, I think every sort off organization is in a different sort off mature, at different maturity level. But I think we're going all going towards a technology consumption as a service. Mostly, I think it will be off Prem. It can be on Prem in future because off age and all that. And on that note, I think EJ will be dominated by Tier one cloud providers like crazy people who think edge will be nominally but telcos and all that. I think they're just, uh, if >>I made Thio, if I may interject for a second for the folks watching, that might not be old enough to know who Scott McNealy is. He's the founder of Sun Microsystems, which was bought by Oracle years ago. Yeah, basically, because many computer, there's a lot of young kids out there that even though Scott McNealy's But remember, >>do your homework, Scott, you have to know who Scott Scott McNealy >>also said, because Bill Gates was dominant. Microsoft owns the tires and the gas to, and they want to own the road. So remember Microsoft was dominating at that time. So, Tim Gas data is that I mean, Amazon might have everything there. >>I was gonna go back to the to the comment. You know, McNeely came out with some really, really good analogies over his tenure. Um, it's son and you know, son had some great successes. But unfortunately, Cloud is not as simplistic as buying a car and having the dealership and the ecosystem of gas and tires. And the rest you have to think about the toll journey. And that journey is incredibly complicated, especially for the enterprise that's coming from legacy footprints, monolithic application stacks and trying to understand how to make that transition. It's almost it's almost, in a way mawr analogous to your used to riding a bike, and now you're gonna operate a semi. And so how do you start to put all of the pieces into place to be able to make that transition? And it's not trivial. You have to figure out how your culture changes, how your processes changes. There are a lot of connected parts. It's not a simple as the ecosystem of tires and gas. We have to think about how that data stream fits in with other data streams where analytics are gonna be done. What about tying back to that system of record that is going to stay on the legacy platform. Oh, and by the way, some of that has to still stay on Prem. It can't move to the cloud yet. So we have this really complicated, diverse environment that we have to manage, and it's only getting more complicated. And I think that's where the opportunity comes in for the size and s visas. Step into that. Understand that journey, understand the transitions. I don't believe that enterprises, at least in the near term, let alone short term, will be all in cloud. I think that that's more of a fantasy than reality. There is a hybrid state that that is going to be transitory for some period of time, and that's where the big opportunity is. >>I think you're right on time. 
I think just to double down on that point, just to bring that to another level is Dave. Remember back in the days when PCs where the boom many computers with most clients there was just getting started? There was a whole hype cycle on hard drives, right? Hard drives were the thing. Now, if you look out today, there's more. Observe, ability, startups and I could count, right? So to Tim's point, this monolithic breakdown and component izing decomposing, monolithic APs or environments with micro services is complex. So, to me, the thing that I see is that that I could relate to is when I was breaking in in the eighties, you had the mainframes. Is being the youngun I'm like, Okay, mainframes, old monolithic client server is a different paradigm thing. You had, uh, PCs and Internet working. I think all that change is happening so fast right now. It's not like over 10 years to Tim's points, like mainframes to iPhones. It's happening in like three years. Imagine crunching all that complexity and change down to a short window. I think Amazon has kind of brought that. I'm just riffing on that, But >>yeah, you're absolutely right, John. But I think there's another piece and we can use a very specific example to show this. But another piece that we have to look at is we're trying to simplify that environment, and so a good place to simplify that is when we look at server lis and specifically around databases, you know, historically, I had to pick the database architecture that the applications would ride on. Then I have to have the infrastructure underneath and manage that appropriately so that I have both the performance a swell, a security as well as architecture. Er and I have to scale that as needed. Today, you can get databases of service and not have to worry about the underpinnings. You just worry about the applications and how those data streams connect to other data streams. And so that's the direction that I think things were going is, and we see this across the enterprise we're looking for. Those packaged package might be a generalized term, but we're looking for um, or packaged scenario and opportunity for enterprises rather than just the most basic building blocks. We have to start putting together the preformed applications and then use those as larger chunks. And >>this is the opportunity for a size I was talking before about business transformation. If you take, take Tim's database example, you don't need somebody anymore. Toe, you know, set up your database to tune it. I mean, that's becoming autonomous. But if you think about the way data pipelines work in the way organizations are structured where everything because it goes into this monolithic data lake or and and And it's like generic content coming in generic data where the business owner has to get in line and beg a data scientist or quality engineered or thio ingest a new data source. And it's just like the old data warehouse days where I think there's tremendous opportunities for s eyes to go in a completely re architect. The data model. Sergeant, This is something you and I were talking about on Twitter. It's That's why I like what snowflakes doing. It's kind of a AWS is trying to do with lasted glue views, but there's a whole business transformation opportunity for s eyes, which I just think is huge. Number l >>e all talk. Go ahead. Sorry. Yeah, >>I think we >>all talk, but we know we all agree on one thing that the future is hybrid for at least for next. You know, 10 years, if not more. Uh, hybrid is hard. 
The data proximity is, uh, very important. That means Leighton see between different workloads, right? That's super important. And I talk about this all the time and almost in every conversation I have about about. It's just scenario, is that there three types of applications every every enterprise systems or fractured systems, systems of engagement and the systems of innovation and my theory of cloud consumption tells me that sooner or later, systems off record. We'll move into SAS SAS world. That's that's how I see it. There's no other way around, I believe, and the systems off engagement or systems off differentiation something and call it. They will leverage a lot off platforms, the service and in that context context, I have said it many times the to be a best of the breed platform. As a service, you have to be best off the breed, um, infrastructure as a service provider. And that's Amazon. And that is that's also a zero to a certain extent, and then and and Google is trying to do that, too. So the feature sort off gap between number one cloud and two and three is pretty huge. I believe I think Amazon is doing great data democratization through several less. I just love serving less for that Several things over. Unless there is >>a winning formula is no doubt about several times I totally agree. But I think one of the things that I miss it has done is they've taken server lists. They brought their putting all the I as and the chips, and they're moving all the value up to the service layer, which gives them the advantage over others. Because everyone else is trying to compete down here. They're gonna be purpose built. If you look what Apple is doing with the chips and what the Amazon is doing, they're gonna kind of have this chip to chip scenario and then the middle. Where in between is the container ization, the micro services and Lambda? So if you're a developer, you approach is it's programmable at that point that could that could be a lock spec. I think for Amazon, >>it absolutely could be John. But I think there's another aspect here that we have to touch on, especially as we think about partners and where the opportunities come in. And that is that We often talk about non cloud to cloud right, how to get from on Prem to cloud. But the piece that you also have thio bring into the conversation is Theo edge to cloud continuum and So I think if you start to look at some of the announcements this week from AWS, you start looking at some of the new instance types uh, that are very ai focused. You look at the two new form factors for outposts, which allows you to bring cloud to a smaller footprint within an on premise premises, situation, uh, different local zones. And then Thea other piece that I think is really interesting is is their announcements around PCs and eks anywhere being able to take cloud in kubernetes, you know, across the board. And so the challenge here is, as I mentioned earlier, complexity is paramount. It's concern for enterprises just moving to cloud. You start layering in the edge to cloud continuum, and it just it gets exponentially more complicated. And so Amazon is not going to be the one to help you go through that. Not because they can't, but frankly, just the scale of help that is going to be needed amongst enterprises is just not there. And so this is really where I think the opportunity lies for the s eyes and I SVS and partners. 
You >>heard how Jassy defined hybrid John in the article that you wrote when you did your one on one with him, Tim and the in the analyst call, you answered my question and then I want to bring in Antonio near his comment. But Jassy basically said, Look, we see the cloud bring We're gonna bring a W s to the edge and we see data centers. This is another edge node and San Antonio Neary after HP is pretty good quarter uh came out and said, Well, we heard the public cloud provider talking about hybrid welcome, you know? >>Yeah, they were going and then getting here jumped on that big time. But we'll be looking hybrid. Tim nailed The complexity is the is the evil is friction is a friction area. If the complexity could be mastered by the edge provider closest to the customer, that's gonna be valuable, um, for partners. And then we can do that. Amazon's gonna have to continue to remove the friction and putting that together, which is why I'm nervous about their channel partners. Because if I'm a partner, I asked myself, How do I make money with Amazon? Right? At the end of the day, it's money making right. So how can I be successful? Um, not gonna sell more in the marketplace. Will the customer consumer through there? Is it friction or is a complex So this notion of complexity and friction becomes a double edged sword Tim on both sides. So we have five minutes left. Let's talk about the bottom side Complexity, >>friction. So you're absolutely right, John. And you know, the other thing that that I would say is for the partner, you have to look beyond what Amazon is selling today. Look at where the customers are going. And you know, David, I think you and I were both in an analyst session with Andy Jassy several years ago where one of the analysts asked the question. So you know, what's your perspective on Hybrid Cloud? In his response, candidly was, while we have this particular service and really, what he was talking to is a service that helps you on board to Amazon's public cloud. There was there was not an acknowledgment of hybrid cloud at the time, But look at how things have changed just in a short few years, and I understand where Jassy is coming from, but this is just exemplifies the fact that if you're a partner, you have to look beyond what Amazon is saying and think toe how the customer is evolving, how the enterprise is evolving and get yourself ahead of them. That will position you best for both today. And as you're building for the future. >>That's a great point, Dave. Complexity on buying. I'm a customer. You can throw me a marketplace all you want, but if I'm not gonna be tied into my procurement, how I'm consuming technology. Tim's point. Amazon isn't the only game in town. I got other suppliers. >>Yeah, well, certainly for some technology suppliers, they're basically could bring their on prem estate if it's big enough into the cloud. Uh, you know what is big enough? That's the big question here. You know, our guys like your red hats big enough. Okay, we know that Nutanix pure. They're sort of the next layer down. Can they do? They have enough of a customer base that they could bring into the cloud, create that abstraction layer, and then you got the born in the cloud guy Snowflake, Colombia or two good examples. Eso They've got the technology partners and then they're the size and consultants. And again, I see that is the really big opportunity is 10 points out? 
Amazon is acknowledging that hybrid Israel in in a newly defined way, they're going out to the edge, find you wanna call data center the edge. How are they going to support those installations? How are they gonna make sure that they're running properly? That they're connected to the business process? Those air That's s I whitespace. Huge. >>Guys, we have to wrap it up right now. But I just end on, you know, we'll get everyone go A little lightning around quick soundbite on the phrase with him, which stands for what's in it from me. So if I'm a partner, I'm a customer. I look at Amazon, I think. What's in it for me? Yeah. What a za customer like what do I get out of this? >>Yeah, having done, like more than 100 data center audits, and I'm seeing what mess up messes out there and having done quite a few migrations to cloud migrations of the messy messages piece, right? And it doesn't matter if you're migrating 10% or 20 or 30 it doesn't matter that how much you're migrating? It's a messy piece, and you cannot do with our partners that work. Actually, you need that. Know how you need to infuse that that education into into your organization, how to consume cloud, how toe make sense of it, how you change your processes and how you train your people. So it touches all the products, people and processes. So on three years, you gotta have partners on your side to make it >>so Hey, I'll go quick. And, Tim, you give you the last word. Complexity is cash. Chaos is cash. Follow the complexity. You'll make cash. >>Yeah, you said it, David. I think anyway, that you can help an enterprise simplify. And if you're the enterprise, if you're the customer, look for those partners. They're gonna help you simplify the journey over time. That's where the opportunity really lies. >>Okay, guys, Expert power panel here on Cuba live program, part of AWS reinvent virtual coverage, bringing you all the analysis from the experts. Digital transformations here. What's in it for me is a partner and customer. Help me make some money, master complexity and serve my customer. Mister Cube. Thanks for watching >>que Yeah, from around the globe. It's the cute

Published Date : Dec 3 2020

SUMMARY :

John Furrier hosts a power panel with Tim Crawford, Sarbjeet Johal and Dave Vellante on AWS re:Invent 2020 Partner Day and what it means for customers, enterprises and partners. The panel argues that partners win with AWS if they pick their battles, look beyond transactional lift-and-shift work to business transformation, and help enterprises manage the complexity of a hybrid, edge-to-cloud world that AWS alone won't untangle. They discuss the blurring line between technology and service providers, the changing role of SIs and ISVs, serverless and more packaged services, and the hyperscalers' own legacy and compute centric habits, closing with the advice that complexity is the opportunity: whoever helps the enterprise simplify the journey captures the value.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Jassy | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Dell | ORGANIZATION | 0.99+
Tim Crawford | PERSON | 0.99+
John | PERSON | 0.99+
Sun Microsystems | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Tim | PERSON | 0.99+
10% | QUANTITY | 0.99+
McNeely | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Scott | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
Sarbjeet Johal | PERSON | 0.99+
HP | ORGANIZATION | 0.99+
Bill Gates | PERSON | 0.99+
Dave | PERSON | 0.99+
Day Volonte | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
H P E. | ORGANIZATION | 0.99+
Andy Jassy | PERSON | 0.99+
five minutes | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
three years | QUANTITY | 0.99+
Scott McNealy | PERSON | 0.99+
Lego | ORGANIZATION | 0.99+
first question | QUANTITY | 0.99+
both sides | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
20 | QUANTITY | 0.99+
Tim Gas | PERSON | 0.99+
Today | DATE | 0.99+
10 points | QUANTITY | 0.99+
Scott McNealy | PERSON | 0.99+
today | DATE | 0.99+
Jackson | PERSON | 0.99+
both | QUANTITY | 0.99+
over 10 years | QUANTITY | 0.99+
30 | QUANTITY | 0.99+
Cuba | LOCATION | 0.99+
Nutanix | ORGANIZATION | 0.98+
First question | QUANTITY | 0.98+

AIOps Virtual Forum 2020


 

>>From around the globe. It's the cube with digital coverage of an AI ops virtual forum brought to you by Broadcom. >>Welcome to the AI ops virtual forum. Finally, some Artan extended to be talking with rich lane now, senior analyst, serving infrastructure and operations professionals at Forrester. Rich. It's great to have you today. >>Thank you for having me. I think it's going to be a really fun conversation to have today. >>It is. We're going to be setting the stage for, with Richard, for the it operations challenges and the need for AI ops. That's kind of our objective here in the next 15 minutes. So rich talk to us about some of the problems that enterprise it operations are facing now in this year, that is 2020 that are going to be continuing into the next year. >>Yeah, I mean, I think we've been on this path for a while, but certainly the last eight months has, uh, has accelerated, uh, this problem and, and brought a lot of things to light that, that people were, you know, they were going through the day to day firefighting as their goal way of life. Uh, it's just not sustainable anymore. You a highly distributed environment or in the need for digital services. And, you know, one of them has been building for a while really is in the digital age, you know, we're providing so many, uh, uh, the, the interactions with customers online. Um, we've, we've added these layers of complexity, um, to applications, to infrastructure, you know, or we're in the, in the cloud or a hybrid or multi-cloud, or do you know you name it using cloud native technologies? We're using legacy stuff. We still have mainframe out there. >>Uh, you know, the, just the, the vast amount of things we have to keep track of now and process and look at the data and signals from, it's just, it's a really untenable for, for humans to do that in silos now, uh, in, in, you know, when you add to that, you know, when companies are so heavily invested in gone on the digital transformation path, and it's accelerated so much in the last, uh, year or so that, you know, we're getting so much of our business in revenue derived from these services that they become core to the business. They're not afterthoughts anymore. It's not just about having a website presence. It's, it's about deriving core business value from the services you're providing to your, through your customers. And a lot of cases, customers you're never going to meet or see at that. So it's even more important to be vigilant. >>And on top of the quality of that service that you're giving them. And then when you think about just the staffing issues we have, there's just not enough bodies to go around it in operations anymore. Um, you know, we're not going to be able to hire, you know, like we did 10 years ago, even. Uh, so that's where we need the systems to be able to bring those operational efficiencies to bear. When we say operational efficiencies, we don't mean, you know, uh, lessening head count because we can't do that. That'd be foolish. What we mean is getting the head count. We have back to burping on and higher level things, you know, working on, uh, technology refreshes and project work that that brings better digital services to customers and get them out of doing these sort of, uh, low, uh, complexity, high volume tasks that they're spending at least 20%, if not more on our third day, each day. So I think that the more we can bring intelligence to bear and automation to take those things out of their hands, the better off we are going forward. 
>>And I'm sure those workers want the time to deliver more strategic value to the organization in their roles. As you're saying, the demand for digital services is spiking and it's not going to go down, and as consumers, if we have another option and we're not satisfied, we go somewhere else. So it's really about not just surviving right now; it's about becoming a business that will thrive going forward and exceed expectations that just keep growing. So let's talk about AIOps as a facilitator of collaboration across business folks, IT folks, developers, and operations. How can it facilitate collaboration, which is even more important these days? >>Yeah. In bygone years, as they say, we would buy a tool to fit each situation. Someone who worked in networking had their tool, somebody who worked in infrastructure from a Linux standpoint had their tool, somebody in storage had theirs. What we found was that when a very high-impact incident occurred, everybody would get on the phone, 24 people all looking at their siloed tools and their siloed pieces of data, and we'd still have to try to link point A to B to C together using institutional knowledge. There ended up being a lot of gaps, because we couldn't understand that a certain thing happening over here was related to an event over there. Now, when we bring all that data under one umbrella, one data lake, whatever we want to call it, apply smart analytics to it, and normalize it in a way that we can contextualize it from point A to point B all the way through the application and infrastructure stack, the conversation changes. The conversation becomes: here is the problem, how are we going to fix it? And we get there immediately, versus three, four, five hours of hunting and pecking, looking at things and trying to extrapolate what we're seeing across disparate systems. That's really valuable. And it changes the conversation around measurement. Instead of server uptime and data center performance metrics, it becomes: how are we performing as a business? In real time, how is the business being impacted by a service disruption? We know how much money we're losing per minute or per hour, and what that translates into in brand damage, and people are very interested in that. What is the effect of the decisions we make on the product side? We're always changing the mobile apps and the website, but do we understand what value that brings or what negative impact it has? We can measure that now. Sales and marketing run a campaign, here's your coupon for 12% off today only; what does that drive in user engagement? We can measure that in real time, and we don't have to wait for those answers anymore. Having all that data, and understanding the cause and effect of things, enhances the feedback loops so the decisions we make as a business bring better value to our customers.
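To make the "one umbrella" idea above concrete, here is a minimal sketch of normalizing events from siloed monitoring tools into a single common schema so they can be correlated on one timeline. The tool names, field names, and event payloads are hypothetical placeholders, not any vendor's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NormalizedEvent:
    """Common schema every siloed tool's event is mapped into."""
    timestamp: datetime
    source_tool: str      # e.g. "network-monitor", "apm", "infra-monitor"
    entity: str           # host, router, service, or volume name
    severity: str         # "info" | "warning" | "critical"
    message: str

def normalize(raw: dict, source_tool: str) -> NormalizedEvent:
    """Map a tool-specific payload (hypothetical field names) to the common schema."""
    return NormalizedEvent(
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        source_tool=source_tool,
        entity=raw.get("host") or raw.get("device") or "unknown",
        severity=raw.get("sev", "info").lower(),
        message=raw.get("msg", ""),
    )

# Events arriving from three different silos, each with its own shape.
raw_feeds = [
    ({"ts": 1605000000, "device": "edge-router-7", "sev": "CRITICAL", "msg": "BGP flap"}, "network-monitor"),
    ({"ts": 1605000004, "host": "checkout-svc-3", "sev": "WARNING", "msg": "p99 latency 2.4s"}, "apm"),
    ({"ts": 1605000009, "host": "db-primary", "sev": "CRITICAL", "msg": "replication lag"}, "infra-monitor"),
]

# One merged, time-ordered stream: the starting point for cross-silo correlation.
timeline = sorted((normalize(r, t) for r, t in raw_feeds), key=lambda e: e.timestamp)
for event in timeline:
    print(f"{event.timestamp:%H:%M:%S} [{event.source_tool}] {event.entity}: {event.message}")
```

Once everything shares one schema and one clock, linking "point A to point B" becomes a query over a single stream instead of a phone call between 24 people.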
>>How does that tie into ops and dev initiatives? If I make a change to the underlying architecture, does that help move the needle forward or does it hinder things? All of these factor into the customer experience, which is what we're trying to improve at the end of the day. Whether operations people like it or not, we are all in the customer experience business now. We have to realize that and work closer than ever with our business and dev partners to make sure we're delivering the highest level of customer experience we can. >>Customer experience is absolutely critical, for a number of reasons; I always think it's inextricably linked with employee experience. But let's talk about long-term value, because organizations in every industry have pivoted multiple times this year and will probably continue to do so for the foreseeable future. Beyond immediate value, beyond just stopping the bleeding, how do they gain a competitive advantage and really become resilient? What are some of the applications of AIOps that deliver long-term value to an organization? >>Yeah, and you touched on a very important point: there is a set of short-term goals you want to achieve, but you should really be looking 12 to 18 months down the road at what it will have done for you. That helps you frame what's most important, which is different for every enterprise, and it also shows the ROI of doing this, because there is change involved and things you're going to have to do. When you look at the longer time horizon of what it brings to your business as a whole, to me at least it seems like a no-brainer. Think about the basics: faster remediation of client-impacting incidents, or even predictive detection of the incidents that will affect clients. That's very hard to do at scale, when you have hundreds of thousands of objects under management that relate to each other, but now you're letting the machines and the intelligence layer find where the problem is. It's not the red thing, it's the yellow thing; go look at that. It reduces the finger-pointing and the war rooms between teams, because everybody's looking at the same data and the same symptoms: okay, this is telling us the root cause, you should investigate this. That's huge. It's something we never thought we'd get to, systems smart enough to tell us these things, but again, this is the power of having all the data under one umbrella with the smart analytics on top of it. And think about where infrastructure and operations people are today, eight or nine months into the pandemic: a lot of them are getting really burnt out doing the same repetitive tasks over and over again.
They're just trying to keep the lights on, and we need to take those things off their plates, because it makes no sense to perform the same remediation step over and over again; we should automate those things. Get that drudgery off their hands and get them onto the important things they should be doing, the really hard-to-solve problems. That's where humans shine, and that's what your really high-level engineers should be spending their time on, in a much faster, more efficient manner. Think about an incident occurring: a level-one technician picks it up, goes and triages it, maybe runs some tests from a script, opens a ticket, enriches the ticket, and pulls some log files for the servers involved. You're an hour and a half into an incident before anyone has really even looked at it. If we could automate all of that, why wouldn't we? It makes it easier for everyone. And I really think that's where the future is: bringing intelligent automation to bear to knock down all the little things that, aggregated over the course of a quarter or a year, consume a great deal of your time. Why don't we automate that? We should. I also think we're going to be able to measure everything in terms of business KPIs rather than just IT-centric KPIs, and that's really where we need to get to in the digital age. We've waited too long to do it; our operations models are outmoded, and a lot of the KPIs we look at today are outmoded too. They don't really change when you look at the monthly reports over the course of a year, so let's do something different. Now, with all this data and the smart analytics, we can do something different. >>Absolutely. I'm glad you brought up the impact AIOps can make on minutiae and burnout. That's a huge problem so many of us are facing in any industry, and we know some amount of this is going to continue for a while longer. So let's leverage intelligent automation, because we can, to allow our people not just to be more efficient but to make a bigger impact; there's a mental component there that I think is absolutely critical. I do want to ask you: for those folks saying, all right, we've got to do this, it makes sense, we need the short-term value and the long-term value you've just walked us through, what are some of the obstacles to be on the lookout for so they can get them out of the way? >>Yeah, I think people don't think about what big changes this means for their organization. They're going to change processes, they're going to change the way teams interact, they're going to change a lot of things, but they're all for the better.
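As a concrete illustration of the level-one triage Rich describes automating above, here is a minimal sketch that reacts to an alert by running a quick health check, pulling recent log lines, and assembling an enriched ticket. The ticket fields, log path, and check command are hypothetical placeholders rather than any specific product's API.

```python
import subprocess
from datetime import datetime, timezone

def tail_log(path: str, lines: int = 50) -> str:
    """Grab the most recent log lines from the affected host (path is a placeholder)."""
    try:
        with open(path) as f:
            return "".join(f.readlines()[-lines:])
    except OSError as exc:
        return f"log unavailable: {exc}"

def run_health_check(host: str) -> str:
    """Run the same quick reachability test a level-one tech would run by hand."""
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True, text=True)
    return "reachable" if result.returncode == 0 else "unreachable"

def open_enriched_ticket(alert: dict) -> dict:
    """Assemble a ticket that already contains the evidence an engineer needs.
    A real system would POST this to a ticketing API; here we just build the payload."""
    return {
        "created": datetime.now(timezone.utc).isoformat(),
        "summary": f"{alert['host']}: {alert['message']}",
        "severity": alert.get("severity", "warning"),
        "health_check": run_health_check(alert["host"]),
        "recent_logs": tail_log(alert.get("log_path", "/var/log/app.log")),
    }

alert = {"host": "checkout-svc-3", "message": "error rate above 5%",
         "severity": "critical", "log_path": "/var/log/app.log"}
ticket = open_enriched_ticket(alert)
print(ticket["summary"], "-", ticket["health_check"])
```

The point is not the specific checks but the timing: the evidence is attached in the first seconds of an incident instead of ninety minutes in.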
The thing we're traditionally really bad at in infrastructure and operations is communication, marketing a new initiative. We don't go out and get our peers' or the product owners' agreement and say, okay, this is what it gets you, this is what changes. People just hear: I'm losing something, I'm losing control over something, you're going to get rid of the tools I love and have spent years perfecting. That's threatening, and understandably so, because people think if I start losing tools, I start losing head count, and then where is my department? But that's not what this is about. This isn't a replacement for people or for teams; it's augmentation. It's getting them back to doing the things they should be doing and less of the stuff they shouldn't. Frankly, it's about providing better services, so in the end it's counterintuitive to be against it: it's going to make IT operations look better, it shows that we are the thought leaders in delivering digital services and can constantly perfect the way we do it, and, by the way, we can help the business be better at the same time. The mistakes people really do make are not looking at their processes today and figuring out what they're going to look like tomorrow when advanced automation and intelligence come in, and not being prepared for the future state. In talking to one company, they said: we were so excited, we got rid of our fifteen-year-old monitoring system and stood up a new one the same day. The one problem was that we weren't ready for the number of incidents it generated on day one. Not because we did anything wrong or the system did the wrong thing; it did the right thing, almost too well. It uncovered, through advanced correlation, a lot of really small incidents we didn't know we had. There were things lying out there where we'd always said, huh, that's weird, that system acts strange sometimes, but we could never pin it down. We found all of those things, which is good, but it made us sit back and think, and then our leadership asked: are these guys doing their job right? And then we had to go through an evolution of explaining that we had been fifteen years behind from a visibility standpoint, while the technologies and applications we had deployed had moved ahead and modernized. It's a cautionary tale about falling too far behind from a monitoring, intelligence, and automation standpoint, and a good story to think about as people deploy these modern systems. If you market it to people so they're not threatened, think about your processes, and think about what day one looks like and then what six and twelve months after that look like, settling all of that up front sets you up for success. >>All right, Rich, take us home here. Let's summarize: how can clients build a business case for AIOps? What do you recommend? >>Yeah, I actually get that question a lot.
It's almost always the number one question in webinars like this and in the conversations that follow, so I wouldn't be surprised if it is here too. People say: we're all in, we want to do this, we know this is the way forward, but the person who writes the checks, the CIO or the VP of ops, says, I've signed a lot of checks for tools over the years, why is this different? What I guide people to do is sit back and do some hard math, because what resonates with leadership is dollars and cents, not percentages. Saying this brings us a 63% reduction in MTTR is not going to resonate, even though it's a really good number. You have to put it in terms of avoidance: if we could avoid that 63%, what does it mean for our digital services in revenue? We know that every hour a system is down typically costs an enterprise on the order of $500,000; add that up over the course of a year and ask what you're losing in revenue. Add to that brand damage and loss of customers. Forrester puts out a big customer experience index every year that measures whether you're delivering good or bad digital services; if you could raise that score, what does it return to you in revenue? Then look at the hours of lost productivity, which is what I call it, though I may need a catchier name. If a core internal system is down and you have a customer service desk of a thousand people who can't do that lookup or fix that problem for clients for an hour, how much money does that lose you? Multiply it out: the average customer service person makes X an hour, times this much time, times this many occurrences. Then you start seeing the real power of AIOps for incident avoidance, or at least for lowering the impact of those incidents. People have put this in graphs and spreadsheets, and I'm doing research now to publish something people can use to show that the project funds itself in six to twelve months; it pays for itself, and after that it's returning money to the business. Why would you not do that? When you frame the conversation that way, the light bulb turns on for the people who sign the checks. >>That's great advice for folks to be thinking about. I loved how you handled the 63% reduction: what does it impact, how does it affect revenue for the organization? If we're avoiding costs here, how do we drive up revenue? Having that laser focus on revenue is great advice for folks in any industry looking to build a business case for AIOps. You set the stage for that beautifully, Rich, and you were right, this was a fun conversation. Thank you for your time. >>Thank you. >>And thanks for watching.
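Here is a minimal sketch of the "hard math" Rich recommends, turning avoided downtime and recovered service-desk productivity into a payback estimate. All inputs (outage hours, hourly outage cost, salaries, platform cost) are hypothetical placeholders you would replace with your own figures.

```python
# Hypothetical inputs -- replace with your own numbers.
outage_hours_per_year = 40            # historical client-impacting downtime
revenue_loss_per_hour = 500_000       # ballpark cost per hour cited for a large enterprise
mttr_reduction = 0.63                 # fraction of downtime you expect to avoid

service_desk_agents = 1_000
agent_cost_per_hour = 25
internal_outage_hours = 12            # hours/year a core internal system is unusable

aiops_platform_cost_per_year = 1_200_000

# Avoided revenue loss from shorter or fewer customer-facing outages.
avoided_revenue_loss = outage_hours_per_year * revenue_loss_per_hour * mttr_reduction

# Recovered productivity when internal systems stay up for the service desk.
recovered_productivity = service_desk_agents * agent_cost_per_hour * internal_outage_hours

annual_benefit = avoided_revenue_loss + recovered_productivity
payback_months = 12 * aiops_platform_cost_per_year / annual_benefit

print(f"Avoided revenue loss:   ${avoided_revenue_loss:,.0f}/year")
print(f"Recovered productivity: ${recovered_productivity:,.0f}/year")
print(f"Payback period:         {payback_months:.1f} months")
```

Framed this way, the output is a dollar figure and a payback period rather than a percentage, which is exactly the translation the check-writers are asking for.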
>>Narrator: From around the globe, with digital coverage. >>Welcome back to the Broadcom AIOps Virtual Forum. Lisa Martin here, talking with Usman Nasir, global product management at Verizon. Usman, welcome back. >>Hi, hello. What a pleasure. >>So, 2020: the year that needs no explanation, right? The year of massive challenges. I want to get your take on the challenges organizations are facing this year, as the demand to deliver digital products and services has never been higher. >>Yeah, this is something close to all of us. It has impacted the whole world equally, and regardless of which industry you're in, you've been impacted in one form or another. The ICT industry, the information and communication technology industry, with Verizon being a really massive player in that arena, has seen an enormous acceleration of things we've talked about for a long time. We have talked about remote surgery capabilities, where a patient in Kenya is treated by an expert sitting in London or New York, and about consciousness of our carbon footprint and being environmentally responsible. This pandemic has taught us all of that and brought it to the forefront of organizational priorities. The demand is a natural consequence of everybody sitting at home, and the only thing that keeps everything going is data communication. But I wouldn't say that's the only thing at the heart of it. Just imagine trying to realize any of the targets the world's leadership is setting, to be carbon neutral by such-and-such a year as a country or a region: all of that requires remote working capabilities and remote interaction, not just between humans but machine to machine. There's a unique value chain being created: people communicating with other people or with machines, and the communication is much more than real time. We've used "real time" for voice and video; here we're talking about low-latency, microsecond decision-making that could be the difference between nicking an artery and cleanly removing the tumor. That has become a reality, and everybody is asking for it. Remote learning has been a massive requirement, where we've had to enable virtual classrooms while ensuring the right connectivity and the right privacy, which is so critical. You can't just have everybody go out on the internet and access a data source; you have to be concerned first and foremost about the integrity and security of that data. So with all of these things, no, we were not caught off guard; we were pretty forward-looking in our plans and our evolution, but it has fast-tracked the journey. What we probably believed we would do in three years, we've had to execute in two quarters. >>Massive acceleration. You articulated the challenges really well, and a lot of the realities many of our viewers are facing. Let's talk now about motivations: AIOps as a tool, as a catalyst, for helping organizations overcome those challenges. >>Yeah. Following on from what I said, you can imagine it requires microsecond decision-making, and which human being on this planet can do microsecond decision-making on complex network infrastructure that impacts end-user applications with so many knock-on effects?
In real life, take the example of a remote surgeon: even a microsecond blip in the signal or the quality of that communication could be the difference between killing somebody and saving somebody's life, and it's not predictable. We talk about autonomous vehicles, the transition to electric vehicles, smart motorways, and so on; in that environment, how is all of that going to work? You have so many different components coming in. You don't just have a network and security anymore; you have software-defined networking becoming part of it, you have mobile edge computing that's relevant to the technologies 5G enables, and we're talking augmented reality and virtual reality. All of these things require resources, and being carbon conscious, we can't just build a billion data centers on this planet. We have to make sure resources are given on demand, and the only way resource allocation can be on demand and efficient is if the decision is being made at the microsecond and the resources are distributed accordingly. If you're relying on people sipping their coffee, having tea, talking to somebody else, or being away on holiday, I don't think we're going to be able to handle the world we have already stepped into. Verizon's 5G has already started businesses on that transformational journey, where they're talking about end-user experience personalization: events where people get three-dimensional experiences that are purely customized for them. How does any of that happen without intelligence sitting in a network with all of these multiple layers? The network can't just know that this is my private IP traffic and this is public traffic, or that this application gets priority over another; it has to be intuitive to the criticality and the context of those transactions. Again, that surgeon is much more important than somebody sitting at home playing a video game. >>I'm glad you think that; that's excellent. Let's go into some specific use cases and dig deeper into the examples you gave. What do you think is the lowest-hanging fruit for organizations, across industries, to go after? >>Excellent. Right, there are different ways to look at the lowest-hanging fruit. For somebody like Verizon, a managed services provider with a very comprehensive portfolio, the fruit obviously hangs much lower than it potentially does for some of our customers who want to go on that journey; for them, trying to harness the full power of this might be a bit higher-hanging. But for somebody like us, the immediate win is to reduce the number of alarms being generated by these overlay services. You've got your basic network, then your software-defined networking on top of that, then your hybrid clouds, then your edge computing on top of that. All of that means that if there's an outage on one device on the network, and I want to make this very real for everybody, here's what happens.
>>One device being down doesn't stop all of those multiple applications and monitoring tools from raising thousands of alarms, each in its own capacity. If people are attending to those thousands of alarms, it's like having a police force where there's a burglary at one address and the alarm goes off in fifty buildings. How do you make the best use of your police force? Do you go investigate all fifty buildings, or do you investigate where the problem actually is? It's as real as that, and I think that's the first win: people can save so much of the cost that's currently wasted on resources running around trying to figure things out. I'd tie this to network and security. Even in the most mature engineering organizations we've tended to have separate screens and separate people: network experts looking for one set of things, security experts looking for another. But there are security events that can impact the performance of a network, and packet drops and the like that can be falsely attributed to the network. And when you've got multiple parties who are the stakeholders, you can imagine the blame game that goes on: pointing fingers, naming names, nobody taking responsibility for what happened. The only way forward is to bring it all together and say, okay, this is what takes priority. If an event has happened, what is its correlation to the downstream systems, devices, components, and applications? Then isolate it to the right cause, where you can most effectively resolve the problem. Thirdly, I would say on-demand virtualized resources. The heart and soul of virtualization is that you can have resources on demand, so you can automate the allocation of those resources based on customers' consumption, their peaks and troughs. You see that typically on a Wednesday the traffic is up significantly for this particular application going to this particular data center, and an automated system just provides those resources on demand. That gives you a much better commercial engagement with customers and a much better service assurance model. And one more thing on top of that, which is very critical: as I was saying, giving the network the intelligence to have context about the criticality of a transaction. You can't have that unless the multiple systems that monitor and control different aspects of your overall end-user application value chain are communicating with each other. That's the only way to achieve that goal, and it only happens with AI; it's not possible otherwise. >>So Usman, you clearly articulated some obvious low-hanging fruit and use cases organizations can go after. Let's talk now about some of the considerations. You talked about the importance of the network; the AIOps approach, I assume, needs to be modular, and the support needs to be heterogeneous. Talk to us about some of the key considerations you would recommend. >>Absolutely.
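A minimal sketch of the topology-aware noise reduction described above: when an upstream device fails, alarms from everything downstream of it are grouped under one root cause instead of being worked as fifty separate investigations. The dependency map and alarm payloads are hypothetical placeholders.

```python
from collections import defaultdict

# Hypothetical dependency map: child -> parent (what the child depends on).
topology = {
    "app-checkout": "lb-1",
    "app-search": "lb-1",
    "lb-1": "edge-router-7",
    "app-billing": "edge-router-7",
}

def root_of(node: str) -> str:
    """Walk up the dependency chain to the highest impacted ancestor."""
    while node in topology:
        node = topology[node]
    return node

alarms = [
    {"entity": "edge-router-7", "message": "interface down"},
    {"entity": "lb-1", "message": "health check failing"},
    {"entity": "app-checkout", "message": "timeouts"},
    {"entity": "app-search", "message": "timeouts"},
    {"entity": "app-billing", "message": "5xx errors"},
]

# Group every alarm under the root entity it ultimately depends on.
situations = defaultdict(list)
for alarm in alarms:
    situations[root_of(alarm["entity"])].append(alarm)

for root, grouped in situations.items():
    print(f"1 situation at root cause '{root}' covering {len(grouped)} alarms")
```

Five alarms collapse into one actionable situation, which is the difference between dispatching the police force to fifty buildings and sending it to the one where the burglary happened.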
So again, starting with the network: if the network sitting at the middle of all of this isn't working, then nothing can communicate with anything else, the cloud doesn't work, nothing matters. That's the hardest part, but it's also the foundation. When you talk about machine-to-machine communication and IoT, which is the biggest transformation going on, every company is going for IoT now to drive cost efficiencies and improve the experience, and the integrity of that data becomes the question. How do you maintain the integrity of your data beyond just securing the network components? That's where you get into the whole arena of blockchain-style technologies, digital signatures, so that an intelligent system can automatically validate and verify the integrity of the data and of the commands being executed by those end devices, those IoT machines. If anybody is not keeping that in their equation, they are missing the system that maintains the integrity of the commands and the code sitting on those machines. Second, you have your network, and you need an AIOps platform that can ingest all of that vast network information, coupled with that data integrity piece, because ultimately management needs a coherent view of the analytics; they need to know where the problems are. If there's a problem with the integrity of the commands being executed by a machine, that's a much bigger problem than not being able to communicate with the machine at all; frankly, you'd rather not talk to the machine than have it start doing the wrong things. So the platform has to be intuitive to that. Subsequently, take the use case of autonomous vehicles, which I think we're going to see in the next five years with smart motorways and so on, because it's much more efficient, a much better use of space. Within that equation you're going to have systems that are specialists in particular aspects and transactions. In an autonomous vehicle, the brakes are much more important than the wipers. So this kind of intelligence will live in multiple systems, and no one person is going to sit across all of them; these systems should be open enough that you're able to integrate them. If something's sitting in the cloud, you should be able to integrate with it, obviously with due regard for the security and integrity of the data that has to traverse from one system to the other. That's extremely important.
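To illustrate the command-integrity idea above, here is a minimal sketch that signs a machine command with a shared secret and verifies it before execution, so a tampered command is rejected. It uses a simple HMAC rather than a full blockchain or PKI scheme, and the device IDs, commands, and key handling are hypothetical simplifications.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"per-device-shared-secret"   # in practice, provisioned securely per device

def sign_command(command: dict) -> dict:
    """Attach an HMAC-SHA256 signature so the receiving device can verify integrity."""
    payload = json.dumps(command, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"command": command, "signature": signature}

def verify_and_execute(message: dict) -> str:
    """Recompute the signature; execute only if it matches (constant-time compare)."""
    payload = json.dumps(message["command"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["signature"]):
        return "REJECTED: command integrity check failed"
    return f"EXECUTING: {message['command']['action']} on {message['command']['device']}"

msg = sign_command({"device": "robotic-arm-12", "action": "rotate", "degrees": 90})
print(verify_and_execute(msg))            # passes verification

msg["command"]["degrees"] = 180           # tampered in transit
print(verify_and_execute(msg))            # rejected
```

The design point is that the monitoring and automation layer should treat a failed integrity check as a more serious event than a lost connection, for exactly the reason given above: a machine you cannot talk to is safer than a machine executing commands you did not send.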
>>I'm going to borrow that integrity theme for a second as we go into our last question, which is to take a macro look at the overall business impact that AIOps can help customers make. I'm thinking of the integrity of teams, aligning business and IT, which we probably can't talk about enough, and helping organizations effectively measure the KPIs that deliver the digital experience all of us demanding consumers expect. What's the overall impact? What would you say, in summary fashion? >>I think the overall impact is cost efficiency, tailored to each enterprise, and for the first time it is really coming to life. It starts driving cost efficiencies, consciousness, and awareness within the business, which has a domino effect. One example is problem isolation. I talked about network and security and the multi-layered architecture that enables this new world of 5G; at the heart of all of it, you have to isolate the problem to its source, not be bogged down by the fifteen different things that are going wrong, but find what is causing those fifteen things to go wrong. That speed to isolation can, on its own, mean millions of dollars to organizations. The next one is the overall impact on customer experience. With 5G, you're going to have customers expecting experiences from you; even if you're not planning to deliver them in 2021 or 2022, you will have customers asking for those experiences, or walking away if you do not provide them. A business can't just stand still year after year if it wants to remain relevant. Businesses want to adopt the latest and greatest technology, which gives them that superiority and continuity, and from that perspective they need intelligent systems that can rationalize information and make decisions, supervised, of course, by the people who were previously making some of those decisions. >>That was a great summary, because you're right: with how demanding consumers are, if we don't get what we want quickly, we churn and find somebody who can meet those expectations. Thanks for doing a great job of clarifying the impact and the value that AIOps can bring to organizations, especially now, with this even higher demand for digital products and services that is not going away; it's table stakes for any organization. Thank you so much for joining me today and giving us your thoughts. >>Pleasure. Thank you. >>We'll be right back with our next segment. >>Narrator: Digital applications and services are more critical to a positive customer and employee experience than ever before. But the underlying infrastructure that supports these apps and services has become increasingly complex, and the expanding use of multiple clouds, mobile, and microservices, along with modern and legacy infrastructure, can make it difficult to pinpoint the root cause when problems occur. It can be even more difficult to determine the business impact of those problems and resolve them efficiently. AIOps from Broadcom can help. First, by providing 360-degree visibility: whether you have hybrid cloud or are cloud native, AIOps from Broadcom provides a clear line of sight, including app-to-infrastructure and network visibility across hybrid environments. Second, the solution gives you actionable insights by correlating and aggregating data and applying AI and machine learning to identify root causes and even predict problems before users are impacted.
Third, AIOps from Broadcom provides intelligent automation that identifies potential solutions when problems occur, applies the best one, and learns from its effectiveness to improve the response if the problem occurs again. Finally, the solution enables organizations to achieve digital agility by providing feedback loops across development and operations that allow for continuous improvement and innovation. Through these four capabilities, AIOps from Broadcom can help you reduce service outages, boost operational efficiency and effectiveness, and improve customer and employee experience. To learn more about AIOps from Broadcom, go to broadcom.com/aiops. >>Narrator: From around the globe, it's theCUBE, with digital coverage of the AIOps Virtual Forum, brought to you by Broadcom. >>Welcome back to the AIOps Virtual Forum. Lisa Martin here with Srinivasan Rajagopal, head of product and strategy at Broadcom. Raj, welcome. >>Hi Lisa, I'm excited for our conversation. >>So I wanted to dive right into a term that we hear all the time: operational excellence. We hear it everywhere, in marketing and elsewhere, but why is it so important to organizations as they head into 2021? And tell us how AIOps as a platform can help. >>Yeah, well, thank you, and first off I want to welcome our viewers back; I'm very excited to share more on this topic. Here's what we believe: as we work with large organizations, we see that all of them are poised to get out of the pandemic and look for growth in their own business while helping their customers get through this tough time. Fiscal year 2021, we believe, is going to be a combination of resiliency and agility at the same time. Operational excellence is critical because the business has become more digital, and three things are going to be more sticky: remote work is going to be more sticky, cost savings and efficiency are going to be an imperative for organizations, and the continued acceleration of digital transformation of enterprises at scale is going to be a reality. When you put those three things together, for the team working behind the scenes to help the business succeed, operational excellence is going to be make or break for organizations. >>Right. With that said, if we strip it down to the key capabilities, what are the key capabilities companies need to be looking for in an AIOps solution? >>Yeah, first and foremost, AIOps means many things to many folks, so let's take a moment to define it. The way we define AIOps is as a system of intelligence, a human-augmented system, that brings together full visibility across app, infrastructure, and network elements, brings together disparate data sources, provides actionable intelligence, and uniquely offers intelligent automation. The analogy many folks draw is the self-driving car; we are in the world of Teslas, but the self-driving data center is still too far away, and autonomous systems are still far away. However, applying AI and ML techniques to help deal with the volume, velocity, and veracity of information is critical. That's how we look at AIOps, and there are some key capabilities we work on with our customers to help them on that journey. The first one is eyes and ears,
what we call full-stack observability. If you do not know what is happening in the systems that serve up your business services, it's going to be pretty hard to do anything in terms of responsiveness. The second piece is what we call actionable insights. When you have disparate data sources and tool sprawl, with data coming at you from database systems, IT systems, customer management systems, and ticketing systems, how do you find the needle in the haystack, and how do you respond rapidly to a myriad of problems? The third area is what we call intelligent automation. Identifying the problem to act on is important, but automating the action and creating a recommendation system so you can be proactive about it is even more important. And finally, all of this focuses on efficiency, but what about effectiveness? Effectiveness comes when you create a feedback loop, when what happens in production is relayed to your support systems and your developers so they can respond rapidly; we call that continuous feedback. So those are the four key capabilities you should look for in an AIOps system, and that's what we offer as well. >>Raj, those are four key capabilities that businesses need to be looking for. I'm wondering how they help to align business and IT. Like operational excellence, the alignment of business and IT is something we talk about a lot, but it's a lot more challenging, easier said than done. Explain how AIOps can help with that alignment and align IT outputs to business outcomes. >>Yeah. I'm going to say something that is simple but harder in practice: alignment is not about systems, alignment is about people. When people align, when organizations align, when cultures align, dramatic things can happen. In the context of AIOps, when SREs are aligned with DevOps engineers, information architects, and IT operators, they enable organizations to reduce the gap between intent and outcome, or output and outcome. That said, these personas need mechanisms to help them better align and better visualize what we call the single source of truth. There are four key things I want to call out. When we work with large enterprises, we find that alignment of the customer journey with the IT systems is critical. How do you understand your business imperatives and your customer journey goals, whether it's cart-to-purchase or a bill-shock scenario? Alignment of the customer journey to your IT systems is one area where you can reduce the gap. The second area is creating a scenario where your teams can find problems before your customers do: outage scenarios and so on. The third area of alignment is measuring business-impact-driven services. An organization offers many services, and some are more critical to the business than others; that changes in a dynamic environment, so how do you understand it, how do you measure it, and how do you find the gaps?
That's the third area of alignment we help with. Last but not least, there are things like NPS scores that help us understand alignment, but those are longer-term. In the context of operating digitally, you want to use customer experience and a single business outcome as the key alignment factor, and then work with your systems of engagement and systems of interaction, along with your key personas, to create that alignment. It's a people, process, and technology challenge. >>One of the things you said there is that it's imperative for the business to find a problem before a customer does, and you talked about outages; preventing them is always a goal for businesses. How can AIOps help with that? >>Outages go to the resiliency of a system, and they also go to the agility of the same system. If you're a customer and you bring up your mobile app and it takes more than a few seconds, you're probably losing that customer. Outages mean different things, and there's an interesting website, downdetector.com, that tracks the outages of publicly available services, whether it's your bank, your telecom provider, your mobile service, and so on. In fact, the key question around outages from executives is: are you ready? Are you ready to respond to the needs of your customers and your business? Are you ready to rapidly resolve an issue that is impacting customer experience and therefore satisfaction? Are you creating a digital trust system where customers can feel their information is secure when they transact with you? All of these get at the notion of resiliency and outages. Now, one of the things I often work with customers on is what we define as the radius of impact. What I mean by that is: problems occur, so how do you respond? Does it take two seconds, two minutes, 20 minutes, two hours, or 20 hours to resolve the problem? That radius of impact is important, and that's where you have to bring people, process, and technology together to solve it. The key thing is that you need a system of intelligence that can aid your teams, so everyone is looking at the same set of parameters and you can respond faster. That's the key here. >>Let's look at digital transformation at scale. Raj, how does AIOps help influence that? >>I'm going to take a slightly long-winded way to answer this question. When it comes to digital transformation at scale, the focus on business purpose and business outcome becomes extremely critical, and so does the alignment of that to your digital supply chain; those are the key factors that differentiate winners in their digital transformation game. What we have seen is that winners operate very differently. For example, Nike measures its digital business outcomes in shoes per second, Apple in iPhones per minute, Tesla in Model 3s per month. Are you getting it right?
I mean, you want a clear business outcome that is a measure of your business. Take an e-commerce marketplace my daughter and I use all the time: they measure by revenue per hour. These are key measures, and when you have a key business outcome measure like that, you can rally everything else around it, because you know what it means. For a bank it may be deposits per month: when you move money from a checking account to a savings account, or when you do a direct deposit, banks need that liquidity, and so on. The key thing is that a single business outcome has a starburst effect inside the IT organization. A single money movement from a checking account to a savings account can touch about 75 disparate systems internally. Think about it: all we're doing is moving money from checking to savings. That goes into an IT production system where there are several applications, a database, infrastructure, load balancers, the web server components, which touch your middleware component, a queuing system, which then touches your transactional system, which may be on your mainframes, what we call the mobile-to-mainframe scenario. And we're not done yet: there's a security and regulatory compliance system you have to touch, a fraud prevention system you have to touch, state and federal regulations you may have to meet, and on and on. This is the challenge IT operations teams face, and when you have millions of customers transacting, suddenly it cannot be managed by human beings alone. You need a system of intelligence that augments human intelligence and acts as your eyes and ears, in a way, to pinpoint where problems are. So digital transformation at scale really requires a very well-thought-out AIOps system: an open, extensible platform that is heterogeneous in nature, because there are many tools and products in organizations, a lot of databases and systems, and millions of customers and hundreds of partners and vendors making up that digital supply chain. AIOps sits at the center of enabling an organization to achieve digital transformation at scale. Last but not least, you need a continuous feedback loop: the ability of a production system to inform your DevOps teams, your finance teams, your customer experience teams, and your cost modeling teams about what is going on, so they can reduce the intent-to-outcome gap. All of this needs to come together in what we call BizOps. >>That was a great example; the way you talked about the starburst effect, I'd never thought about it that way. The banking example shows the magnitude of systems involved, the fact that people alone really need help with that, and why intelligent automation and AIOps can be transformative and enable that scale. Raj, it's always a pleasure to talk with you. Thanks for joining me today. And we'll be right back with our next segment.
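To ground the idea of measuring a business outcome (shoes per second, revenue per hour) rather than only IT metrics, here is a minimal sketch that rolls raw transaction events up into a per-hour revenue KPI and flags an hour that falls well below the recent norm. The event data and the 50% threshold are hypothetical placeholders.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical completed-transaction events: (hour of day, order value in dollars).
events = [(9, 120.0), (9, 75.5), (9, 210.0),
          (10, 250.0), (10, 180.0),
          (11, 300.0), (11, 155.0),
          (12, 10.0)]                       # revenue collapses at noon

# Roll raw events up into the business KPI: revenue per hour.
revenue_per_hour = defaultdict(float)
for hour, value in events:
    revenue_per_hour[hour] += value

hours = sorted(revenue_per_hour)
for hour in hours:
    print(f"{hour:02d}:00  ${revenue_per_hour[hour]:,.2f}")

# Flag an hour that drops below 50% of the average of the preceding hours:
# a business-outcome alert, not a CPU or uptime alert.
for i, hour in enumerate(hours[1:], start=1):
    baseline = mean(revenue_per_hour[h] for h in hours[:i])
    if revenue_per_hour[hour] < 0.5 * baseline:
        print(f"ALERT: revenue at {hour:02d}:00 is ${revenue_per_hour[hour]:,.2f}, "
              f"under 50% of the ${baseline:,.2f} baseline")
```

An alert like this is what triggers the starburst in reverse: the business KPI dips, and the system of intelligence has to trace that dip back through the 75 systems the transaction touches.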
>>Welcome back to the AIOps Virtual Forum. We've heard from our guests about the value of AIOps and why and how organizations are adopting AIOps platforms. Now let's see AIOps in action and get a practical view. Sudip Datta, the head of AIOps at Broadcom, is going to take you through a quick demo. >>Hello, Sudip Datta, head of AIOps and automation here. What I'm going to do today is talk through some of the key capabilities and differentiators of Broadcom's AIOps solution. In this solution, which can be delivered on cloud or on-prem, we bring together a variety of metric, alarm, log, and related operational data from multiple sources, including APM, network, and infrastructure monitoring tools, to provide a single point of observability and control. Let me start where our users mostly start. Key enterprises such as financial services institutions, telcos, and retailers do not manage infrastructure or applications without a business context. At the end of the day, they offer business services governed by SLAs, with SLOs (service level objectives) and SLIs (service level indicators). Our service analytics, which can scale to a few thousand services, lets customers create and monitor services as they prefer; they can create a hierarchy of services based on their business practice. For example, here the sub-services are created based on functional subsystems; for certain enterprises it could instead be based on location. Users can import these services from their favorite CMDB. It's important to note that not all services are born equal: if you are a modern bank, you may want to prioritize tickets coming from digital banking, for example, and the application lets you rank services by the KPI of your choice. We can source availability not merely from the state of the infrastructure, whether components are running or not, but from the SLOs that represent the state of the application. When it comes to triaging issues related to a service, it is important to have a complete view of the topology. The topology can show both east-west elements, from mobile to mainframe, and north-south elements in a network flow. This is particularly relevant for a large enterprise that could be running its systems of engagement in the cloud and its systems of record on a mainframe inside the firewall. Here you can see that the issue is related to the mainframe CICS server; you can expand to see the actual alarm, which is sourced from mainframe operational intelligence. Similarly, clicking on the network gives the hub-and-spoke view of the network devices, the Cisco switches and routers; I can click on the affected router and see all the details. Broadcom's solution stores the ontological model of the topology in a graph database, where one can view not only the current state of the topology but the past as well. Talking of underlying data sources, the solution uses best-of-breed data stores for structured and unstructured data. We have not only leveraged the power of open source but actively contributed back to the community. One of the key innovations is evident in our dashboarding framework, where we have enhanced the open source Grafana technology to support these diverse data sources. Here you can see a single dashboard representing applications to infrastructure to mainframe, again sourcing a variety of data from these sources.
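Here is a minimal sketch of the service-hierarchy idea in the demo above: sub-services report their own SLI, a parent business service rolls availability up from its children, and services are ranked so the breaching, most business-critical one is triaged first. The service names, SLI values, and ranking rule are hypothetical placeholders, not the product's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    slo_target: float                 # e.g. 0.999 means 99.9% availability
    sli: float = 1.0                  # measured availability for leaf services
    business_rank: int = 0            # 1 = most critical to the business
    children: list = field(default_factory=list)

    def availability(self) -> float:
        """A parent is only as available as its worst child; leaves report their own SLI."""
        if not self.children:
            return self.sli
        return min(child.availability() for child in self.children)

    def breaching(self) -> bool:
        return self.availability() < self.slo_target

digital_banking = Service("digital-banking", 0.999, business_rank=1, children=[
    Service("login", 0.999, sli=0.9995),
    Service("payments", 0.9995, sli=0.9971),   # degraded sub-service
])
branch_reporting = Service("branch-reporting", 0.99, sli=0.995, business_rank=3)

# Triage order: breaching services first, then by business criticality.
for svc in sorted([digital_banking, branch_reporting],
                  key=lambda s: (not s.breaching(), s.business_rank)):
    status = "BREACH" if svc.breaching() else "ok"
    print(f"{svc.name:18s} avail={svc.availability():.4f} target={svc.slo_target} [{status}]")
```

The design choice to roll availability up from SLOs rather than raw infrastructure state mirrors the point made in the demo: a host can be "green" while the business service it supports is already breaching.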
When we talk to customers, one of the biggest challenges they face today relates to alarms: because of the proliferation of tools, they are drowning in an ocean of hundreds of thousands of alarms. This drives support costs up, at tens of dollars per ticket, and also hurts IT operations efficiency, leading to an average of five to six hours of mean time to resolution. This is where we have state-of-the-art innovation that uses the power of machine learning and ontology to arrive at the root cause. We don't only cluster alarms based on text; we employ a layered technique: first we look at the topology, then at the time window, then we deduplicate based on the alarm text using NLP, and lastly we learn from continuous training of the model to deduce what we call situations. This is an example of a situation: as you can see, we provide time-based evidence of how things unfolded and arrive at a root cause. Lastly, the solution provides 360-degree, closed-loop remediation, either through a ticketing system or by direct invocation of automation actions. Instead of firing hard-coded automation runbooks for certain conditions, the tool leverages machine learning to rank automation actions based on past heuristics; that's why we call it intelligent automation. To summarize, AIOps from Broadcom helps you achieve operational excellence through full-stack observability coupled with AI and ML that applies across modern hybrid cloud environments as well as legacy ones, and it uniquely ties these insights to intelligent automation to improve customer experience. Thank you for watching. >>Narrator: From around the globe, it's theCUBE, with digital coverage of the AIOps Virtual Forum, brought to you by Broadcom. >>Welcome to our final segment today. We've discussed the value that AIOps will bring to organizations in 2021 through three different perspectives, and now we want to bring those perspectives together and see if we can get a consensus on where AIOps needs to go for folks to be successful with it in the future. So we're bringing back some folks: Rich Lane is back with us, senior analyst serving infrastructure and operations professionals at Forrester; Usman Nasir is also back, global product management at Verizon; and Srinivasan Rajagopal, head of product and strategy at Broadcom. Guys, great to have you back. Let's jump in, and Rich, we're going to start with you, but we'll give all three of you a chance to answer the questions. We've talked about why organizations should adopt AIOps, but what happens if they choose not to? What challenges would they face? Basically, what's the cost of doing nothing? >>Good question. I think in operations, for a number of years, we've kind of stood pat where we are; we're afraid to change things sometimes, or we just don't think about tooling that often. It's the last thing to change, because we're spending so much time doing project work, modernization, and fighting fires on a daily basis. The problem is going to get worse if we do nothing. We're building new architectures like containers and microservices, which means more things to mind and keep running; we're building highly distributed systems; we're moving more and more into a hybrid, multi-cloud world. It has become over-complicated, and I'll give a short anecdote that I think illuminates this. When I go to conferences and give speeches to infrastructure and operations people, I ask how many people have three or five times the number of things to monitor that they had three years ago, two years ago; then I ask how many have hired more staff in that time period, and zero hands go up.
That's the gap we have to fill, and we have to fill it through better automation and more intelligent systems. It's the only way we're going to be able to dig back out. >>What's your perspective, Usman, if organizations choose not to adopt AIOps? >>Yeah, I'll take that. I would relate it to a couple of things that probably everybody is tired of hearing lately, but everybody can relate to, and this will resonate. We have 5G, which is set to transform the world as we know it, and a lot of that transformation depends on communication: smart cities, smart communities, IoT, which is going to be pivotal to the success of businesses. And as we've seen with this pandemic and the transformation of the world, there's a much bigger cost consciousness out there; people are trying to become much more forward-looking and sustainable. At the heart of all of this is the necessity of intelligent systems that can digest far more information than before, information that was previously overlooked because so many things that play a part were never brought together in the best possible way. So there's an absolute necessity to drive cost efficiencies, and rather than laying off, left, right, and center, people who are vital to your business and hold great tribal knowledge of it, you can drive those efficiencies by automating the tasks that were previously very manual and resource intensive, and reallocate those resources toward doing much better things. Going into 2021, after what we've seen in 2020, let's be very honest, that's going to be mandatory. >>And Raj, I saw you shaking your head while Usman was sharing his thoughts; it sounds like you agree. What are your thoughts? >>Yeah. To put things in perspective, we're firmly in the digital economy. The digital economy, according to the Bureau of Economic Analysis, is about 9% of U.S. GDP. Think about that in the context of GDP: it ranks only slightly lower than manufacturing, at 11.3% of GDP, and slightly above finance and insurance, at about seven and a half percent. So the digital economy is firmly in our lives, and as Usman was saying, software eats the world, and digital operational excellence is critical for customers to drive profitability and growth in the digital economy. The key is digital at scale. When Rich talks about some of the challenges and Usman highlights 5G as an example, those are the things that come to mind. So to me, what is the cost or peril of doing nothing? It's not an option. More often than not, C-level execs are asking the head of IT and the other key influencers a single question: are you ready? Are you ready in the context of addressing spikes in networks because of the pandemic scenario? Are you ready in the context of automating away toil? Are you ready to respond rapidly to the needs of the digital business? I think AIOps is critical. >>That's a great point. Raj, let's stick with you.
So we got kind of consensus there, as you said, wrapping it up. This is basically a, not an option. This is a must to go forward for organizations to be successful. So let's talk about some quick wins, or as you talked about, you know, organizations and sea levels asking, are you ready? What are some quick wins that that organizations can achieve when they're adopting AI? >>You know, um, immediate value. I think I would start with a question. How often do your customers find problems in your digital experience before you do think about that? Right. You know, if you, if you, you know, there's an interesting web, uh, website, um, uh, you know, down detector.com, right? I think, uh, in, in Europe there is an equal amount of that as well. It ha you know, people post their digital services that are down, whether it's a bank that, uh, you know, customers are trying to move money from checking account, the savings account and the digital services are down and so on and so forth. So some and many times customers tend to find problems before it operations teams do. So a quick win is to be proactive and immediate value is visibility. If you do not know what is happening in your complex systems that make up your digital supply chain, it's going to be hard to be responsive. So I would start there >>Visibility this same question over to you from Verizon's perspective, quick wins. >>Yeah. So I think first of all, there's a need to ingest this multi-care spectrum data, which I don't think is humanly possible. You don't have people having expertise, you know, all the seven layers of the OSI model and then across network and security and at the application level. So I think you need systems which are now able to get that data. It shouldn't just be wasted reports that you're paying for on a monthly basis. It's about time that you started making the most of those in the form of identifying what are the efficiencies within your ecosystem. First of all, what are the things, you know, which could be better utilized subsequently you have the >>Opportunity to reduce the noise of a trouble tickets handling. It sounds pretty trivial, but >>An average you can imagine every trouble tickets has the cost in dollars, right? >>So, and there's so many tickets and there's art >>That get created on a network and across an end user application value, >>We're talking thousands, you know, across and end user >>Application value chain could be million in >>A year. So, and so many of those are not really, >>He, you know, a cause of concern because the problem is something. >>So I think that whole triage is an immediate cost saving and the bigger your network, the bigger >>There's a cost of things, whether you're a provider, whether you're, you know, the end customer at the end of the day, not having to deal with problems, which nobody can resolve, which are not meant to be dealt with. There's so many of those situations, right, where service has just been adopted, >>Which is just coordinate quality, et cetera, et cetera. So many reasons. So those are the, >>So there's some of the immediate cost saving them. They are really, really significant. >>Secondly, I would say Raj mentioned something about, you know, the user, >>Your application value chain, and an understanding of that, especially with this hybrid cloud environment, >>Et cetera, et cetera, right? 
The time it takes to identify a problem in an end user application value chain across the seven layers that I mentioned with the OSI reference model across network and security and the application environment. It's something that >>In its own self has massive cost to business, >>Right? That could be >>No sale transactions that could be obstructed because of this. There could be, and I'm going to use a really interesting example. >>We talk about IOT. The integrity of the IOT machine is exciting. >>Family is pivotal in this new world that we're stepping into. >>You could be running commands, >>Super efficient. He has, everything is being told to the machine really fast with sending yeah. >>Everything there. What if it's hacked? And if that's okay, >>Robotic arm starts to involve the things you don't want it to do. >>So there's so much of that. That becomes a part of this naturally. And I believe, yes, this is not just like from a cost >>standpoint, but anything going wrong with that code base, et cetera, et cetera. These are massive costs to the business in the form of the revenue. They have lost the perception in the market as a result, the fed, >>You know, all that stuff. So >>These are a couple of very immediate problems, but then you also have the whole player virtualized resources where you can automate the allocation, you know, the quantification of an orchestration of those virtualized resources, rather than a person having to, you know, see something and then say, Oh yeah, I need to increase capacity over here, because then it's going to have this particular application. You have systems doing this stuff and to, you know, Roger's point your customer should not be identifying your problems before you, because this digital is where it's all about perception. >>Absolutely. We definitely don't want the customers finding it before. So rich, let's wrap this particular question up with you from that senior analyst perspective, how can companies use make big impact quickly with AI ops? Yeah, >>Yeah, I think, you know, and it was been really summed up some really great use cases there. I think with the, uh, you know, one of the biggest struggles we've always had in operations is isn't, you know, the mean time to resolve. We're pretty good at resolving the things. We just have to find the thing we have to resolve. That's always been the problem and using these advanced analytics and machine learning algorithms now across all machine and application data, our tendency is humans is to look at the console and say, what's flashing red. That must be what we have to fix, but it could be something that's yellow, somewhere else, six services away. And we have made things so complicated. And I think this is what it was when I was saying that we can't get there anymore on our own. We need help to get there in all of this stuff that the outline. >>So, so well builds up to a higher level thing of what is the customer experience about what is the customer journey? And we've struggled for years in the digital world and measuring that a day-to-day thing. We know an online retail. If you're having a bad experience at one retailer, you just want your thing. You're going to go to another retailer, brand loyalty. Isn't one of like it, wasn't a brick and mortal world where you had a department store near you. So you were loyal to that because it was in your neighborhood, um, online that doesn't exist anymore. 
So we need to be able to understand the customer from that first moment, they touch a digital service all the way from their, their journey through that digital service, the lowest layer, whether it be a database or the network, what have you, and then back to them again, and we're not understanding, is that a good experience? >>We gave them. How does that compare to last week's experience? What should we be doing to improve that next week? Uh, and I think companies are starting and then the pandemic certainly, you know, push this timeline. If you listened to the, the, the CEO of Microsoft, he's like, you know, 10 years of digital transformation written down. And the first several months of this, um, in banks and in financial institutions, I talked to insurance companies, aren't slowing down. They're trying to speed up. In fact, what they've discovered is that they're, you know, obviously when we were on lockdown or what have you, they use of digital servers is spiked very high. What they've learned is they're never going to go back down. They're never going to return to pretend endemic levels. So now they're stuck with this new reality. Well, how do we service those customers and how do we make sure we keep them loyal to our brand? >>Uh, so, you know, they're looking for modernization opportunities. A lot of that that's things have been exposed. And I think Raj touched upon this very early in the conversation is visibility gaps. Now that we're on the outside, looking in at the data center, we know we architect things in a very way. Uh, we better ways of making these correlations across the Sparrow technologies to understand where the problems lies. We can give better services to our customers. And I think that's really what we're going to see a lot of the innovation and the people really clamoring for these new ways of doing things that starting, you know, now, I mean, I've seen it in customers, but I think really the push through the end of this year to next year when, you know, economy and things like that straightened out a little bit more, I think it really, people are gonna take a hard look of where they are and is, you know, AI ops the way forward for them. And I think they'll find it. The answer is yes, for sure. >>So we've, we've come to a consensus that, of what the parallels are of organizations, basically the cost of doing nothing. You guys have given some great advice on where some of those quick wins are. Let's talk about something Raj touched on earlier is organizations, are they really ready for truly automated AI? Raj, I want to start with you readiness factor. What are your thoughts? >>Uh, you know, uh, I think so, you know, we place our, her lives on automated systems all the time, right? In our, in our day-to-day lives, in the, in the digital world. I think, uh, you know, our, uh, at least the customers that I talk to our customers are, uh, are, uh, you know, uh, have a sophisticated systems. Like for example, advanced automation is a reality. If you look at social media, AI and ML and automation are used to automate away, uh, misinformation, right? If you look at financial institutions, AI and ML are used to automate away a fraud, right? So I want to ask our customers why can't we automate await oil in it, operation systems, right? And that's where our customers are. Then the, you know, uh, I'm a glass half full, uh, cleanup person, right? 
Uh, this pandemic has been harder on many of our customers, but I think what we have learned from our customers is they've Rose to the occasion. >>They've used digital as a key needs, right? At scale. That's what we see with, you know, when, when Huseman and his team talk about, uh, you know, network operational intelligence, right. That's what it means to us. So I think they are ready, the intersection of customer experience it and OT, operational technology is ripe for automation. Uh, and, uh, you know, I, I wanna, I wanna sort of give a shout out to three key personas in this mix. It's about people, right? One is the SRE persona, you know, site, reliability engineer. The other is the information security persona. And the third one is the it operator automation engineer persona. These folks in organizations are building a system of intelligence that can respond rapidly to the needs of their digital business. We at Broadcom, we are in the business of helping them construct a system of intelligence that will create a human augmented solution for them. Right. So when I see, when I interact with large enterprise customers, I think they, they, you know, they, they want to achieve what I would call advanced automation and AI ML solutions. And that's squarely, very I ops is, you know, is going as it, you know, when I talk to rich and what, everything that rich says, you know, that's where it's going and that's what we want to help our customers to. So, which about your perspective of organizations being ready for truly automated AI? >>I think, you know, the conversation has shifted a lot in the last, in, in pre pandemic. Uh, I'd say at the end of last year, we're, you know, two years ago, people I'd go to conferences and people come up and ask me like, this is all smoke and mirrors, right? These systems can't do this because it is such a leap forward for them, for where they are today. Right. We we've sort of, you know, in software and other systems, we iterate and we move forward slowly. So it's not a big shock. And this is for a lot of organizations that big, big leap forward where they're, they're running their operations teams today. Um, but now they've come around and say, you know what? We want to do this. We want all the automations. We want my staff not doing the low complexity, repetitive tasks over and over again. >>Um, you know, and we have a lot of those kinds of legacy systems. We're not going to rebuild. Um, but they need certain care and feeding. So why are we having operations? People do those tasks? Why aren't we automating those out? I think the other piece is, and I'll, I'll, I'll send this out to any of the operations teams that are thinking about going down this path is that you have to understand that the operations models that we're operating under in, in INO and have been for the last 25 years are super outdated and they're fundamentally broken for the digital age. We have to start thinking about different ways of doing things and how do we do that? Well, it's, it's people, organization, people are going to work together differently in an AI ops world, um, for the better. Um, but you know, there's going to be the, the age of the 40 person bridge call thing. >>Troubleshooting is going away. It's going to be three, four, five focused engineers that need to be there for that particular incident. Um, a lot of process mailer process we have in our level, one level, two engineering. 
What have you running of tickets, gathering of artifacts, uh, during an incident is going to be automated. That's a good thing. We should be doing those, those things by hand anymore. So I'd say that the, to people's like start thinking about what this means to your organization. Start thinking about the great things we can do by automating things away from people, having to do them over and over again. And what that means for them, getting them matched to what they want to be doing is high level engineering tasks. They want to be doing monitorization, working with new tools and technologies. Um, these are all good things that help the organization perform better as a whole great advice and great kind of some of the thoughts that you shared rich for what the audience needs to be on the lookout. For one, I want to go over to you, give me your thoughts on what the audience that should be on the lookout for, or put on your agendas in the next 12 months. >>So there's like a couple of ways to answer that question. One thing would be in the form of, you know, what are some of the things they have to be concerned about in terms of implementing this solution or harnessing its power. The other one could be, you know, what are the perhaps advantages they should look to see? So if I was to talk about the first one, let's say that, what are some of the things I have to watch out for like possible pitfalls that everybody has data, right? So yeah, there's one strategy we say, okay, you've got the data, let's see what we can do with them. But then there's the exact opposite side, which has to be considered when you're doing that analysis. What are the use cases that you're looking to drive? Right. But then use cases you have to understand, are you taking a reactive use case approach? >>Are you taking active use cases, right? Or, yeah, that's a very, very important concentration. Then you have to be very cognizant of where does this data that you have, where does it reside? What are the systems and where does it need to go to in order for this AI function to happen and subsequently if there needs to be any backward communication with all of that data in a process manner. So I think these are some of the very critical points because you can have an AI solution, which is sitting in a customer data center. It could be in a managed services provider data center, like, right, right. It could be in a cloud data center, like an AWS or something, or you could have hybrid views, et cetera, all of that stuff. So you have to be very mindful of where you're going to get the data from is going to go to what are the use cases you're trying to get out to do a bit of backward forward. >>Okay, we've got this data thing and I think it's a journey. Nobody can come in and say, Hey, you've built this fantastic thing. It's like Terminator two. I think it's a journey where we built starting with the network. My personal focus always comes down to the network and with 5g so much, so much more right with 5g, you're talking low latency communication. That's like the true power of 5g, right? It's low latency, it's ultra high bandwidth, but what's the point of that low latency. If then subsequently the actions that need to be taken to prevent any problems in application, IOT applications, remote surgeries, uh, self driving vehicles, et cetera, et cetera. What if that's where people are sitting and sipping their coffees and trying to take action that needs to be in low latency as well. Right? 
So these are, I think some of the fundamental things that you have to know your data, your use cases, that location, where it needs to be exchanged, what are the parameters around that for extending that data? >>And I think from that point at one word, it's all about realizing, you know, sense of business outcomes. Unless AI comes in as a digital labor that shows you, I have, I have reduced your this amount of time and that's a result of big problems or identified problems for anything. Or I have saved you this much resource in a month, in a year or whatever timeline that people want to see it. So I think those are some of the initial starting points, and then it all starts coming together. But the key is it's not one system that can do everything. You have to have a way where, you know, you can share data once you've caught all of that data into one system. Maybe you can send it to another system at make more, take more advantage, right? That system might be an AI and IOT system, which is just looking at all of your street and make it sure that Hey parents. So it's still off just to be more carbon neutral and all that great stuff, et cetera, et cetera, >>Stuff for the audience to can cigarette rush, take us time from here. What are some of the takeaways that you think the audience really needs to be laser focused on as we move forward into the next year? You know, one thing that, uh, I think a key takeaway is, um, uh, you know, as we embark on 2021, closing the gap between intent and outcome and outputs and outcome will become critical, is critical. Uh, you know, especially for, uh, you know, uh, digital transformation at scale for organizations context in the, you know, for customer experience becomes even more critical as who Swan Huseman was talking, uh, you know, being network network aware network availability is, is a necessary condition, but not sufficient condition anymore. Right? The what, what, what customers have to go towards is going from network availability to network agility with high security, uh, what we call app aware networks, right? How do you differentiate between a trade, a million dollar trade that's happening between, uh, you know, London and New York, uh, uh, versus a YouTube video training that an employee is going through? Worse is a YouTube video that millions of customers are, are >>Watching, right? Three different context, three different customer scenarios, right? That is going to be critical. And last but not least feedback loop, uh, you know, responsiveness is all about feedback loop. You cannot predict everything, but you can respond to things faster. I think these are sort of the three, three things that, uh, that, uh, you know, customers aren't going to have to have to really think about. And that's also where I believe AI ops, by the way, AI ops and I I'm. Yeah. You know, one of the points that was smart and shout out to what he was saying was heterogeneity is key, right? There is no homogeneous tool in the world that can solve problems. So you want an open extensible system of intelligence that, that can harness data from disparate data sources provide that visualization, the actionable insight and the human augmented recommendation systems that are so needed for, uh, you know, it operators to be successful. I think that's where it's going. >>Amazing. You guys just provided so much content context recommendations for the audience. I think we accomplished our goal on this. 
I'll call it power panel of not only getting to a consensus of what, where AI ops needs to go in the future, but great recommendations for what businesses in any industry need to be on the lookout for rich Huisman Raj, thank you for joining me today. We want to thank you for watching. This was such a rich session. You probably want to watch it again. Thanks for your time. Thanks so much for attending and participating in the AI OBS virtual forum. We really appreciate your time and we hope you really clearly understand the value that AI ops platforms can deliver to many types of organizations. I'm Lisa Martin, and I want to thank our speakers today for joining. We have rich lane from Forrester who's fund here from Verizon and Raj from Broadcom. Thanks everyone. Stay safe..
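To make the "situations" idea from the demo at the top of this segment concrete — collapsing a flood of related alarms into one investigable incident by looking at text similarity, topology and time windows — here is a minimal sketch. It is purely illustrative: the topology map and alarm fields are assumptions, and a crude text-similarity function stands in for real NLP; it is not Broadcom's implementation.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Dict, List, Set

@dataclass
class Alarm:
    device: str       # network element that raised the alarm
    message: str      # raw alarm text
    timestamp: float  # epoch seconds

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Crude text-similarity stand-in for NLP-based deduplication."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def group_into_situations(alarms: List[Alarm],
                          topology: Dict[str, Set[str]],
                          window_s: float = 120.0) -> List[List[Alarm]]:
    """Group alarms that fall in the same time window and are either
    topologically related or near-duplicates in text into one 'situation',
    so operators investigate one incident instead of thousands of alarms."""
    situations: List[List[Alarm]] = []
    for alarm in sorted(alarms, key=lambda a: a.timestamp):
        for situation in situations:
            anchor = situation[0]
            in_window = alarm.timestamp - anchor.timestamp <= window_s
            related = (alarm.device == anchor.device
                       or alarm.device in topology.get(anchor.device, set()))
            if in_window and (related or similar(alarm.message, anchor.message)):
                situation.append(alarm)
                break
        else:
            situations.append([alarm])
    return situations

# Hypothetical example: one router outage triggers alarms on devices behind it.
topology = {"router-1": {"switch-a", "switch-b"}}
alarms = [Alarm("router-1", "interface down", 0.0),
          Alarm("switch-a", "uplink unreachable", 5.0),
          Alarm("switch-b", "uplink unreachable", 7.0)]
print(len(group_into_situations(alarms, topology)))  # 1 situation, not 3 alarms
```

The same demo describes ranking automation actions by how well they have worked on similar situations in the past, rather than firing a fixed runbook. Below is an equally minimal sketch of that ranking heuristic; the situation types, action names and the Laplace-smoothed success rate are hypothetical, not any specific product's logic.

```python
from collections import defaultdict
from typing import List

class ActionRanker:
    """Rank remediation actions for a situation type by how often each
    action resolved similar situations in the past."""

    def __init__(self):
        # (situation_type, action) -> [successes, attempts]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, situation_type: str, action: str, resolved: bool) -> None:
        stats = self.history[(situation_type, action)]
        stats[1] += 1
        stats[0] += int(resolved)

    def rank(self, situation_type: str, candidates: List[str]) -> List[str]:
        """Order candidate actions by estimated success rate; actions never
        tried before get a neutral smoothed prior of 0.5."""
        def score(action: str) -> float:
            successes, attempts = self.history[(situation_type, action)]
            return (successes + 1) / (attempts + 2)
        return sorted(candidates, key=score, reverse=True)

# Hypothetical usage: past incidents teach the ranker which action tends to work.
ranker = ActionRanker()
ranker.record("kafka_server_down", "restart_service", resolved=True)
ranker.record("kafka_server_down", "failover_to_standby", resolved=False)
print(ranker.rank("kafka_server_down",
                  ["failover_to_standby", "restart_service", "open_ticket"]))
```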

Published Date : Dec 2 2020



Usman Nasir, Verizon | AIOps Virtual Forum 2020


 

>>from around the globe. It's the Cube with digital coverage of AI ops Virtual Forum Brought to you by Broadcom Welcome back to the Broadcom AI Ops Virtual Forum Lisa Martin here talking with Usman Naseer Global Product Management at Verizon we spend Welcome back. >>Uh huh. Hello, Good >>to see you. So 2020 The year of that needs no explanation. With the year of massive challenges, I wanted to get your take on the challenges that organizations are facing this year as the demand to deliver digital products and services has never been higher. >>Yeah, I e I think this is something is so close to all the part part right? It's something that's impacted the whole world equally. And I think regardless off which industry you win, you have been impacted by this in one form or the other and the i c t industry, the information and communication technology industry. You know, Verizon being really massive player in that whole arena, it has just been sort of struck with this massive confirmation that we have talked about for a long time. We have talked about these remote surgery capabilities whereby you got patients in Kenya were being treated by experts sitting in London or New York and also this whole consciousness about, you know, our carbon footprint and being environmentally conscious. This pandemic has taught a school of that and brought this to the forefront off organizational priority, right? The demand. I think that Zaveri natural consequence of everybody sitting at home. And the only thing that can keep things still going is the data communication, Right? But I would just say that that is what kind of at the heart of all of this. Just imagine if we are to realize any of these targets that the world is world leadership is setting for themselves. Hey, we have >>to be carbon >>neutral by Xia as a country as a geography, etcetera etcetera. You know, all of these things require you to have this remote working capability this remote interaction, not just between human but machine to machine interaction. And this is a unique value chain which is now getting created that you've got people we're communicating with other people or were communicating with other machines. But the communication is much more. I won't even use the term really time because we've used real time for voice and video, etcetera. We're talking low latency microsecond to see and making that can either cut somebody's, you know, um, our trees or that could actually go and remove the tumor, that kind of stuff. So that has become a reality. Everybody's asking for it. Remote learning, being an extremely massive requirement where, you know, we've had to enable these thes virtual classrooms ensuring the type of connectivity, ensuring the type of type of privacy which is just so, so critical. You can't just have everybody you know, Go on the internet and access the data source. You have to be. I'm sorry about the integrity and security of >>that. They've >>had the foremost. So I think all of these things, Yes. We have not been caught off guard. We were should be pretty forward looking in our, you know, plans in our evolution. But yes, it does this fast track a journey that we would probably the least we would have taken in three years. It has brought that down to two quarters where we had to execute them. >>Right? Massive acceleration. All right, so you articulated the challenges really well and a lot of the realities that many of our viewers air facing. 
Let's talk now about motivations ai ops as a tool as a catalyst for helping organizations overcome those challenges. >>So, yeah, now all that I said you can imagine, you know, it requires microsecond the sea and making which human being on this planet can do microsecond the sea and making on complex network infrastructure, which is impacting, and user applications which have multitudes off effect. You know, in real life, I used the example of a remote surgeon. Just imagine, if you know, even because you just lose your signal on the quality of that communication for that microsecond, it could be the difference between killing somebody in saving somebody's life. Is that particular? We talk about autonomous vehicles way talk about the transition to electric vehicles, smart motorways, etcetera, etcetera in federal environment. How is all of that going to work? You have so many different components coming in. You don't just have a natural can security anymore. You have software defined networking that's coming becoming a part of this. You have mobile edge computing that is rented for the technologies. Five g enables we're talking augmented reality. We're talking virtual reality all of these things require that resource is. And while we carbon conscious, we don't just wanna build a billionaire, a terrorist on the planet, right? We we have to make sure that resource is air given on demand and the best way of re sources can be given on demand and could be most efficient. Is that we're making is being made at million microsecond. And those resource is our accordingly being distribute. Right? If you're 10 flying on, people sipping their coffee is having teeth talking to somebody else. You know, just being away on holiday. I don't think we're gonna be able to handle that world that we have already stepped into. Risen's five g has already started businesses on the transformational journey where they're talking about end user experience, personalization. You're gonna have, you know, events where people are going to go. And it's going to be three dimensional experiences that are purely customized for you. How How does that all happen without this intelligence having their and a network with all of these multiple layers assaults spectrum, it doesn't just need to be intuitive. Hey, this is my private I p traffic. This is public traffic. You know it has to now be into or this is an application that to privatize over another has to be intuitive to the criticality in the context, off those transactions again that surgeons surgery is much more important than husband sitting and playing a video game. >>Yeah, I'm glad that you think that that's excellent. Let's go into some specific use cases. What are in some of the examples that you gave? Let's kind of dig deeper into some of that. What you think are the lowest hanging fruit for organizations, kind of pan industry to go after here. >>Excellent, right? And I think this just like different ways to look at the lowest timing food. Like for somebody like Verizon, who is the managed services provider, you know, very comprehensive medicines. But we obviously have food timing much lower than potentially for some of our customers who want to go on that journey, right? So for them to just >>go and try and >>harness the power of help, the food's might be a bit higher hanging. But for somebody like God, the immediate ones would be to reduce the number off alarms that are being generated by these overlays services. You've got your basic network. 
Then you've got your software defined networking. On top of that, you have your hybrid clouds. You have your edge computing coming on top of that, you know? So ALOF this means if there is an outrage on one device on the network, gonna make this very real for everybody, right? It's right out. I'm not divisive. Network does not stop all of those multiple applications for monitoring tools from raising havoc and raising thousands off alarms and everyone capacity. If people are attending to those thousands off alarms, it's like you having a police force. And there's a burglary in one bank and the alarm goes off in $50. How you gonna make the best use of your police force? You're gonna go investigate 50 banks? You wanna investigate one where the problem is. So it's as realize that and I think that's the first wind where people can save so much cost, which is currently being wasted. And resource is running around primary figure stuff up immediately. Anti this with network and security network and security is something which has eluded even the most. You know, amazing off brings in or engineering. Well, we took it. We have network expert, separate people. Security experts separate people to look for different things. But there are security events that can impact the performance of the network and then use your application, cetera, etcetera, which could be falsely attributed to the network. And then if you've got multiple parties, which are then which have to clear stakeholders, you can imagine the blame game that goes on pointing fingers, taking names, not taking responsibility. That is how all this happened. This is the only way to bring it all together to say Okay, this is what takes priority. If there's an event that has happened, what is its correlation to the other downstream systems, devices, components and user applications. And it subsequently, you know, like isolating into the right cause where you can most effectively resolve that problem. Certainly, I would say on demand virtualized resource virtualized resource is the heart and soul of the spirit of status that you can have them on them up so you can automate the allocation of these. Resource is based on, you know, customers consumption, their peaks, their crimes. All of that comes in. You see Hey, typically on a Wednesday, their traffic goes up significantly from this particular application. You know, going to this particular data center, you could have this automated this AI ops, which is just providing those resource, is, you know, on demand and tell us to have a much better commercial engagement with customers and just a much better service assurance model. And then one more thing on top of that, which is very critical, is that, as I was saying, giving that intelligence to the network to start having context of the criticality of a transaction that doesn't exist to it. You can't have that because for that you need to have this, you know, multi layer data. You need to have multiple system which are monitoring and controlling different aspects of your overall and user application value chain to be communicating with each other. And, you know, that's that's the only way to sort of achieve that goal. And that only happens with AI off. It's not possible with them. You can paradise Comdex. >>So Guzman, you clearly articulated some obvious low hanging for use cases that organizations can go after. Let's talk now about some of the considerations you talked about the importance of the network in AI ops. 
The approach, I assume, needs to be modular support needs to be heterogeneous. Talk to us about some of those key considerations that you would recommend >>absolutely. So again, basically starting with the network. Because if there is, if the network sitting at the middle of all of this is not working, then things from communicate with each other, right? And the cloud doesn't work. Nothing. None of this person has hit the hardest all of this. But then subsequently, when you talk about machine to machine communication or i o T. Which is the biggest transformation to spend, every company is going priority now to drive those class efficiencies enhancements. We've got some experience. The integrity off the tab becomes paramount, right? The security integrity of that. How do you maintain integrity off your detail beyond just the secured network components that Trevor right? That's where you get into the whole arena Blockchain technology where you have these digital signatures or barcodes that machine then and then an intelligent system is automatically able to validate and verify the integrity of the data and the commands that are being executed by those and you determine. But I think the terminal. So I o. T machines, right, that is paramount. And if anybody is not keeping that into their equation, that in its own self, is any eye off system that is therefore maintaining the integrity off your commands and your quote that sits on those those machines Right. Second, you have your network. You need to have any off platform, which is able to rationalize all the fat network information, etcetera. And couple that with that. The integrity peace. Because for the management, ultimately, they need to have a co haven't view off the analytics, etcetera, etcetera. They need to. They need to know where the problems are again, right? So let's see if there's a problem with the integrity off the commands that are being executed by a machine. That's a much bigger problems than not being able to communicate with that machine. And the first thing because you'd rather not talk to the machine or haven't do anything if it's going to start doing the wrong thing, So I think that's where it's just very intuitive. It's natural. You have to have subsequently if you have some kind of say and let me use that use case Off Autonomous comes again. I think we're going to see in the next five years it's much water rates, etcetera. It will set for autonomous because it's much more efficient. It's much more space, etcetera, etcetera. So whether that equation you're gonna have systems which will be specialist in looking at aspects and Trump's actions related to those systems, for example, an autonomous moving vehicle's brakes are much more important than the Vipers, Right? So this kind of intelligence, there will be multiple systems who have to sit and nobody has to. One person has to go and on these systems, I think these systems should be open source enough that you are able to integrate them, right? If something sitting in the cloud you were able to integrate for that with obviously the regard off the security and integrity off their data, that has two covers from one system to the extremely. >>So I'm gonna borrow that integrity theme for a second as we go into our last question. And that is this kind of take a macro. Look at the overall business impact that AI ops can help customers make. I'm thinking of, you know, the integrity of teams aligning business and I t. Which we probably can't talk about enough. 
We're helping organizations really effectively measure KP eyes that deliver that digital experience that all of us demanding consumers expect. What's the overall impact? What would you say in separation? >>So I think the overall impact is a lot. Of course, that customers and businesses give me term got prior to the term enterprises defense was inevitable. There's something that for the first time will come to light. And it's something that is going to, you know, start driving cost efficiencies and consciousness and awareness within their own business, which is obviously going to have, you know, abdominal kind of an effect. So what example being that, you know, you have a problem? Isolation? I talked about network security, this multilayered architectural which enables this new world of five g um, at the heart of all of it. It is to identify the problem to the source, right? Not be bogged down by 15 different things that are going wrong. What is causing those 15 things to go wrong, right that speed to isolation and its own self can make millions and millions off dollars to organizations every organization. Next one is obviously overall impacted customer experience. The five g waas. You can have your customers expecting experiences from you, even if you're not expecting to deliver them in 2021 2022. You'll have customers asking for those experiences or walking away if you do not provide those experiences. So for it's almost like a business can do nothing. Every year they don't have to reinvest if they just want to die on the wine. Businesses want to remain relevant. Businesses want to adopt the latest and greatest in technology, which enables them to, you know, have that superiority and continue it. So from that perspective that continue ity, we're ready that there are intelligence system sitting, rationalizing information and making this in supervised by people, of course, who were previously making some of those here. >>That was a great summary because you're right, you know, with how demanding consumers are. We don't get what we want. Quickly we turn right, we go somewhere else, and we could find somebody that can meet those expectations. So it was spent Thanks for doing a great job of clarifying the impact and the value that AI ops can bring to organizations. That sounds really now is we're in this even higher demand for digital products and services, which is not going away. It's probably going to only increase. It's table stakes for any organization. Thank you so much for joining me today and giving us your thoughts. >>Pleasure. Thank you. >>We'll be right back with our next segment.

Published Date : Nov 23 2020



Usman Nasir V1


 

>> Narrator: From theCUBE Studios in Palo Alto, in Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> Welcome back to the Broadcom AIOps Virtual Forum. Lisa Martin here talking with Usman Nasir, Global Product Management at Verizon. Usman, welcome back. >> Hi Lisa, hello, what a pleasure to be back. >> Good to see you. So 2020, the year of that needs no explanation, right? The year of massive challenges, I wanting to get your take on the challenges that organizations are facing this year as the demand to deliver digital products and services has never been higher. >> Yeah, so Lisa I think this is something that's so close to all our hearts, right? It's something that's impacted the whole world equally. And I think regardless of which industry you're in, you have been impacted by this in one form or the other. And the ICT industry, the information and communication technology industry. You know Verizon being really massive player in that whole arena. It has just been sort of struck with this massive concentration that we have talked about for a long time, we have talked about these remote surgery capabilities whereby you've got patients in Kenya were being treated by experts sitting in London or New York, and also this whole consciousness about our carbon footprint and being environmentally conscious, this pandemic has taught us all of that and brought this to the forefront of organizational priorities, right? The demand, I think that's a very natural consequence of everybody sitting at home. And the only thing that can keep things still going is this data communication, right? But I wouldn't just say that that is what's kind of at the heart of all of this. Just imagine, if we are to realize any of these targets that the world is, well leadership is setting for themselves, hey, we have to be carbon neutral by X year as a country, as a geography, et cetera, et cetera. You know, all of these things require you to have this remote working capability. This remote interaction, not just between humans, but machine to machine interactions. And this is a unique value chain, which is now getting created, that you've got people who are communicating with other people or communicating with other machines, but the communication is much more, I wouldn't even use the term real-time because we've used real-time for voice and video, et cetera. We're talking low latency, microsecond decision-making that can either cut somebody's you know, arteries or that could actually go and remove the tumor, that kind of stuff. So that has become a reality, everybody's asking for it. Remote learning, being an extremely massive requirement where, you know, we've had to enable these virtual classrooms. Ensuring the type of connectivity, ensuring the type of privacy, which is just so critical. You can't just have everybody in a go on the internet and access a data source. You have to be concerned about the integrity and security of that data as the foremost. So I think all of these things, yes, we have not been caught off guard, we we're pretty forward-looking in our plans and our evolution, but yes, it has this fast track a journey that we would probably believe we would have taken in three years. It has brought that down to two quarters where we talked to execution. >> Right, massive acceleration. All right, so you articulated the challenges really well, and a lot of the realities that many of our viewers are facing. 
Let's talk now about motivations AIOPs, as a tool, as a catalyst, for helping organizations overcome those challenges. >> So yeah, now, all that I said, you can imagine, it requires microsecond decision-making. Which human being on this planet can do microsecond decision-making on complex network infrastructure which is impacting end user applications which have multitudes of effect? You know, in real life, I use the example of a remote surgeon. Just imagine that even because of you just lose your signal on the quality of that communication, for that microsecond it could be the difference between killing somebody and saving somebody's life. It is that critical. We talk about autonomous vehicles. We talk about this transition to electric vehicles, smart motorways, et cetera, et cetera, in federal environment, how is all of that going to work? You have so many different components coming in, you don't just have a network and security anymore. You have software defined networking that's becoming a part of it. You have mobile edge computing that is rented for the technologies 5G enables. We're talking augmented reality. We're talking virtual reality. All of these things require that resources and why being carbon conscious, we don't just want to build a billion data centers on this planet, right? We have to make sure that resources are given on demand. And the best way of resources can be given on demand and could be most efficient, is that the decision making is being made at million microsecond and those resources are accordingly being distributed, right? If you're bent relying on people, sipping their coffees, having teas, talking to somebody else, you know just being away on a holiday, I don't think we're going to be able to handle that one that we have already stepped into. Verizon's 5G has already started businesses on that transformational journey, where they're talking about end user experience personalization. You're going to have events where people are going to go and it's going to be three-dimensional experiences that are purely customized for you, how does that all happen without this intelligence sitting there? And a network with all of these multiple layers of spectrum, it doesn't just need to be intuitive, hey, this is my private IP traffic, this is public traffic, you know, it has to now be in to or this is an application that I have to prioritize over another, task to be intuitive to the criticality and the context of those transactions. Again, that's surgeons, surgery it's much more important than husband sitting and playing a video game. >> I'm glad that you think that, that's excellent. Let's go into some specific use cases. What are, some of the examples that you gave, what's kind of dig deeper into some of the what you think are the lowest hanging fruit for organizations kind of pan industry to go after here? >> Excellent, right? And I think this like different ways to look at the lowest hanging fruit. Like for somebody like Verizon, who is the managed services provider, you know very comprehensive medicines, but we obviously have food timing much lower than potentially for some of our customers who want to go on that journey, right? So for them to just go and try and harness the power of their health, foods might be a bit higher hanging. But for somebody like us, the immediate ones would be to reduce the number of alarms that are being generated by these overlay services. 
You've got your basic network, then you've got your whole software defined networking on top of that, you have your hybrid clouds, you have your edge computing coming on top of that. So all of that means, if there's an outage on one device on the network, and I want to make this very real for everybody, right, that one device on the network does not stop all of those multiple applications or monitoring tools from raising havoc and raising thousands of alarms, each in their own capacity. If people are attending to those thousands of alarms, it's like having a police force where there's a burglary in one bank and the alarm goes off in 50 banks, (laughing) how are you going to make the best use of your police force? Are you going to go investigate 50 banks, or do you want to investigate the one where the problem is? So it's as real as that. I think those are the first wins, where people can save so much cost that is currently being wasted on resources running around trying to figure stuff out. Immediately after that, I would tie this with network and security. Network and security is something which has eluded even the most, you know, amazing of brains in our engineering. We typically have network experts, separate people, and security experts, separate people, looking for different things, but there are security events that can impact the performance of a network and the end user application experience, et cetera, which could be falsely attributed to the network. And then if you've got multiple parties, which then have to answer to stakeholders, you can imagine the blame game that goes on, pointing fingers, taking names, not taking responsibility. This is the only way to bring it all together, to say, okay, this is what takes priority; if there's an event that has happened, what is its correlation to the other downstream systems, devices, components and applications? And then subsequently, you know, isolating it to the right cause, where you can most effectively resolve that problem. Thirdly, I would say on-demand virtualized resources. The heart and soul, the spirit of virtualized resources is that you can have them on demand. So you can automate the allocation of these resources based on customers' consumption, their peaks, their trends; all of that comes in and you see, hey, typically on a Wednesday the traffic was not significant for this particular application going to this particular data center, and you could have this automated AIOps which is just providing those resources on demand. And so it helps to have a much better commercial engagement with customers and just a much better service assurance model. And then one more thing on top of that, which is very critical, is, as I was saying, giving that intelligence to the networks to start having context of the criticality of a transaction. You can't have that in silos. For that you need to have this multi-layer data; you need to have the multiple systems which are monitoring and controlling different aspects of your overall end user application value chain communicating with each other, and that's the only way to sort of achieve that goal. That only happens with AIOps; without it, you can't prioritize transactions.
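To make the alarm-reduction idea above concrete, here is a minimal Python sketch: several overlay tools alarm on the same outage, so the alarm flood is collapsed by checking a simple dependency map and surfacing only the most upstream failure for investigation. The device names and the DEPENDS_ON map are invented for illustration; this is not Verizon's tooling or topology.

    # Invented dependency map; real topology would be discovered, not hard-coded.
    DEPENDS_ON = {
        "crm-app": "edge-router-1",
        "billing-app": "edge-router-1",
        "voice-gateway": "edge-router-1",
        "edge-router-1": None,          # nothing further upstream
    }

    def root_causes(alarms):
        """alarms: iterable of dicts like {"source": "crm-app", "severity": "critical"}."""
        alarming = {alarm["source"] for alarm in alarms}
        # A device is only a root-cause candidate if its upstream dependency is
        # not itself alarming; otherwise it is treated as a symptom.
        return {d for d in alarming if DEPENDS_ON.get(d) not in alarming}

    alarms = [{"source": s, "severity": "critical"}
              for s in ("crm-app", "billing-app", "voice-gateway", "edge-router-1")]
    print(f"{len(alarms)} alarms raised; investigate: {sorted(root_causes(alarms))}")
    # -> 4 alarms raised; investigate: ['edge-router-1']

Four alarms come in, one device gets dispatched to; that is the "one bank, not fifty" point in code form.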
>> So Usman, you clearly articulated some obvious low-hanging fruit use cases that organizations can go after. Let's talk now about some of the considerations. You've talked about the importance of the network, and the AIOps approach, I assume, needs to be modular and support needs to be heterogeneous. Talk to us about some of those key considerations that you would recommend. >> Absolutely. So again, it basically starts with the network, because if the network sitting at the middle of all of this is not working, then things can't communicate with each other, right? The cloud doesn't work, nothing works; the network is at the heart of all of this. But then subsequently, when you talk about machine-to-machine communication or IoT, which is just the biggest transformation, every company is going for IoT now to drive cost efficiencies and enhance customer experience, the integrity of data is paramount, right? The security and integrity of that. How do you maintain the integrity of your data beyond just the secure network components that it is traversing, right? That's where you're getting into the whole arena of blockchain technology, where you have to use digital signatures or barcodes, so that an intelligent system is automatically able to validate and verify the integrity of the data and the commands that are being executed by those end-user terminals, or any terminal, by those IoT machines, right? That is paramount, and if anybody is not keeping that in their equation, that in itself is an entire system that has to be there for maintaining the integrity of your commands and your code that sits on those machines, right? Second, you have your network, and you need to have an AIOps platform which is able to rationalize all of that network information, et cetera, and couple that with that data integrity piece, because for the management, ultimately, they need to have a coherent view of the analytics, et cetera, et cetera. They need to know where the problems are, right? So let's say there's a problem with the integrity of the commands that are being executed by the machine; that's a much bigger problem than not being able to communicate with that machine in the first place, because you'd rather not talk to the machine or have it do anything if it's going to start doing wrong things. So I think that's where it is; it's very intuitive, it's natural, you have to have it. Subsequently, let me use that use case of autonomous vehicles again. I think in the next five years we're going to see these smart motorways, et cetera, all set for autonomous vehicles. It's much more efficient, it makes much better use of space, et cetera, et cetera. So within that equation, you're going to have systems which will be specialists in looking at aspects and transactions related to those systems. For example, an autonomous vehicle's brakes are much more important than its wipers, right? So with this kind of intelligence, there will be multiple systems that have a say, and no one person has to go and own all of these systems. I think these systems should be open source, so that you are able to integrate them, right? If something's sitting in the cloud, you should be able to integrate with it, obviously with regard to the security and integrity of your data that has to traverse from one system to the other; that's extremely important. >> So I'm going to borrow that integrity theme for a second as we go into our last question, and that is to take a macro look at the overall business impact that AIOps can help customers make.
I'm thinking of, you know, the integrity of teams aligning business and IT, which we probably can't talk about enough, and helping organizations really effectively measure KPIs that deliver that digital experience that all of us demanding consumers expect. What's the overall impact? What would you say in summarization? >> So I think the first impact is on a lot of costs that customers and businesses, to use the term enterprises, had come to see as inevitable. This is something that for the first time will come to light, and it's something that is going to start driving cost efficiencies and consciousness and awareness within their own business, which is obviously going to have a domino kind of effect. So, one example being that you have problem isolation. I talked about network security, this multi-layered architecture which enables this new world of 5G. At the heart of all of it is to isolate problems to the source, right? Not be bogged down by 15 different things that are going wrong; what is causing those 15 things to go wrong, right? That speed to isolation in its own sense can mean millions and millions of dollars to organizations. The next one is obviously the overall impact on customer experience. In a 5G world, you're going to have your customers expecting experiences from you, even if you're not expecting to deliver them in 2021, 2022. You will have customers asking for those experiences, or walking away if you do not provide those experiences. So it's almost like a business can do nothing year after year, they don't have to reinvest, if they just want to die on the vine; but businesses want to remain relevant. Businesses want to adopt the latest and greatest in technology, which enables them to have that continuity. So from that perspective, that continuity will mean that there are intelligent systems rationalizing information and making the decisions, supervised by people, of course, who were previously making some of those decisions. >> That was a great summary, because you're right, you know, with how demanding consumers are, if we don't get what we want quickly, we churn, right? We go somewhere else, and we can find somebody that can meet those expectations. So Usman, thanks for doing a great job of clarifying the impact and the value that AIOps can bring to organizations, especially now as we're in this even higher demand for digital products and services, which is not going away; it's probably only going to increase. It's table stakes for any organization. Thank you so much for joining me today and giving us your thoughts. >> Pleasure, thank you. >> We'll be right back with our next segment. (upbeat music)
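One idea from the conversation above is worth making concrete: signing machine-to-machine commands so a receiving device can verify their integrity before acting on them. The minimal Python sketch below uses only the standard library; the shared key, device name and payload layout are invented for illustration and do not represent Verizon's implementation.

    import hmac, hashlib, json

    SHARED_SECRET = b"provisioned-out-of-band"   # in practice, a per-device key

    def sign(command: dict) -> str:
        payload = json.dumps(command, sort_keys=True).encode()
        return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

    def verify(command: dict, signature: str) -> bool:
        return hmac.compare_digest(sign(command), signature)

    cmd = {"device": "valve-17", "action": "close", "seq": 4021}
    sig = sign(cmd)

    print(verify(cmd, sig))                        # True: intact, execute it
    print(verify(dict(cmd, action="open"), sig))   # False: tampered, reject it

An HMAC over the serialized command is the simplest form of this verify-before-execute principle; the digital signatures and distributed ledgers mentioned in the interview extend the same idea across parties that do not share a secret.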

Published Date : Nov 20 2020


Reliance Jio: OpenStack for Mobile Telecom Services


 

>>Hi, everyone. My name is my uncle. My uncle Poor I worked with Geo reminds you in India. We call ourselves Geo Platforms. Now on. We've been recently in the news. You've raised a lot off funding from one of the largest, most of the largest tech companies in the world. And I'm here to talk about Geos Cloud Journey, Onda Mantis Partnership. I've titled it the story often, Underdog becoming the largest telecom company in India within four years, which is really special. And we're, of course, held by the cloud. So quick disclaimer. Right. The content shared here is only for informational purposes. Um, it's only for this event. And if you want to share it outside, especially on social media platforms, we need permission from Geo Platforms limited. Okay, quick intro about myself. I am a VP of engineering a geo. I lead the Cloud Services and Platforms team with NGO Andi. I mean the geo since the beginning, since it started, and I've seen our cloud footprint grow from a handful of their models to now eight large application data centers across three regions in India. And we'll talk about how we went here. All right, Let's give you an introduction on Geo, right? Giorgio is on how we became the largest telecom campaign, India within four years from 0 to 400 million subscribers. And I think there are There are a lot of events that defined Geo and that will give you an understanding off. How do you things and what you did to overcome massive problems in India. So the slide that I want to talkto is this one and, uh, I The headline I've given is, It's the Geo is the fastest growing tech company in the world, which is not a new understatement. It's eggs, actually, quite literally true, because very few companies in the world have grown from zero to 400 million subscribers within four years paying subscribers. And I consider Geo Geos growth in three phases, which I have shown on top. The first phase we'll talk about is how geo grew in the smartphone market in India, right? And what we did to, um to really disrupt the telecom space in India in that market. Then we'll talk about the feature phone phase in India and how Geo grew there in the future for market in India. and then we'll talk about what we're doing now, which we call the Geo Platforms phase. Right. So Geo is a default four g lt. Network. Right. So there's no to geo three g networks that Joe has, Um it's a state of the art four g lt voiceover lt Network and because it was designed fresh right without any two D and three G um, legacy technologies, there were also a lot of challenges Lawn geo when we were starting up. One of the main challenges waas that all the smart phones being sold in India NGOs launching right in 2000 and 16. They did not have the voice or lt chip set embedded in the smartphone because the chips it's far costlier to embed in smartphones and India is a very price and central market. So none of the manufacturers were embedding the four g will teach upset in the smartphones. But geos are on Lee a volte in network, right for the all the network. So we faced a massive problem where we said, Look there no smartphones that can support geo. So how will we grow Geo? So in order to solve that problem, we launched our own brand of smartphones called the Life um, smartphones. And those phones were really high value devices. So there were $50 and for $50 you get you You At that time, you got a four g B storage space. A nice big display for inch display. Dual cameras, Andi. Most importantly, they had volte chip sets embedded in them. 
Right? And that got us our initial customers the initial for the launch customers when we launched. But more importantly, what that enabled other oh, EMS. What that forced the audience to do is that they also had to launch similar smartphones competing smartphones with voltage upset embedded in the same price range. Right. So within a few months, 3 to 4 months, um, all the other way EMS, all the other smartphone manufacturers, the Samsung's the Micromax is Micromax in India, they all had volte smartphones out in the market, right? And I think that was one key step We took off, launching our own brand of smartphone life that helped us to overcome this problem that no smartphone had. We'll teach upsets in India and then in order. So when when we were launching there were about 13 telecom companies in India. It was a very crowded space on demand. In order to gain a foothold in that market, we really made a few decisions. Ah, phew. Key product announcement that really disrupted this entire industry. Right? So, um, Geo is a default for GLT network itself. All I p network Internet protocol in everything. All data. It's an all data network and everything from voice to data to Internet traffic. Everything goes over this. I'll goes over Internet protocol, and the cost to carry voice on our smartphone network is very low, right? The bandwidth voice consumes is very low in the entire Lt band. Right? So what we did Waas In order to gain a foothold in the market, we made voice completely free, right? He said you will not pay anything for boys and across India, we will not charge any roaming charges across India. Right? So we made voice free completely and we offer the lowest data rates in the world. We could do that because we had the largest capacity or to carry data in India off all the other telecom operators. And these data rates were unheard off in the world, right? So when we launched, we offered a $2 per month or $3 per month plan with unlimited data, you could consume 10 gigabytes of data all day if you wanted to, and some of our subscriber day. Right? So that's the first phase off the overgrowth and smartphones and that really disorders. We hit 100 million subscribers in 170 days, which was very, very fast. And then after the smartphone faith, we found that India still has 500 million feature phones. And in order to grow in that market, we launched our own phone, the geo phone, and we made it free. Right? So if you take if you took a geo subscription and you carried you stayed with us for three years, we would make this phone tree for your refund. The initial deposit that you paid for this phone and this phone had also had quite a few innovations tailored for the Indian market. It had all of our digital services for free, which I will talk about soon. And for example, you could plug in. You could use a cable right on RCR HDMI cable plug into the geo phone and you could watch TV on your big screen TV from the geophones. You didn't need a separate cable subscription toe watch TV, right? So that really helped us grow. And Geo Phone is now the largest selling feature phone in India on it. 100 million feature phones in India now. So now now we're in what I call the geo platforms phase. We're growing of a geo fiber fiber to the home fiber toe the office, um, space. And we've also launched our new commerce initiatives over e commerce initiatives and were steadily building platforms that other companies can leverage other companies can use in the Jeon o'clock. Right? 
So this is how a small startup not a small start, but a start of nonetheless least 400 million subscribers within four years the fastest growing tech company in the world. Next, Geo also helped a systemic change in India, and this is massive. A lot of startups are building on this India stack, as people call it, and I consider this India stack has made up off three things, and the acronym I use is jam. Trinity, right. So, um, in India, systemic change happened recently because the Indian government made bank accounts free for all one billion Indians. There were no service charges to store money in bank accounts. This is called the Jonathan. The J. GenDyn Bank accounts. The J out off the jam, then India is one of the few countries in the world toe have a digital biometric identity, which can be used to verify anyone online, which is huge. So you can simply go online and say, I am my ankle poor on duh. I verify that this is indeed me who's doing this transaction. This is the A in the jam and the last M stands for Mobil's, which which were held by Geo Mobile Internet in a plus. It is also it is. It also stands for something called the U. P I. The United Unified Payments Interface. This was launched by the Indian government, where you can carry digital transactions for free. You can transfer money from one person to the to another, essentially for free for no fee, right so I can transfer one group, even Indian rupee to my friend without paying any charges. That is huge, right? So you have a country now, which, with a with a billion people who are bank accounts, money in the bank, who you can verify online, right and who can pay online without any problems through their mobile connections held by G right. So suddenly our market, our Internet market, exploded from a few million users to now 506 106 100 million mobile Internet users. So that that I think, was a massive such a systemic change that happened in India. There are some really large hail, um, numbers for this India stack, right? In one month. There were 1.6 billion nuclear transactions in the last month, which is phenomenal. So next What is the impact of geo in India before you started, we were 155th in the world in terms off mobile in terms of broadband data consumption. Right. But after geo, India went from one 55th to the first in the world in terms of broadband data, largely consumed on mobile devices were a mobile first country, right? We have a habit off skipping technology generation, so we skip fixed line broadband and basically consuming Internet on our mobile phones. On average, Geo subscribers consumed 12 gigabytes of data per month, which is one of the highest rates in the world. So Geo has a huge role to play in making India the number one country in terms off broad banded consumption and geo responsible for quite a few industry first in the telecom space and in fact, in the India space, I would say so before Geo. To get a SIM card, you had to fill a form off the physical paper form. It used to go toe Ah, local distributor. And that local distributor is to check the farm that you feel incorrectly for your SIM card and then that used to go to the head office and everything took about 48 hours or so, um, to get your SIM card. And sometimes there were problems there also with a hard biometric authentication. We enable something, uh, India enable something called E K Y C Elektronik. Know your customer? 
We took a fingerprint scan at our point of Sale Reliance Digital stores, and within 15 minutes we could verify within a few minutes. Within a few seconds we could verify that person is indeed my hunk, right, buying the same car, Elektronik Lee on we activated the SIM card in 15 minutes. That was a massive deal for our growth. Initially right toe onboard 100 million customers. Within our and 70 days. We couldn't have done it without be K. I see that was a massive deal for us and that is huge for any company starting a business or start up in India. We also made voice free, no roaming charges and the lowest data rates in the world. Plus, we gave a full suite of cloud services for free toe all geo customers. For example, we give goTV essentially for free. We give GOTV it'll law for free, which people, when we have a launching, told us that no one would see no one would use because the Indians like watching TV in the living rooms, um, with the family on a big screen television. But when we actually launched, they found that GOTV is one off our most used app. It's like 70,000,080 million monthly active users, and now we've basically been changing culture in India where culture is on demand. You can watch TV on the goal and you can pause it and you can resume whenever you have some free time. So really changed culture in India, India on we help people liver, digital life online. Right, So that was massive. So >>I'm now I'd like to talk about our cloud >>journey on board Animal Minorities Partnership. We've been partners that since 2014 since the beginning. So Geo has been using open stack since 2014 when we started with 14 note luster. I'll be one production environment One right? And that was I call it the first wave off our cloud where we're just understanding open stack, understanding the capabilities, understanding what it could do. Now we're in our second wave. Where were about 4000 bare metal servers in our open stack cloud multiple regions, Um, on that around 100,000 CPU cores, right. So it's a which is one of the bigger clouds in the world, I would say on almost all teams, with Ngor leveraging the cloud and soon I think we're going to hit about 10,000 Bama tools in our cloud, which is massive and just to give you a scale off our network, our in French, our data center footprint. Our network introduction is about 30 network data centers that carry just network traffic across there are there across India and we're about eight application data centers across three regions. Data Center is like a five story building filled with servers. So we're talking really significant scale in India. And we had to do this because when we were launching, there are the government regulation and try it. They've gotten regulatory authority of India, mandates that any telecom company they have to store customer data inside India and none of the other cloud providers were big enough to host our clothes. Right. So we we made all this intellectual for ourselves, and we're still growing next. I love to show you how we grown with together with Moran says we started in 2014 with the fuel deployment pipelines, right? And then we went on to the NK deployment. Pipelines are cloud started growing. We started understanding the clouds and we picked up M C p, which has really been a game changer for us in automation, right on DNA. Now we are in the latest release, ofem CPM CPI $2019 to on open stack queens, which on we've just upgraded all of our clouds or the last few months. Couple of months, 2 to 3 months. 
So we've done about nine production clouds and there are about 50 internal, um, teams consuming cloud. We call as our tenants, right. We have open stack clouds and we have communities clusters running on top of open stack. There are several production grade will close that run on this cloud. The Geo phone, for example, runs on our cloud private cloud Geo Cloud, which is a backup service like Google Drive and collaboration service. It runs out of a cloud. Geo adds G o g S t, which is a tax filing system for small and medium enterprises, our retail post service. There are all these production services running on our private clouds. We're also empaneled with the government off India to provide cloud services to the government to any State Department that needs cloud services. So we were empaneled by Maiti right in their ego initiative. And our clouds are also Easter. 20,000 certified 20,000 Colin one certified for software processes on 27,001 and said 27,017 slash 18 certified for security processes. Our clouds are also P our data centers Alsop a 942 be certified. So significant effort and investment have gone toe These data centers next. So this is where I think we've really valued the partnership with Morantes. Morantes has has trained us on using the concepts of get offs and in fries cold, right, an automated deployments and the tool change that come with the M C P Morantes product. Right? So, um, one of the key things that has happened from a couple of years ago to today is that the deployment time to deploy a new 100 north production cloud has decreased for us from about 55 days to do it in 2015 to now, we're down to about five days to deploy a cloud after the bear metals a racked and stacked. And the network is also the physical network is also configured, right? So after that, our automated pipelines can deploy 100 0 clock in five days flight, which is a massive deal for someone for a company that there's adding bear metals to their infrastructure so fast, right? It helps us utilize our investment, our assets really well. By the time it takes to deploy a cloud control plane for us is about 19 hours. It takes us two hours to deploy a compu track and it takes us three hours to deploy a storage rack. Right? And we really leverage the re class model off M C. P. We've configured re class model to suit almost every type of cloud that we have, right, and we've kept it fairly generous. It can be, um, Taylor to deploy any type of cloud, any type of story, nor any type of compute north. Andi. It just helps us automate our deployments by putting every configuration everything that we have in to get into using infra introduction at school, right plus M. C. P also comes with pipelines that help us run automated tests, automated validation pipelines on our cloud. We also have tempest pipelines running every few hours every three hours. If I recall correctly which run integration test on our clouds to make sure the clouds are running properly right, that that is also automated. The re class model and the pipelines helpers automate day to operations and changes as well. There are very few seventh now, compared toa a few years ago. It very rare. It's actually the exception and that may be because off mainly some user letter as opposed to a cloud problem. We also have contributed auto healing, Prometheus and Manager, and we integrate parameters and manager with our even driven automation framework. 
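The auto-healing flow described here, monitoring alerts feeding an event-driven automation framework, can be sketched briefly. The Python below stands in for that framework with a plain webhook receiver that maps alert names to remediation functions; the alert names, port and remediation actions are invented for illustration, and in the deployment described here that role is played by the event-driven automation tooling rather than a script like this.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def restart_compute_agent(labels):
        print(f"would restart the compute agent on {labels.get('host')}")

    def rebalance_storage(labels):
        print(f"would trigger a storage rebalance for {labels.get('cluster')}")

    # Map incoming alert names to remediation actions (illustrative only).
    REMEDIATIONS = {
        "ComputeAgentDown": restart_compute_agent,
        "StorageNearFull": rebalance_storage,
    }

    class AlertHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            for alert in payload.get("alerts", []):
                action = REMEDIATIONS.get(alert.get("labels", {}).get("alertname"))
                if action:
                    action(alert.get("labels", {}))
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 9095), AlertHandler).serve_forever()

Point the alerting system's webhook at an endpoint like this and each matching alert triggers its mapped action instead of waiting for a human to notice a dashboard.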
Currently, we're using Stack Storm, but you could use anyone or any event driven automation framework out there so that it indicates really well. So it helps us step away from constantly monitoring our cloud control control planes and clothes. So this has been very fruitful for us and it has actually apps killed our engineers also to use these best in class practices like get off like in France cord. So just to give you a flavor on what stacks our internal teams are running on these clouds, Um, we have a multi data center open stack cloud, and on >>top of that, >>teams use automation tools like terra form to create the environments. They also create their own Cuba these clusters and you'll see you'll see in the next slide also that we have our own community that the service platform that we built on top of open stack to give developers development teams NGO um, easy to create an easy to destroy Cuban. It is environment and sometimes leverage the Murano application catalog to deploy using heats templates to deploy their own stacks. Geo is largely a micro services driven, Um um company. So all of our applications are micro services, multiple micro services talking to each other, and the leverage develops. Two sets, like danceable Prometheus, Stack stone from for Otto Healing and driven, not commission. Big Data's tax are already there Kafka, Patches, Park Cassandra and other other tools as well. We're also now using service meshes. Almost everything now uses service mesh, sometimes use link. Erred sometimes are experimenting. This is Theo. So So this is where we are and we have multiple clients with NGO, so our products and services are available on Android IOS, our own Geo phone, Windows Macs, Web, Mobile Web based off them. So any client you can use our services and there's no lock in. It's always often with geo, so our sources have to be really good to compete in the open Internet. And last but not least, I think I love toe talk to you about our container journey. So a couple of years ago, almost every team started experimenting with containers and communities and they were demand for as a platform team. They were demanding community that the service from us a manage service. Right? So we built for us, it was much more comfortable, much more easier toe build on top of open stack with cloud FBI s as opposed to doing this on bare metal. So we built a fully managed community that a service which was, ah, self service portal, where you could click a button and get a community cluster deployed in your own tenant on Do the >>things that we did are quite interesting. We also handle some geo specific use cases. So we have because it was a >>manage service. We deployed the city notes in our own management tenant, right? We didn't give access to the customer to the city. Notes. We deployed the master control plane notes in the tenant's tenant and our customers tenant, but we didn't give them access to the Masters. We didn't give them the ssh key the workers that the our customers had full access to. And because people in Genova learning and experimenting, we gave them full admin rights to communities customers as well. So that way that really helped on board communities with NGO. And now we have, like 15 different teams running multiple communities clusters on top, off our open stack clouds. We even handle the fact that there are non profiting. I people separate non profiting I peoples and separate production 49 p pools NGO. 
So you could create these clusters in whatever environment that non prod environment with more open access or a prod environment with more limited access. So we had to handle these geo specific cases as well in this communities as a service. So on the whole, I think open stack because of the isolation it provides. I think it made a lot of sense for us to do communities our service on top off open stack. We even did it on bare metal, but that not many people use the Cuban, indeed a service environmental, because it is just so much easier to work with. Cloud FBI STO provision much of machines and covering these clusters. That's it from me. I think I've said a mouthful, and now I love for you toe. I'd love to have your questions. If you want to reach out to me. My email is mine dot capulet r l dot com. I'm also you can also message me on Twitter at my uncouple. So thank you. And it was a pleasure talking to you, Andre. Let let me hear your questions.
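One detail of the Kubernetes-as-a-service design described above lends itself to a short sketch: the etcd nodes stay in the provider's management tenant, the control-plane and worker nodes go into the customer's tenant, and only the workers come with customer SSH access. The Python below is a minimal illustration of that placement rule under invented tenant names and node counts; it is not the actual provisioning code.

    from dataclasses import dataclass

    MANAGEMENT_TENANT = "k8s-mgmt"   # provider-owned project (hypothetical name)

    @dataclass
    class NodeSpec:
        role: str            # "etcd", "master" or "worker"
        tenant: str          # OpenStack project the VM is created in
        customer_ssh: bool   # whether the customer receives the SSH key

    def plan_cluster(customer_tenant: str, workers: int, masters: int = 3, etcd: int = 3):
        plan = [NodeSpec("etcd", MANAGEMENT_TENANT, False) for _ in range(etcd)]
        plan += [NodeSpec("master", customer_tenant, False) for _ in range(masters)]
        plan += [NodeSpec("worker", customer_tenant, True) for _ in range(workers)]
        return plan

    for node in plan_cluster("team-payments-prod", workers=5):
        print(node)

The same request shape can then carry the prod versus non-prod distinction mentioned above, with the portal simply choosing which floating IP pool and access policy to apply.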

Published Date : Sep 14 2020


Greg Smith, Madhukar Kumar & Thomas Cornely, Nutanix | Global .NEXT Digital Experience 2020


 

>> From around the globe it's theCUBE with coverage of the GLOBAL.NEXT DIGITAL EXPERIENCE brought to you by Nutanix. >> Hi and welcome back, we're wrapping up our coverage of the Nutanix .Next Global Digital Experience, I'm Stu Miniman and I'm happy to welcome to the program, help us as I said wrap things up. We're going to be talking about running better, running faster and running anywhere. A theme that we've heard in the keynotes and throughout the two day event of the show. We have three VPs to help go through all the pieces coming up on the screen with first of all we have Greg Smith who's the vice president of product technical marketing right next to him is Madhukar Kumar, who is the vice president of product and solutions marketing and on the far end, the senior vice president Thomas Cornely, he is the senior vice president, as I said for product portfolio management. Gentlemen, thank you so much for joining us. >> Good to be here Stu. >> Alright, so done next to show we really enjoy, of course this the global event so not just the US and the European and Asia but what really gets to see across the globe and a lot going on. I've had the pleasure of watching Nutanix since the early days, been to most of the events and the portfolio is quite a bit bigger than just the original HCI solution. Thomas since you've got to portfolio management is under your purview, before we get into summarizing all of the new pieces and the expansion of the cloud and software and everything just give us if you could that overview of the portfolio as it's coming into the show. >> Yeah absolutely Stu. I mean as you said we've been doing this now for 10 plus years and we've grown the portfolio we developed products over the years and so what we rolled out at this conference is a new way and to talk about what we do at Nutanix and what we deliver in terms of set of offerings and we talk about the 4 D's. We start with our digital hyper converged infrastructure cartridges, dual core HCI stack that you can run on any server and that stack these two boards are data center services which combines our storage solutions, our business computing and data recovery solution and security solutions on DevOps services, which is our database automation services, our application delivery automation services and now our new common and that's one of the service offerings and then our desktop services catridges which is our core VDI offering and offering our discipline and service offerings. So put all these together this is what we talk about in the 4 D's, which is on Nutanix cloud platform that you can run on premises and now on any job. >> Well thank you Thomas for laying the ground work for us, Greg we're going to come to you first that run better theme as Thomas said and as we know HCI is at the core a lot of discussions this year of course, the ripple effect of the global pandemic has more people working remotely that's been a tailwind for many of the core offerings, but help us understand, how's that building out some of the new things that we should look at in the HCI. >> Yeah ,thanks too for Nutanix and our customers a lot of it begins with HCI, right. And what we've seen in the past year is really aggressive adoption for HCI, particularly in core data center and private cloud operations and customers are moving to HCI in our not only for greater simplicity, but to get faster provisioning and scaling. 
And I think from a workload perspective, we see two things, that ACI is being called upon to deliver even more demanding apps those with a really very low latency such as large scale database deployments. And we also see that HCI is expected to improve the economics of IT and the data center and specifically by increasing workload density. So we have a long history, a storied history of continually improving HCI performance. In fact every significant software release we've optimized the core data path and we've done it again. We've done it again with our latest HCI software release that we announced just this week as our next. The first enhancement we made was in 518, was to reduce the CPU overhead and latency for accessing storage devices such as SSD and NBME and we've done this by managing storage space on physical devices in the HCI software. So rather than rely on slower internal file systems and this new technology is called block store and our customers can take advantage of block store simply by upgrading to the new software released and we're seeing immediate performance gains of 20 to 25% for IOPS and latency. And then we built on top of that, we've added software support for Intel Optane by leveraging user space library, specifically SPDK or storage performance development kit. And SPDK allows Nutanix to access devices from user space and avoid expensive interrupts and systems calls. So with this support along with block store we're seeing application performance gains about this 56% or more. So we're just building our own a legacy of pushing performance and software and that's the real benefit of moving to HCI. >> And just to add to that too when it comes to run better I think one of the things that we think of running better is automation and operation then when it comes to automation and operation there are a couple of ways I would say significant announcements that we also did to. One is around Comm as a service. Comm is one of those products that our customers absolutely love and now with Comm as a service you have a SaaS plane, so you can just without installing anything or configuring anything you could just take advantage of that. And the other thing we also announced is something called Nutanix central and Nutanix central gives you the way to manage all your applications on Nutanix across all of your different clusters and infrastructure from a single place as well. So two big parts of a run better as well. >> Well, that's great and I've really, is that foundational layer, Madhukar if we talk about expanding out, running faster the other piece we've talked about for a few years is step one is you modernize the platform and then step two is really you have to modernize your application. So maybe help us understand that changing workload cloud native is that discussion that we've been having a few years now, what are you hearing from your customers and what new pieces do you have to expand and enable that piece of the overall stack? 
>> Yeah, so I think what you mentioned which is around cloud data the big piece over there is around Cybernetics's and they already had a carbon, so with carbon a lot of the things of complexities around managing cybernetics is all taken care of, but there are higher level aspects on it like you have to have observability, you have to have log, you have to have managed the ingress ,outgress which has a lot of complexity involved with, so if you're really just looking for building of applications what we found is that a lot of our customers are looking for a way to be able to manage that on their own. So what we announced which is carbon platform service enables you to do exactly that. So if you're really concerned about creating cloud native applications without really worrying too much about how do I configure the cybernetics clusters? How do I manage Histio? How do I manage all of that carbon platform service that actually encapsulates all of that to a sass plate So you can go in and create your cloud native application as quickly and as fast as possible, but just in a typical Nutanix style we wanted to give that freedom of choice to our customers as well. So if you do end up utilizing this what you can also choose is the end point where you want these application to run and you could choose any of the public clouds or the hyper scaler or you could use a Nutanix or an IOT as an endpoint as well. So that was one of the big announcements we've made. >> Great, Greg and Madhukar before we go on, it's one of the things that I think is a thread throughout but maybe doesn't get highlighted as much but security of course is been front and center for a while, but here in 2020 is even more emphasized things like ransomware, of course even more so today than it has been for a couple of years. So maybe could it just address where we are with security and any new pieces along there that we should understand? >> Yeah, I can start with that if I could. So we've long had security in our platform specifically micro-segmentation, fire walling individual workloads to provide least privilege access and what we've announced this week at .Next is we've extended that capability, specifically we've leveraged some of the capabilities in Xi beam and this is our SAS based service to really build a single dashboard for security operators. So with security central, again a cloud based SAS app, Nutanix customers can get a single pane from which they can monitor the entire security posture of their infrastructure and applications, it gives you asset reporting, asset inventory reporting, you can get automated compliance checks or HIPAA or PCI and others. So we've made security really easy in keeping with the Nutanix theme and it's a security central is a great tool for that security operations team so it's a big step for Nutanix and security. >> Yeah. >> To actually add on this one, one bit piece of security central is to make it easier, right. To see your various network bills and leverage the flow micro segmentation services and configure them on your different virtual machines, right? So it's really a key enabler here to kind of get a sense of what's going on in your environment and best configure and best protect and secure infrastructure. >> Thomas is exactly right. In fact, one of the things I wanted to chime into and maybe Greg you could speak a little bit more about it. 
One of my favorite announcements that we heard or at least I heard was the virtualized networking and coming from a cloud native world, I think that's a big deal. The ability to go create your virtual private cloud or VPCs and subnets and then be able to do it across multiple clouds. That's, something I think has been long time coming, so I was personally very, very pleased to hear that as well. Greg, do you want to add a little bit more? >> Yeah, that's a good point I'm glad you brought that up, when we talk about micro-segmentation that's one form of isolation, but what we've announced is virtual networking. So we really adopted some cloud principles, specifically virtual private clouds constructs that we can now bring into private cloud data centers. So this gives our customers the ability to define and deploy virtual networks or overlays this sort of sit on top of broadcast domains and VLANs and it provides isolation for different environments. So a number of great use cases, we see HCI specifically being relied upon for fast provisioning in a new environment. But today the network has always been sort of an impediment to that we're sort of stuck with physical network plants, switches and routers. So what virtual networking allows us to do is through APIs, is to create an isolated network a virtual private cloud on a self service basis. This is great for organizations that increasing operating as service providers and they need that tenant level segmentation. It's also good for developers who need isolated workspace and they want to spin it up quickly. So we see a lot of great use cases for virtual networks and it just sort of adds to our full stack so we've software defined compute, we've software defined storage, now we're completing that with software defining networking. >> And if I have it right in my notes the virtual networking that's in preview today correct? >> Yes, we announce it this week and we are announcing upcoming availability, so we have number of customers who are already working with us to help define it and ready to put it into their environments. The virtual private network is upcoming from Nutanix. >> Yeah, so I absolutely I've got, Mudhakar, I've got a special place in my heart for the networking piece that's where a lot of my background is, but there was a different announcement that got a little bit more of my attention and Thomas we're going to turn to you to talk a little bit more about clusters. I got to speak with Monica and Tarkin, ahead of the conference when you had the announcement with AWS, for releasing Nutanix clusters and this is something we've been watching for a bit, when you talk about the multicloud messaging and how you're taking the Nutanix software and extending it even further that run anywhere that you have talk about in the conference. So Thomas if you could just walk us through the announcements as I said something we've been excited, I've been watching this closely for the last couple of years with Nutanix and great to see some of the pieces really starting to accelerate. 
>> Well absolutely and as you said this is something that's been core to the strategy in terms of getting and enabling customers to go and do more with hybrid cloud and public cloud and if you go back a few weeks when we announced clusters on AWS this was a few weeks back now, we talked of HCI is a prerequisite to getting the most of your hybrid cloud infrastructure, which is the new HCI in our mind and what we covered at .Next was this great announcement with Microsoft Azure, right, and just leveraging their technologies bringing some of their control plan onto our cloud platform but also now adding clusters on Azure and announcing that we'll be doing this in a few months. Enabling the customers to go and take the same internet cloud platform the same consistent set of operations and technology services from data center services, DevOps services and desktop services and deploying those anywhere on premises, on AWS or on Microsoft Azure and again for us cloud is not a destination. This is not a now we just accomplished something. This is a new way of operating, right? And so it's touching, giving customers options in terms of where they want to go to count so we keep on adding new counts as we go but also it's a new form of consuming infrastructure, right? From an economist perspective you probably know, you don't extend it you're pressing into the moving to is fiction based offering on all of our solutions and our entire portfolio and as we go and enable these clusters offering, we're not making consumptions more granular to non customers do not consume our software on an hourly basis or a monthly basis. So again this is kind of that next step of cloud is not just technology, it's not a destination it's a new way of operating and consuming technology. >> Why think about the flexibility that this brings to existing new techs customers it gives them enormous choices in terms of new infrastructure and whether they set up new clusters. So think about in text a customer today. They may have data centers throughout the US or in Europe and in Asia Pacific, but now they have a choice rather than employ their Nutanix environment, in an existing data center or Colo, they can put it into AWS and they can manage it exactly the same. So it just provides near infinite choice for our customers of how they deploy HCI and our full software stack. In addition to the consumption that Thomas talked about, consumption choices. >> Yeah, just to add to that again I should have said this is also one of my favorite announcements as well, yesterday. We Greg, myself, Thomas, we were talking to some industry analysts and they were talking about, Hey, you know how there is a need for pods where you have compute, you have network and you have storage altogether, and now people want to run it across multiple different destination but they have to have the freedom of choice. Today using one different kind of hardware tomorrow you want to use something else. They should be portability for that, so with clusters, I think what we have been able to do is to take that concept and apply it across public cloud. So the same whether you want to call it a pod or whatever but compute, storage, networking. Now you have the freedom of choice of choosing a public cloud as an end point where you want to run it. So absolutely one of those I would say game-changing announcements that we have made more recently. >> Yeah-- >> To close that loop actually and talk about portability as enabling quality of occupations. 
But also one thing that's really unique in terms of how we're delivering this to customers is probability of licenses. The fact that you have a subscription term license for on premises you can very easily now repay the license if you decide to move a workload and move a cluster from one premises to your count of choice, that distance is also affordable. But so again, full flexibility for these customers, freedom of choice from a technology perspective but also a business perspective. >> Well, one of the things I think that really brings home how real this solution is, it's not just about location, Thomas as you said, it's not about a destination, but it's about what you can do with those workloads. So one of the use cases I saw during the conference was talking about a very long partner of a Nutanix Citrix and how that plays out in this clusters type of environment so maybe if you could just illustrate that as one of those proof points is how customers can leverage the variety of choice. >> Yeah, we're very excited about this one, right? Because given what we're currently going through as a humanity right now, across the world with COVID situation, and the fact that we all have now to start looking at working from home, enabling scaling of existing infrastructure and doing it without having to go and rethink your design enabling this clusters in our Citrix solution is just paramount. Because what it will ask you to do is if you say you started and you had an existing VDI solution on premises using Citrix, extending that now and you putting new capacity in every location where you can go and spin this up in any AWS region or Azure region, no one has to go and the same images, the same processes, the same operations of your original desktop infrastructure would apply regardless of where you're moving now your workforce to work remotely. And this is again it's about making this very easy and keeping that consistency operations, from managing the desktops to managing that core infrastructure that is now enabled by using different clusters on Azure or AWS. >> Well, Thomas back in a previous answer, I thought you were teeing something up when you said we will be entering a new era. So when you talk about workloads that are going to the cloud, you talk about modernization probably the hottest area that we have conversations with practitioners on is what's happening in the database world. Of course, there's migrations, there's lots of new databases on there, and Nutanix era is helping in that piece. So maybe if we could as kind of a final workload talk about how that's expanding and what updates you have for the database. >> Absolutely and so I mean Eras is one of our key offerings when it comes to a database automation and really enabling teams to start delivering database as a service to their own and users. We just announced Era 2.0 which is now taking Era to a whole other level, allowing you to go and manage your devices on cross clusters. And this is very topical in this current use case, because we're talking of now I can use era to go in as your database that might be running on premises for production and using Era to spin up clones for test drive for any team anywhere potentially in cloud then using clusters on the all kind of environments. 
So those use cases of being which more leverage the power of the core is same structure of Nutanix for storage management for efficiency but also performance and scaling doing that on premises and in unique cloud region that you may want to leverage, using Era for all the automation and ensuring that you keep on with your best practices in terms of deploying and hacking your databases is really critical. So Era 2.0 great use cases here to go and just streamline how you onboard databases on top of HCI whether you're doing HCI on premises or HCI in public town, and getting automation of those operations at any scale. >> Yeah, hey Tom has mentioned a performance and Era has been a great extension to the portfolio sitting on top of our HCI. As you know Stu database has long been a popular workload to run it all HCI, particularly Nutanix and it extends from scalability performance. A lot of I talked about earlier in terms of providing that really low latency to support the I-Ops, to support the transactions per second, that are needed these very demanding databases. Our customers have had great success running SAP, HANA, Oracle SQL server. So I think it's a combination of Era and what we're doing as Thomas described as well as just getting a rock solid foundational HCI platform to run it on and so that's what we're very excited about to go forward in the database world. >> Wonderful, well look, we covered a lot of ground here. I know we probably didn't hit everything there but it's been amazing to watch Nutanix really going from simplicity at its core and software driving it to now that really spiders out and touches a lot of pieces. So I'll give you each just kind of final word as you having conversations with your customers, how do they think of Nutanix today and expect that we have a little bit of diversity and the answers but it's one of those questions I think the last couple of years you've asked when people register for .Next. So it's, I'm curious to hear what you think on that. Maybe Greg if we start with you and kind of go down the line. >> Yeah, for me what sums it up is Nutanix makes IT simple, It makes IT invisible and it allows professionals to move away from the care and feeding structure and really spend more time with the applications and services that power their business. >> And I agree with Greg I think the two things that always come up, one is the freedom of choice, the ability for our customers to be able to do so many different things, have so many more choices and we continue to do that every time we add something new or we announce something new and then just to add onto what Greg said is to try and make the complexities invisible, so if there are multiple layers, abstract them out so that our customers are really focused on doing things that really matter versus trying to manage all the other underlying layers, which adds more complexity. >> Yeah You could just kind of send me to it up right. 
In the end, Nutanix is becoming much more than HCI as hyperconverged infrastructure — we're now taking it to another level with hybrid cloud infrastructure. And when you look at what's been built over the last few years, the portfolio pieces that we now have, I think there's just growing recognition that Nutanix actually delivers this cloud platform that you can all leverage to go and get to a consistency of services, operations and business operations in any location: on premises, through our network of service providers, through our Nutanix cloud offerings, and on the hyperscalers with Nutanix Clusters. So I think things are really changing, the company is getting to a whole other level, and I couldn't be more excited about what's coming over the next few years as we keep on building and scaling our cloud platform. >> And I'll just add my perspective as a long-time watcher of Nutanix. For so long, IT was the organization where you typically got an answer of no, or they were very slow to be able to react. There was actually a quote from Alan Cohen at the first .Next down in Miami, he said, "we need to take those nos and those slows and get them to say go." So ultimately, what we need is, of course, reacting to the business — taking those people, eliminating some of the things that were burdensome or took up too much time, and freeing them up to be able to really create value for the business. Want to thank Greg, Madhukar, Thomas — thank you so much for helping us wrap up. theCUBE is always thrilled to be able to participate in .Next, a great community, customers really engaged, and great to talk with all three of you. >> Thank you. >> All right, so that's a wrap for theCUBE's coverage of the Nutanix Global .Next digital experience. Go to thecube.com; thecube.net is the website where you can go see all of the previous interviews we've done with the executives, the partners, the customers. I'm Stu Miniman, and as always, thank you for watching theCUBE.

Published Date : Sep 9 2020



Joe Fitzgerald, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hi, and welcome back. I'm Stu Miniman, and this is theCUBE's coverage of KubeCon CloudNativeCon 2020, the Europe virtual edition. Of course, Kubernetes won the container wars, and as we went from managing a few containers to managing clusters, to many customers managing multiple clusters, that can get more complicated. So to help understand those challenges and how solutions are being put out to solve them, happy to welcome back to the program one of our CUBE alumni: Joe Fitzgerald is the vice president and general manager of the management business unit at Red Hat. Joe, good to see you again. Thanks so much for joining us. >> Thanks for having me back, Stu. >> All right, so at Red Hat Summit, one of the interesting conversations you and I had was talking about Advanced Cluster Management, or ACM of course. That was some people and some technology that came over to Red Hat from IBM post-acquisition. It was in tech preview, so give us the update. What's the news? And, you know, just level set for the audience what cluster management is. >> Sure. So Advanced Cluster Management, or ACM, basically is a way to manage multiple clusters across even different environments, right? As people have adopted Kubernetes — and you know, we have several thousand customers running OpenShift — they're starting to push it in some very, very big ways. And so what they run into is a scale issue. They need better ways to manage and maintain those environments, and ACM is a huge way to help manage those environments. It was in early availability back at Summit at the end of April, and in just a few months now it's generally available. We're super excited about that. >> Well, congratulations on moving that from technical preview to general availability so fast. What can you tell us? How many customers have used this? What have you learned in talking to them about this solution? >> So, first of all, we were really pleasantly surprised by the amount of people that were interested in the tech preview. A tech preview is not a product that's ready to use in production yet, so a lot of times accounts are not interested — they want to wait for the production version. We had over 100 customers in our tech preview, across not only geographies all over the world — Asia, America, Europe, the US — but across all different verticals. There's a tremendous amount of interest in it. I think that just shows, you know, how applicable it is to these environments people are trying to manage. So tremendous uptake, we got great feedback from that, and in just a few months we incorporated that feedback into the now generally available product. So, great uptake during the tech preview. >> Excellent. Bring us inside a little bit, you know — when would I use this solution? If I just have a single cluster, does it make sense for me? Is it only for multi-cluster? You know, what's the applicability of the offering? >> Yes, so even for
Also, policy based management the ability to enforced and fig policies and enforce compliance across even your single cluster to make sure that stays perfect in terms of settings and configuration and things like that. Any other application. Lifecycle management The ability to deploy applications in more advanced way, even if you're on a single cluster, gets even better for multi cluster. But you can deploy your APS to just the clusters that are tagged a certainly, but lots of capabilities, even for application, even a single cluster. So we find even people that are running a single cluster need it askew, deployed more more clusters. You're definitely >>that's great. Any you mentioned you had feedback from customers. What are the things that I guess would be the biggest pain points that this solves for them that they were struggling with in the past? Well, >>first of being able to sort of Federated Management multiple clusters, right, as opposed to having to manage each cluster individually, but the ability to do policy based configuration management to just express the way you want things to stay, have them stay that way to adopt a more of a getups ethnology in terms of how they're managing their your open ships environments. There's lots more feedback, but those were some of the ones that seem to be fairly common, repetitive across the country. >>Yeah, and you know, Joe, you've also gotten automation in the management suite. How do I think about this? How does this fit into the broader management automation that customers were using? Well, >>I think as people in employees environments. And it was a long conversation about platform right? But there's a lot of things that have to go with the platform and red hats actually in very good about that, in terms of providing all the things you necessary that you would find necessary to make the five form successful in your environment. Right? So I was seen by four. We need storage, then development environments management, the automation ability to train on it. We have our open innovation labs. There's lots of things that are beyond the platform that people acquire in order to be successful. In the case of management automation, ACM was a huge advancement. Terms had managed these environments, but we're not done. We're gonna continue to ADM or automation integration with things like answerable mawr, integration with observe ability and analytics so far from done. But we want to make sure that open ship stays the best managed environment that's out there. I also do want to make a call out to the fact that you know, this team has been working on this technology for the past couple of years. And so, you know, it's only been a red hat for five months. This technology is actually very mature, but it is quite an accomplishment for any company to take a new team in a new technology. And in five months, do what Red Hat does to it in terms of making it consumable for the enterprise. So then kudos continue. Really not >>well. And I know a piece of that is, you know, moving that along to be open source. So, you know, where are we with the solution? Now that is be a How does that fit in tow being open? Source. >>Eso supports that are open source Already. When the process of open sourcing the rest of it, as you've seen over time read, it has a perfect record here of acquiring technologies that were either completely closed Source Open core in some cases where part it was open. It was closed. But that was the case with Ansell a few years ago. 
But basically our strategy is that everything has to be open source. That takes time — we're in the process of going through all the steps necessary to open source the parts of ACM — and we think we'll find lots of interest in the community around the different projects inside of it. >> Yeah. How about — one of the bigger concerns talking to customers in general about Kubernetes, even more in 2020, is security. How does ACM help customers make sure that their environment is secure? >> Yeah, so you know, configuration policies and enforcement: you can actually say with ACM that you want things to be a certain way, and if somebody changes them, it will automatically either warn you about it or the enforcement will set them back. So it's got some very strong security chops in terms of keeping the configurations just the way you want. That gets harder as you get more and more clusters — imagine trying to keep everything at the same levels, settings, software, all the parts and pieces. So in effect, having ACM do this across any and all of your clusters really takes the burden off people trying to maintain secure environments. >> Okay, and it's generally available now. Anything you can share about how the solution is priced, how it fits into the broader OpenShift offerings? >> Yes. So it's an add-on for OpenShift, and it's priced very similarly to OpenShift in terms of, you know, core-based pricing. One thing I do want to mention about ACM, which maybe doesn't come out just from a description of the product, is the fact that ACM was built from scratch for Kubernetes environments and optimized for OpenShift. We're seeing a lot of competition out there that's taking products built for other environments and trying to sort of bend and coerce them into managing Kubernetes environments. We don't think people are going to be successful at that — they haven't been successful to date. So one of the things that we find is sort of a competitive differentiator for ACM in the market is the fact that it was built from scratch, designed for Kubernetes environments. So it is really well designed for the environment it's trying to manage, and we think that's going to keep it a competitive edge. >> Well, always, Joe, when you have a new architecture, you can take advantage of things. Any examples of what a new architecture like this can do that an older architecture might struggle with or not be able to do? Even though, when you look at the product sheet, the words sound similar, when you get underneath the covers it's just not a good architectural fit. >> Yeah, so it's very similar to the shift from physical to virtual. You can't have a paradigm shift in the infrastructure and not have a corresponding paradigm shift in the management tools. So the way you monitor these environments, the way you secure them, the way they scale and expand, the way we do resource management and security — all of those things are vastly different in this environment compared to, let's say, a virtual or physical environment. And this has happened many times in the past: a paradigm shift in the infrastructure or application environment drives a commensurate paradigm shift in management. That's what you're seeing here, and that's why we thought it was super important to have management that was built for these environments by design, so it's not trying to do sort of unnatural things to manage the environment. >> Yeah, I wondered — I'd love to hear just a little bit of your philosophy as to what's needed in this space.
You know, I look back to previous generations — look at virtualization. Microsoft did very well at managing their environment, VMware did the same for theirs. But, you know, we've had generations where solutions tried to be the management of everything, and that can be challenging. So, you know, what's Red Hat and ACM's position, and what do we need in the Kubernetes space, you know, today and for the next couple of years? >> So Kubernetes itself is the automation platform, as you talked about, you know, early on in this segment. Kubernetes itself provides a lot of automation around container management. What ACM does is build on top of that: it captures, you know, data and events and configuration items in the environment, and then allows you to define policies. People want to move away from manual processes, certainly, but they want to be able to get to a more stateful expression of the way things should be. They want to be able to use more of a, you know, sort of GitOps kind of philosophy, where they say: this is how I want things today, check the version in, keep it at that level, if it changes put it back, tell me about it. The era of chasing, you know, management with people is changing — you're seeing a huge premium now on automation, automation at all levels. And I think this is where ACM's automation on top of OpenShift automation, and down the road combined with things like Ansible, will provide the most automated environment you can have for these container platforms. So it's definitely changing. You're seeing observability, AIOps, GitOps types of philosophies coming in — these are very different than management in the past — and you're seeing innovation across the whole management landscape in the Kubernetes environment, because the physics of these environments are different than the previous ones. We think that with ACM, Ansible, our Insights product, and some of our analytics, we've got the right thing for this environment. >> And can you give us a little bit of a look forward, you know? How often should we expect to see updates on this? Of course, you mentioned getting feedback from the community from the technical preview to GA. So give us a little bit of a look: what should we be expecting to see from ACM down the road? >> So the ACM team is far from done, right? They're going to continue to rev — you know, just like we rev OpenShift at a very, very fast pace, we're going to be revving ACM at a fast pace also. You'll see a lot of integration between ACM and the partners we're already working with in the application monitoring space, the analytics space, and security automation. I would expect to see, in the AnsibleFest timeframe, which is mid-October, some integration between Ansible and ACM around things that Ansible does very well combined with what ACM does. And we'll continue to push out on more cluster management, more policy-based management, and certainly advancing the application lifecycle that people are very interested in. They want to move faster, with a higher degree of certainty, in their application deployments, and ACM is right there. >> Just a final question for you, Joe: you know, in the broader space, looking at management in this kind of KubeCon CloudNativeCon ecosystem — final words you want customers to understand about where we are today and where we need to go down the road.
>> So I think, you know, the market and industry has decided Kubernetes is the platform of the future, right? And certainly we were one of the earliest to invest in container management platforms with OpenShift; we were one of the first to invest in Kubernetes. We have thousands of customers running OpenShift across all industries and geographies, so we bet on that a long time ago. Now we're betting on the management and automation of those environments and bringing them to scale. And the other thing I think Red Hat is unique on is that we think people are going to want to run their Kubernetes environments across all different kinds of environments — whether it's on premises, physical or virtual, multiple public clouds, where we have offerings, as well as at the edge, right? So this is going to be an environment that's very, very ubiquitous, pervasive, deployed at scale, and so the management and automation of it become a necessity. And Red Hat is investing in the right areas to make sure that enterprises can consume Kubernetes, particularly OpenShift, in all the environments they want, at scale. >> All right. Excellent. Well, Joe, I know we'll be catching up with you and your team for AnsibleFest, coming in the fall. Thanks so much for the update, and congratulations to you and the team on the rapid progression of ACM now being GA. >> Thanks, Stu, appreciate it. We'll see you soon. >> All right, stay tuned for more coverage from KubeCon CloudNativeCon 2020 in Europe, the virtual edition. I'm Stu Miniman, and thanks, as always, for watching theCUBE.
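Two ideas recur throughout this conversation: deploying applications only to clusters that are tagged a certain way, and declaring a desired configuration that the management layer keeps enforcing when something drifts. The small Python sketch below illustrates both ideas generically; it does not use ACM's actual object model or APIs, and every cluster name, label, and setting in it is invented.

```python
# Generic illustration of two multi-cluster management ideas discussed above:
# (1) label-based placement, (2) declared desired state with drift correction.
# Cluster names, labels, and settings are invented for the example.

clusters = {
    "edge-paris":  {"labels": {"env": "prod", "tier": "edge"},
                    "config": {"image": "registry.example/app:1.3", "replicas": 2}},
    "dc-virginia": {"labels": {"env": "prod", "tier": "core"},
                    "config": {"image": "registry.example/app:1.4", "replicas": 3}},
    "lab-boston":  {"labels": {"env": "dev",  "tier": "core"},
                    "config": {"image": "registry.example/app:1.4", "replicas": 1}},
}

def select(selector: dict) -> list[str]:
    """Return clusters whose labels match every key/value in the selector."""
    return [name for name, c in clusters.items()
            if all(c["labels"].get(k) == v for k, v in selector.items())]

def enforce(desired: dict, selector: dict) -> dict:
    """Push the declared config to matching clusters; report any drift found."""
    drift = {}
    for name in select(selector):
        cfg = clusters[name]["config"]
        delta = {k: v for k, v in desired.items() if cfg.get(k) != v}
        if delta:
            drift[name] = delta      # tell me about it ...
            cfg.update(delta)        # ... and put it back
    return drift

# Declare: production clusters should run 1.4 with 3 replicas.
print(enforce({"image": "registry.example/app:1.4", "replicas": 3},
              {"env": "prod"}))
# -> {'edge-paris': {'image': 'registry.example/app:1.4', 'replicas': 3}}
```

A real system runs this loop continuously against live cluster APIs, but select, compare, correct, and report is the heart of the policy-driven, GitOps-style approach described above.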

Published Date : Aug 18 2020



Guatam Chatterjee, Tech Mahindra & Satyendra Gupta, Gov. of India | AWS Public Sector Partner Awards


 

>> Announcer: From around the globe, it's theCUBE. With digital coverage of AWS Public Sector Partner Awards. Brought to you by Amazon Web Services. >> Hi, I'm Stu Miniman, and welcome back to theCUBE's coverage of the AWS Public Sector Partner Awards. We're going to be digging in. This award is for the most customer obsessed migration and happy to welcome to the program two first time guests coming to us from India. First of all, from the partner with Tech Mahindra, we have Gautam Chatterjee. He is the vice president with Tech Mahindra, who's the winner of the award, and they've brought along their customer for this, that is Satyendra Gupta, who is the director of the CPWD, which is the Central Public Works Department, part of the government of India. Gentlemen, thank you so much for joining us. >> Thank you. >> All right, if we could, let's start with just a quick summary of what your organizations do. Gautam, we'll start with you. Tech Mahindra, I think most of our audience, you know, should be aware, you know, large, very well known organization. Congratulations to you and the team on the win. Tell us what your part of Tech Mahindra does. >> Okay. So, Tech Mahindra is a five billion dollar organization, and it's a part of Mahindra and Mahindra. Which is approximately at $22 billion evaluation worldwide. So, Tech Mahindra is primarily into IT services and consulting services for the information technology and information technology related works across the globe. We have got multiple offices, almost around 90 locations across the country, and we have gotten to operations worldwide in different verticals and different geographies. So, as a part of Tech Mahindra, I manage the central government that is the public sector business for Tech Mahindra, based out of New Delhi, in India. And we handle the complete large public sector organizations and different ministries which are coming into the government of India. >> Wonderful! Satyendra, obviously public works, relatively self explanatory, but, tell us a little bit about your organization, your roll, and, if you could, introduce the project that your group worked with Tech Mahindra on. >> Okay, so, Central Public Works Department is a 165 year old organization that was aided by large technology. In 1854 was when this organization started working. The primary responsibility of this organization is to build the consistent works of the government of India. Primarily in the buildings sector. We see predominantly, Tech Mahindra will see predominantly you aiding the department, is that technical add-on to the government of India regarding these concepts and matters. Right, so, this department is spread across the country, from north, and in the south, Kerala. And from Gujarat in the west to another place in the east. This department has departments across the country. We had to use, with all tech with all the top companies we had thought (indistinct) is that only the building but we created and perfected from the government of India, like, the stadiums. That is not so many, wanted something that would have been very useful regarding the tsunami. Tsunami came so the government, the projects we picked up would be constantly small houses that we'd have to give it to. And CPWD, using the info technology since long, but we have it all along (indistinct) in value. 
Now, last year, it had been decided that we would implement the IT system in the CPWD very hard softwares and will be implementing a single use form, and everything will be connected to each other, too. So, this is what the internet for part of the implementation is. As far as myself is concerned, I am in charge of the implementation of this year for the system in the department. From it's inception to the end, and detailing the whole of the process until all the onboarding of the Tech Mahindra, and the implementation of. And then, from there after waiting a minute to, in the department to make it adaptable, we tell everybody. These are the roles that I have. >> All right, Gautam, if you could, migration's obviously a big part of what I expect Tech Mahindra is helping customers with. Help frame up, you know, the services that you're doing, talk a little bit, if you could, the underlying AWS component of it, and, you know, specifically, give us a little bit about Tech Mahindra's role in the public works project that we were just talking about. >> Okay. So, coming to the relationship and the journey which you have started for the CPWD project, it's around a year, year and a half work when you have started interacting with CPWD. By understanding their business challenges and the business department, which is primarily automating the whole processes. And there are multiple applications, multiple processes which they wanted to automate. Now, definitely once their automation comes into the picture, you have to take place the complete automation of the applications, the complete automations of the infrastructure, and the complete automations of the UI part of it. That is the user perceptions, user interface, right? So, all three has been covered by this company to automation process. As a part of the system integrations business, our main objective is to plan and bring the respective OEMs, who are the best of the great, our technology providers, to bring them to utilize those platforms, and to utilize those course applications, so that, by utilizing those technologies and applications, we can automate the complete process and provide the complete drill down management view to CPWD for their inter-operations and application. In the process of doing that, what we have done, we have brought in SAP as an ace for HANA implementation, which is the primary business applications which will be implemented in CPWD. The inter-user log-in and user interface will be done through a portal, and that portal will be utilizing the Liferay user portal, which will be the front end user interface. There will be an eTendering application, which will be also through one of my large general partners, who will be working together for us for the eTendering applications, which is also a part of ours, and 40 of the whole automation process. And inter-application, eTendering, the portal, and all the applications, as a matter of fact, will be hosted to the cloud on AWS platform. Now, once you're talking about the AWS platform, that means it will implement the complete infrastructure of the service, and the complete platform as a service. So, all the computed storage, everything will be deploying from the AWS cloud, and necessarily all the platform in terms of your database applications, all third-party tools to do the performance testing, management, monitoring. Everything will be provided as a platform of the service by AWS. 
So, we, engaged AWS from the beginning itself, the AWS team, and SMP team, both major OEMs worked with us very hand and gloves from day one. And we had multiple interactions with the customer. We understood the challenges. We understood the number of users, number of iterations, number of redundancy, number of high, I mean, the kind of high availability they will require in times of the business difficulty of the applications, and based on which, together, along with AWS, Tech Mahindra, and SAP, all three of us together, and I have the complete solutions, architecture, and the optimizations of the whole solutions, so that overall impact comes to CPWD as the customer, the ultimate results, and the business output they deserve. You know? So, that is where we actually interacted. We have got the interactions with AWS solutions team, AWS architect team, along with our interface architect and the solutions team, who worked very closely along with the customers, them desizing so that it exactly matches the requirement not only for today, down the line for the next four years, because the complete implementation cycle is 18 months, and after that, Tech Mahindra is a prime service provider. We'll provide the four years after implementation support to CPWD, because we all understand that any government department, they need government understanding. These kind of business applications implementation is a transformation. Now, this transformation definitely cannot happen overnight. It has to happen through a process, through a cycle, and through a phase, because there will be the users who will be the proactive users who will start using the inter-applications from the beginning, and, gradually, the more and more success, the more and more user friendliness will come into the whole picture. Then, participation for multiple users, multiple stakeholders will come on board. The moment that comes in, the users load, the user's participation and user's load, both into the platforms, both into the infrastructure will keep on changing, keep on increasing, and that is why our role will be how to manage the complete infrastructure, how to manage the complete platform throughout the journey of this transformation of five and a half years. And that is what the exact role as a prime and large MSP Tech Mahindra will perform for the next five and a half years along with AWS, along with CPWD, and along with SAP. (coughs) >> All right, well, Satyendra, Gautam just laid out, I think, a lot of the reasons why they won the customer obsessed award from AWS on this. You know, I think back to earlier in my career and you talk about NSAP rollout, and it's not only the length of time that it takes for the rollout, and the finance involved, but what Gautam was talking about is the organizational impact and adoptions. So, I would love to hear from your side. You know, what were the goals that you had coming into this? It sounds like getting greater adoption inside the organization for using these services. Give us your insight as to, you know, how that roll-out has been going, the goals you had, how you're meeting them, any success metrics that you use internally to talk about how the project has gone so far. >> We implement the Atlas System in the CPWD, the activities going on since a long time. It was more than one and a half years had been passed, we have angers, one of them concerning our ideas and the way we transform our business processes. They have some certain ideas and that the app implementation is the last one. 
Most of them have been implemented and we have started, started to get ideas to implement some, but we had bad interactions with all the leading IT service providers in the country, along with all the leading cloud service providers in the country, and this, of course, all the leading EIP services, OEMs, EIP, OEMs, so and so. But, it's a long journey, we have a trial approximately half of the deadline from there. To inform returning process, Tech Mahindra has been appointed as the system integrator and they have come with all the sorts of the services that they are offering, for example, they plan to use SAP, and EIP will be in there, as well. This "one life" system for the portal, eTendering, is a primary credit, has been done. And overall everything has been hosted on the AWS cloud platform. So, it's just that, when could we have. And, everybody knows that Amazon is the leading cloud service provider with the largest of the facilities available with us, so, during this journey, we have got lots of support from the AWS via lots of the credit regarding us to get it set up with the AWS team, and continuously boosted our office and explained each of our queries on this, and now, from the march onwards, Tech Mahindra has started the implementation process we are in. More than four months have been passed since then. And we have covered a lot. The whole objective of this implementation is all our activities will be done on this EIP system, only that if somebody is working in the CPWD, you will activate that. Work in the CPWD on the EIP, or you will not be able to work at all. This is a light goal and whole system. But, all of our system is going to be automatic. Earlier, we were having a different idea because when we were working in the silos, everything we wanted to be integrated with each other, and the time that will be invested to make the entry of the different activity at a different time and with the applications, applicants are not talking to each other, they are working in the silos, but that will go away. So, what we are expecting everything will be on the EIP system, as well, and we are expecting the efficiency of the CPWD unit is going to be increased tremendously. Apart from this, they will handle a more number of the works compared to what they were handling and the time in it since. And everything will revolved around the click of the buttons and we need not to go and ask from anybody to give the reports, et cetera. So, problem management must peak, too. By the click of the button, we will also be able to get all the inputs, all the reports with what is going on across the country. And that idea. So, it is going to be really a transformation to the working of the department, and, in whole, the entire public work center of this country is going to be benefited out of this. This has been like a lighthouse today. This EIP implemented in the CPWD is the lighthouse up ahead, so there are more than 30 public work departments, said public work departments are working, so this is going to create and open a window for everybody there. Once it is a success of this implementation, we'll have it far reaching implication on the implementation of that EIP system or a similar idea for implications in the public works or in the whole country. So, so, there's lots of these stakes our there. To any and, hopefully, with the help of Tech Mahindra, with the help of SAP, AWS, and Amazon, one day they will be able to implement successfully and we will, we are going to get the benefit out of. 
Everybody is going benefit, not only the Central Public Works Department, but all of our stakeholders. All the stakeholders in terms of businesses, in terms of their reach to the Public Works, and there is a new door to open because the IT had not been leveraged the way in the Public Works Department in the central department or the state government. The other IT system hadn't used EIP. It is going, it's a lighthouse headed to success. We'll have a far reaching implication for everybody. >> Well, I tell you, Satyendra, that's been the promise of cloud, that we should be able to do something, and the scalability and repeatability is something that we should be able to go. Gautam, I want to give you the final word on this. You know, speak to how does cloud, how do we enable this to be able to scale throughout many groups within the organization without having to be, you know, as much work, you know. I think about traditional IT, it's, well, okay, I spend a project, I spend so much time on it, and then every time I need to repeat it, I kind of, you know, have that same amount of work. The, you know, labor should go down as we scale out in a cloud environment. Is that, what you feel, the case? You know, help us understand how this lighthouse account will expand. >> Okay. So, any cloud, you know, have initiative nowadays into any organization. It depends. It primarily benefits in both the ways. Number one, the organization doesn't require to invest up front on the capital expenditure part of it. That's very important. Number two, the organization has got the flexibility to scale up and scale down based on the customer requirements. Within a click of the mouse. It doesn't take any time. Because the inter-positioning of the infrastructure is available with the cloud infrastructure service provider. And, similarly, the scaling of the platforms, that's also available with the cloud infrastructure provider. So, once you do the complete mapping requirement and the sizing for the entire tenure of the project, then the provisioning and deprovisioning is not a matter of time, it can happen with a click of mouse. That's number one. Number two, it's become a challenging activity for any government organization to have their own IT set-up. To manage such a huge, mammoth task of the entire infrastructure, applications, services, troubleshooting, 24 by 7, everything. So, that's not expected from the large government organizations, as such, because that's not their business. Their business is to run the country, their business to run the organization, their business to grow the country's different ideas. And, the IT services organizations, like Tech Mahindra, is there to support those kind of automation processes. And, the platforms which are available on the cloud nowadays, that's the ease of inter-applications, inter-management, monitoring, availability of the entire infrastructure, that makes use of the whole, complete system. So, it all works together. It's not a thing that the system integration organizations already will do the all new reform. It has to happen in synergies. 
So, application has to work together, infrastructure has to be available together, the management, monitoring has to happen, scaling up, scaling down has to happen, all kinds of updates, upgrades, and badges down the line for the company, continuing of the whole contract has to happen so that the system, once up and running and benefited, it's performing at least for a period of the next five years, as the tenure of the contract, in multiple department happens. Now, what Mr. Gupta was saying, it's very very true that CPWD is the kind of motherly organizations for all public works departments in the country. And, all the public works departments in the country are eagerly looking at this project. Now, it is very important for all of us, not only for Tech Mahindra, Tech Mahindra, SAP, Liferay, and AWS, together, to work and make this project as a success, because it is not a reason that, as a simple customer, this project has to be successful. It's a flexible project for the government of India, and it's been monitored by Didac Lee, the government of India officials, and top ranking bodies on a day in and day out basis, number one. Number two, if we become successful together in this project, there will be an avenue for what Satyendra Gupta has said. That all state PWDs will be open to everybody. They will try and adopt, and they will try and implement a similar kind of system to all the respective states in the country. So it's a huge opportunity in terms of technology enhancement, automations, infrastructure, applications, and moreover, as a service provider, to provide the services to all these bodies, together, which, I feel, it's a huge huge opportunity for all of us together, and we are confident that we will work together, hand in gloves, the way we have done from the day one of this initiative, and we'll take it forward. >> All right, well Satyendra, thank you so much for sharing the details of your project, wish you the best of luck with that going forward. And, Gautam, congratulations again to Tech Mahindra for winning the most customer obsessed migration solution. Thank you both for joining. >> Both: Thank you. >> Thank you very much. >> Thank you very much. >> All right, and thank you for joining. I'm Stu Miniman, this is theCUBE's coverage of AWS Public Sector Partner Awards. Thanks for watching. >> Gautam: Thank you very much. (bright upbeat music)
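Gautam's point about scaling capacity up and down "with a click of the mouse" is, in practice, an API call against the cloud provider. Below is a minimal, hedged sketch using the AWS SDK for Python (boto3); the region, AMI ID, instance type, and counts are placeholders for illustration and do not describe the actual CPWD deployment.

```python
import boto3

# Placeholder region and image -- illustrative values only.
ec2 = boto3.client("ec2", region_name="ap-south-1")

def scale_up(count: int) -> list[str]:
    """Provision additional application servers on demand -- no upfront capital expenditure."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="m5.xlarge",
        MinCount=count,
        MaxCount=count,
    )
    return [inst["InstanceId"] for inst in resp["Instances"]]

def scale_down(instance_ids: list[str]) -> None:
    """Release capacity when load drops, so you only pay for what is actually used."""
    ec2.terminate_instances(InstanceIds=instance_ids)

# e.g. add four servers for a peak period, then release them afterwards:
# ids = scale_up(4)
# scale_down(ids)
```

The provisioning and deprovisioning Gautam describes is exactly this kind of call, wrapped in the sizing and QoS planning the system integrator does up front.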

Published Date : Aug 6 2020



Krishna Doddapaneni and Pirabhu Raman, Pensando | Future Proof Your Enterprise 2020


 

(upbeat music) >> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, I'm Stu Miniman, and welcome to this CUBE conversation. We're digging in with Pensando. Talking about the technologies that they're using. And happy to welcome to the program, two of Pensando's technical leaders. We have Krishna Doddapaneni, he's the Vice President of Software. And we have here Pirabhu Raman, he's a Principal Engineer, both with Pensando. Thank you so much for joining us. >> Thank you Stu. >> All right. >> Thank you for having us here >> Krishna, you run the Software Team. So let's start there and talk about really the mission and shortly obviously, bring us through a little bit of architecturally what Pensando was doing. >> To get started, Pensando we are building a platform, which can automate and manage the network storage and security services. So when we talk about software here, it's like the better software as you start from all the way from bootloader, to all the way it goes to microservices controller. So the fundamentally the company is building a domain specific processor called a DSP, that goes on the card called DSC. And that card goes into a server in a PCIe slot. Since we go into a server and we act as a NIC, we have to do drivers for Windows, all the OS' Windows, Linux, ESX and FreeBSD. And on the card itself, the chip itself, there are two fundamental pieces of the chip. One is the P4 pipelines, where we run all our applications, if you can think like in the firewalls, in the virtualization, all security applications. And then there's Arm SoC, which we have to bring up the platform and where we run the control plane and data and management plane so that's one piece of the software. The other big piece of software is called PSM. Which kind of, if you think about it in data center, you don't want to manage, one DSC at a time or one server at a time. We want to manage all thousands of servers, using a single management and control point. And that's where the test for the PSM comes from. >> Yeah, excellent. You talked about a pretty complex solution there. One of the big discussion points in the networking world and I think in general has been really the role of software. I think we all know, it got a little overblown. The discussion of software, does not mean that hardware goes away. I wrote a piece, many years ago, if you look at how hyperscalars do things, how they hyper optimize. They don't just buy the cheapest, most generic thing. they tend to configure things and they just roll it out in massive scale. So your team is well known for, really from a chip standpoint, I think about the three Cisco spin-ins. If you dug underneath the covers, yes there was software, but there was an Async there. So, when I look at what you're doing in Pensando, you've got software and there is a chip, at the end of the day. It looks, the first form factor of this looks like, a network card, the NIC that fits in there. So give us in there some of the some of the challenges of software and there's so much diversity in hardware these days. Everything getting ready for AI and GPUs. And you listed off a bunch of pieces when you were talking about the architecture. So give us that software/hardware dynamic, if you would. >> I mean, if you look at where the industry has been going towards, right, I mean, the Moore's law has been ending and Dennard scale is a big on Dennard scaling. 
So if you want to set all the network in certain security services on x86, you will be wasting a bunch of x86 cycles. The customer, why does he buy x86? He buys x86 to run his application. Not to run IO or do security for IO or policies for IO. So where we come in is basically, we do this domain specific processor, which will take away all the IO part of it, and the computer, just the compute of the application is left for x86. The rest is all offloaded to what we call Pensando. So NIC is kind of one part of what we do. NIC is how we connect to the server. But what we do inside the card is, firewalls, all the networking functions: SDNs, load balancing in all the storage functions, NVMe virtualization, and encryption of all the packets, data of data at rest and data of data in motion. All these services is what we do in this part. And you know, yes, it's an Async. But if you look at what we do inside, it's not a fixed Async. We did work on the previous spin-ins as you said, with Async, but there's a fundamental difference between that Async can this Async. In those Asyncs for example, there's a hard coded routing table or there's a hard coded ACL table. This Async is a completely programmable. It's more like it's a programmable software that we have domain specific language called P4. We use that P4 to program the Async. So the way I look at it, it's an Async, but it's mostly software driven completely. And from all the way from controllers, to what programs you run on the chip, is completely software driven. >> Excellent. Pirabhu of course, the big announcement here, HPE. You've now got the product. It's becoming generally available this month. We'd watch from the launch of Pensando, obviously, having HPE as not only an investor, but they're an OEM of the product. They've got a huge customer base. Maybe help explain, from the enterprise standpoint, if I'm buying ProLion, where now does, am I going to be thinking about Pensando? What specific use cases? How does this translate to the general and enterprise IP buyer? >> We cover of whole breadth of use cases, at the very basic level, if your use cases or if your company is not ready for all the different features, you could buy it as a basic NIC and start provisioning it, and you will get all the basic network functions. But at the same time in addition to the standard network functions, you will get always on telemetry. Like you will get rich set of metrics, you will get packet capture capabilities, which will help you very much in troubleshooting issues, when they happen, or you can leave them always on as well. So, you can do some of these tap kind of functionalities, which financial services do. And all these things you will get without any impact on the workload performance. Like the customers' application don't see any performance impact when any of these capabilities are turned on. So once this is as a standard network function, but beyond this when you are ready for enforcing policies at the edge or you're ready for enforcing stateful firewalls, distributed firewalling capabilities, connection tracking, some of the other things, like Krishna touched upon NVMe virtualization, there are all sorts of other features you can add on top of. >> Okay, so it sounds like what we're really democratizing some of those cloud services or cloud like services for the network, down to the end device, if I have this right. >> Exactly. >> Maybe if you could, networking, we know, our friends in network. 
We tend to get very acronym driven, to overlays and underlays and various layers of the stack there. When we talk about innovation, I'd love to hear from both of you, what are some of those kind of key innovations, if you were to highlight just one or two? Pirabhu, maybe you can go first and then Krishna would would love your follow up from that. >> Sure, there are many innovations, but just to highlight a few of them, right. Krishna touched upon P4, but even on the P4, P4 is very much focused on manipulating the packets, packets in and packets out, but we enhanced it so that we can address it in such a way that from memory in-packet out, packet in-memory out. Those kind of capabilities so that we can interface it with the host memory. So those innovations we are taking it to the standard and they are in the process of getting standardized as well. In addition to this, our software stack, we touched upon the always on telemetry capabilities. You could do flow based packet captures, NetFlow, you could get a lot of visibility and troubleshooting information. The management plane in itself, has some of the state of the art capabilities. Like it's distributed, highly available, and it makes it very easy for you to manage thousands of these servers. Krishna, do you want to add something more? >> Yes, the biggest thing of the platform is that when we did underlays and overlays, as you said there, everything was like fixed. So tomorrow, you wake up and come with a new protocol, or you may come up with a new way to do storage, right? Normally, in the hardware world, what happens is, Oh, you have to I have to sell you this new chip. That is not what we are doing. I mean, here, whatever we ship on this Async, you can continue to evolve and continue to innovate, irrespective of changing standards. If NVMe goes from one dot two to one dot three, or you come up with a new encapsulation of VXLAN, you do whatever encapsulations, whatever TLVs you would want to, you don't need to change the hardware. It's more about downloading new firmware, and upgrading the new firmware and you get the new feature. That is that's one of the key innovation. That's why most of the cloud providers like us, that we are not tied to hardware. It's more of software programmable processor that we can keep on adding features in the future. >> So one way to look at it, is like, you get the best of both worlds kind of a thing. You get power and performance of Async, but at the same time you get the flexibility of closer to that of a general purpose processor. >> Yeah, so Krishna, since you own the software piece of thing, help us understand architecturally, how you can deploy something today but be ready for whatever comes in the future. That's always been the challenge is, Gee, maybe if I wait another six months, there'll be another generation something, where I don't want to make sure that I miss some window of opportunity. >> Yeah, so it's a very good question. I mean, basically you can keep enhancing your features with the same performance and power and latency and throughput. But the other important thing is how you upgrade the software. I mean today whenever you have Async. When you have changed the Async, obviously, you have to pull the card out and you put the new card in. Here, when you're talking upgrading software, we can upgrade software while traffic is going through. With very minimal disruption, in the order of sub second. 
Right, so you can change your protocol, for example, tomorrow, we change from VXLAN to your own innovative protocol, you can upgrade that without disrupting any existing network or storage IO. I mean, that's where the power of the platform is very useful. And if you look at it today, where cloud providers are going right, and the cloud providers, you don't want to, because there are customers who are using that server, and they're deploying their application, they don't want to disturb that application, just because you decided to do some new innovative feature. The platform capability is that you could upgrade it, and you can change your mind sometime in the future. But whatever existing traffic is there, the traffic will continue to flow and not disrupt your app. >> All right, great. Well, you're talking about clouds one of the things we look at is multi cloud and multi vendor. Pirabhu, we've got the announcement with HPE now, ProLion and some of their other platforms. Tell us how much work will it be for you to support things like Dell servers or I think your team's quite familiar with the Cisco UCS platform. Two pieces on that number one: how easy or hard is it to do that integration? And from an architectural design? Does a customer need to be homogeneous from their environment or is whatever cloud or server platform they're on independent, and we should be able to work across those? >> Yeah, first off, I should start with thanking HPE. They have been a great partner and they have been quick to recognize the synergy and the potential of the synergy. And they have been very helpful towards this integration journey. And the way we see it, a lot of the work has already been done in terms of finding out the integration issues with HPE. And we will build upon this integration work that has been done so that we can quickly integrate with other manufacturers like Dell and Cisco. We definitely want to integrate with other server manufacturers as well, because that is in the interest of our customers, who want to consume Pensando in a heterogenous fashion, not just from one server manufacturer. >> Just want to add one thing to what Pirabhu's saying. Basically, the way we think about it is that, there's x86 and then the all the IO, the infrastructure services, right. So for us, as long as you get power from the server, and you can get packets and IO across the PCIe bus, we are kind of, we want to make it a uniform layer. So the Pensando, if you think about it, is a layer that can work across servers, and could work inside the public cloud and when we have, one of our customers using this in hybrid cloud. So we want to be the base where we can do all the storage network and security services, irrespective of the server and where the server is placed. Whether it's placed in the call log, it's placed in the enterprise data center, or it's placed in the public cloud. >> All right, so I guess Krishna, you said first x86. Down the road, is there opportunity to go beyond Intel processors? >> Yes. I mean, we already support AMD, which is another form of x86. But other architecture doesn't prevent us from any servers. As long as you follow the PCIe standard, we should, it's more of a testing matrix issue. It's not about support of any other OS, we should be able to support it. And initially, we also tested once on PowerPC. So any kind of CPU architecture, we should be able to support. >> Okay, so walk me up the application stack a little bit though. Things like virtualization, containerization. 
There's the question of does it work but does it optimize? Any of us live through those waves of, Oh, okay, well it kind of worked, but then there was a lot of time to make things like the origin networking work well in virtualization and then in containerization. So how about your solution? >> I mean you should look at, a good example is AWS, like what AWS does with Nitro. So on Nitro, you do EBS, you do security, and you do VPC. In all the services is effectively, we think about it, all of those can be encapsulated in one DSC card. And obviously, when it comes to this kind of implementation on one card, right, the first question you would ask what happens to the noisy neighbor? So we have the right QOS mechanisms to make sure all the services go through the same card, at the same time giving guarantees to the customer that (mumbles) especially in the multi-tenant environment, whatever you're doing on one VPC will not affect the other VPC. And the advantage of the platform that what we have is very highly scalable and highly performing. Scale will not be the issue. I mean, if you look at existing platforms, even if you look at the cloud, because when you're doing this product, obviously, we'll do benchmarking with the cloud and enterprises. With respect to scale, performance and latency, we did the measurements and we are order of magnitude compared to (sneezes) given the existing clouds and currently whatever enterprise customers have. >> Excellent, so Pirabhu, I'm curious, from the enterprise standpoint, are there certain applications, I think about like, from an analytic standpoint, Splunk is so heavily involved in data that might be a natural fit or other things where it might not be fully tested out with anything kind of that ISV world that we need to think about. >> So if we're talking in terms of partner ecosystems, our enterprise customers do use many of the other products as well. And we are trying to integrate with other products so that we can get the maximum value. So if you look at it, you could get rich metrics and visualization capabilities from our product, which can be very helpful for the partner products because they don't have to install an agent and they can get the same capability across bare metal virtual stack as well as containers. So we are integrating with various partners including some CMDB configuration management database products, as well as data analytics or network traffic analytics products. Krishna, do you want to add anything? >> Yeah, so I think it's just not the the analytics products. We're also integrating with VMware. Because right now VMware is a computer orchestrated and we want to be the network policy orchestrator. In the future, we want to integrate with Kubernetes and OpenShift. So we want to add integration so that our platform capability can be easily consumable irrespective of what kind of workload you use or what kind of traffic analytics tool you use or what kind of data link that you use in your enterprise data center. >> Excellent, I think that's a good view forward as to where some of the work is going on the future integration. Krishna and Pirabhu, thank you so much for joining us. Great to catch up. >> Thank you Stu. >> Thanks for having us. >> All right. I'm Stu Miniman. Thank you for watching theCUBE. (gentle music)
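The noisy-neighbor discussion above comes down to giving each tenant or VPC its own guaranteed share of the card. A minimal sketch of that idea, assuming a simple token bucket per tenant, appears below; it is a toy model, not Pensando's QoS engine, and the tenant names and rates are made up.

```python
import time
from collections import defaultdict


class TokenBucket:
    """Simple policer: a tenant may burst, but its long-term rate is capped."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0            # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                          # over its share; other tenants unaffected


# One bucket per tenant/VPC, so one tenant bursting cannot starve the others.
buckets = defaultdict(lambda: TokenBucket(rate_bps=1e9, burst_bytes=64_000))


def admit(tenant: str, packet_bytes: int) -> bool:
    return buckets[tenant].allow(packet_bytes)


print(admit("vpc-a", 1500), admit("vpc-b", 1500))
```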

Published Date : Jun 17 2020


Sanjay Poonen, VMware | AWS Summit Online 2020


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hello, welcome back to theCUBE's coverage, CUBE Virtual's coverage, CUBE digital coverage, of AWS Summit, virtual online, Amazon Summit's normally in face-to-face all around the world, it's happening now online, follow the sun. Of course, we want to bring theCUBE coverage like we do at the events digitally, and we've got a great guest that usually comes on face-to-face, he's coming on virtual, Sanjay Poonen, the chief operating officer of VMware. Sanjay great to see you, thanks for coming in virtually, you look great. >> Hey, John thank you very much. Always a pleasure to talk to you. This is the new reality. We both happen to live very close to each other, me in Los Altos, you in Palo Alto, but here we are in this new mode of communication. But the good news is I think you guys at theCUBE were pioneering a lot of digital innovation, the AI platform, so hopefully it's not much of an adjustment for you guys to move digital. >> It's not really a pivot, just move the boat, put the sails up and sail into the next generation, which brings up really the conversation that we're seeing, which is this digital challenge, the virtual world, it's virtualization, Sanjay, it sounds like VMware. Virtualization spawned so much opportunity, it created Amazon, some say, I'd say. Virtualizing our world, life is now integrated, we're immersed into each other, physical and digital, you got edge computing, you got cloud native, this is now a clear path to customers that recognize with the pandemic challenges of at-scale, that they have to operate their business, reset, reinvent, and grow coming out of this pandemic. This has been a big story that we've been talking about and a lot of smart managers looking at projects saying, I'm doubling down on that, and I'm going to move the resources from this, the people and budget, to this new reality. This is a tailwind for the folks who were prepared, the ones that have the experience, the ones that did the work. theCUBE, thanks for the props, but VMware as well. Your thoughts and reaction to this new reality, because it has to be cloud native, otherwise it doesn't work, your thoughts. >> Yeah, I think, John, you're right on. We were very fortunate as a company to invent the term virtualization for an x86 architecture and the category 20 years ago when Diane founded this great company. And I would say you're right, the public cloud is the instantiation of virtualization at its sort of scale format and we're excited about this Amazon partnership, we'll talk more about that. This new world of doing everything virtual has taken the same concepts to whole new levels. We are partnering very closely with companies like Zoom, because a good part of this is being able to deliver video experiences in there, we'll talk about that if needed. Cloud native security, we announced an acquisition today in container security that's very important because we're making big moves in security, security's become very important. I would just say, John, the first thing that was very important to us as we began to shelter in place was the health of our employees. Ironically, if I go back to, in January I was in Davos, in fact some of your other folks who were on the show earlier, Matt Garman, Andy, we were all there in January. The crisis already started in China, but it wasn't on the world scene as much of a topic of discussion. 
Little did we know, three, four weeks later, fast forward to February things were moving so quickly. I remember a Friday late in February where we were just about to go the next week to Las Vegas for our in-person sales kickoffs. Thousands of people, we were going to do, I think, five or 6,000 people in Las Vegas and then another 3,000 in Barcelona, and then finally in Singapore. And it had not yet been categorized a pandemic. It was still under this early form of some worriable virus. We decided for the health and safety of our employees to turn the entire event that was going to happen on Monday to something virtual, and I was so proud of the VMware team to just basically pivot just over the weekend. To change our entire event, we'd been thinking about video snippets. We have to become in this sort of virtual, digital age a little bit like TV producers like yourself, turn something that's going to be one day sitting in front of an audience to something that's a lot shorter, quicker snippets, so we began that, and the next thing we began doing over the next several weeks while the shelter in place order started, was systematically, first off, tell our employees, listen, focus on your health, but if you're healthy, turn your attention to serving your customers. And we began to see, which we'll talk about hopefully in the context of the discussion, parts of our portfolio experience a tremendous amount of interest for a COVID-centered world. Our digital workplace solutions, endpoint security, SD-WAN, and that trifecta began to be something that we began to see story after story of customers, hospitals, schools, governments, retailers, pharmacies telling us, thank you, VMware, for helping us when we needed those solutions to better enable our people on the front lines. And all VMware's role, John, was to be a digital first responder to the first responder, and that gave tremendous amount of motivation to all of our employees into it. >> Yeah, and I think that's a great point. One of the things we've been talking about, and you guys have been aligned with this, you mentioned some of those points, is that as we work at home, it points out that digital and technology is now part of lifestyle. So we used to talk about consumerization of IT, or immersion with augmented reality and virtual reality, and then talk about the edge of the network as an endpoint, we are at the edge of the network, we're at home, so this highlights some of the things that are in demand, workspaces, VPN provisioning, these new tools, that some cases we've been hearing people that no one ever thought of having a forecast of 100% VPN penetration. Okay, you did the AirWatch deal way back when you first started, these are now fruits of those labors. So I got to ask you, as managers of your customer base are out there thinking, okay, I got to double down on the right growth strategy for this post-pandemic world, the smart managers are going to look at the technologies enabled for business outcome, so I have to ask you, innovation strategies are one thing, saying it, putting it place, but now more than ever, putting them in action is the mandate that we're hearing from customers. Okay I need an innovation strategy, and I got to put it into action fast. What do you say to those customers? What is VMware doing with AWS, with cloud, to make those innovation strategies not only plausible but actionable? >> That's a great question, John. 
We focused our energy, before even COVID started, as we prepared for this year, going into sales kickoffs and our fiscal year, around five priorities. Number one was enabling the world to be multicloud, private cloud and public cloud, and clearly our partnership here with Amazon is the best example of that and they are our preferred cloud partner. Secondly, building modern apps with microservices and cloud native, what we call app modernization. Thirdly, which is a key part to the multicloud, is building out the entire network stack, data center networking, the firewalls, the load bouncing in SD-WAN, so I'd call that cloud network. Number four, the modernization of workplace with an additional workspace solution, Workspace ONE. And five, intrinsic security from all aspects of security, network, endpoint, and cloud. So those five priorities were what we began to think through, organize our portfolio, we call them solution pillars, and for any of your viewers who're interested, there's a five-minute version of the VMware story around those five pillars that you can watch on YouTube that I did, you just search for Sanjay Poonen and five-minute story. But then COVID hit us, and we said, okay we got to take these strategies now and make them more actionable. Exactly your question, right? So a subset of that portfolio of five began to become more actionable, because it's pointless going and talking about stuff and it's like, hey, listen, guys, I'm a house on fire, I don't care about the curtains and all the wonderful art. You got to help me through this crisis. So a subset of that portfolio became kind of what was those, think about now your laptop at home, or your endpoint at home. People wanted, on top of their Zoom call, or surrounding their Zoom call, a virtual desktop managed easily, so we began to see Workspace ONE getting a lot of interest from our customers, especially the VDI part of that portfolio. Secondly, that laptop at home needed to be secured. Traditional, old, legacy AV solutions that've worked, enter Carbon Black, so Workspace ONE plus Carbon Black, one and two. Third, that laptop at home needs network acceleration, because we're dialoguing and, John, we don't want any latency. Enter SD-WAN. So the trifecta of Workspace ONE, Carbon Black and VeloCloud, that began to see even more interest and we began to hone in our portfolio around those three. So that's an example of where you have a general strategy, but then you apply it to take action in the midst of a crisis, and then I say, listen, that trifecta, let's just go and present what we can do, we call that the business continuity or business resilience part of our portfolio. We began to start talking to customers, and saying, here's our business continuity solution, here's what we could do to help you, and we targeted hospitals, schools, governments, pharmacies, retailers, the ones who're on the front line of this and said again, that line I said earlier, we want to be a digital first responder to you, you are the real first responder. Right before this call I got off a CIO call with the CIO of a major hospital in the northeast area. What gives me great joy, John, is the fact that we are serving them. Their beds are busting at the seam, in serving patients-- >> And ransomware's a huge problem you guys-- >> We're serving them. 
>> And great stuff there, Sanjay, I was just on a call this morning with a bunch of folks in the security industry, thought leaders, was in DC, some generals were there, some real thought leaders, trying to figure out security policy around biosecurity, COVID-19, and this invisible disruption, and they were equating it to like the World Wars. Big inflection point, and one of the generals said, in those times of crisis you need alliances. So I got to ask you, COVID-19 is impactful, it's going to have serious impact on the critical nature of it, like you said, the house is on fire, don't worry about the curtains. Alliances matter more than ever when you need to come together. You guys have an ecosystem, Amazon's got an ecosystem, this is going to be a really important test to the alliances out there. How do you view that as you look forward? You need the alliances to be successful, to compete and win in the new world as this invisible enemy, if you will, or disruptor happens, what's your thoughts? >> Yeah, I'll answer in a second, just for your viewers, I sneezed, okay? I've been on your show dozens of time, John, but in your live show, if I sneezed, you'd hear the loud noise. The good news in digital is I can mute myself when a sneeze is about to happen, and we're able to continue the conversation, so these are some side benefits of the digital part of it. But coming to your question on alliance, super important. Ecosystems are how the world run around, united we stand, divided we fall. We have made ecosystems, I've always used this phrase internally at VMware, sort of like Isaac Newton, we see clearly because we stand on the shoulders of giants. So VMware is always able to be bigger of a company if we stand on the shoulders of bigger giants. Who were those companies 20 years ago when Diane started the company? It was the hardware economy of Intel and then HP and Dell, at the time IBM, now Lenovo, Cisco, NetApp, DMC. Today, the new hardware companies Amazon, Azure, Google, whoever have you, we were very, I think, prescient, if you would, to think about that and build a strategic partnership with Amazon three or four years ago. I've mentioned on your show before, Andy's a close friend, he was a classmate over at Harvard Business School, Pat, myself, Ragoo, really got close to Andy and Matt Garman and Mike Clayville and several members of their teams, Teresa Carlson, and began to build a partnership that I think is one of the most incredible success stories of a partnership. And Dell's kind of been a really strong partner with us on private cloud, having now Amazon with public cloud has been seminal, we do regular meetings and build deep integration of, VMware Cloud and AWS is not some announcement two or three years ago. It's deep engineering between, Bask's now in a different role, but in his previous role, that and people like Mark Lohmeyer in our team. And that deep engineering allows us to know and tell customers this simple statement, which both VMware and Amazon reps tell their customers today, if you have a workload running on vSphere, and you want to move that to Amazon, the best place, the preferred place for that is VMware Cloud and Amazon. If you try to refactor that onto a native VC 2, it's a waste of time and money. So to have the entire army of VMware and Amazon telling customers that statement is a huge step, because it tells customers, we have 70 million virtual machines running on-prem. 
If customers are looking to move those workloads to Amazon, the best place for that VMware Cloud and AWS, and we have some credible customer case studies. Freddie Mac was at VMworld last year. IHS Markit was at VMworld last year talking about it. Those are two examples and many more started it, so we would like to have every VMware and Amazon customer that's thinking about VMware to look at this partnership as one of the best in the industry and say very similar to what Andy I think said on stage at the time of this announcement, it doesn't have to be now a trade-off between public and private cloud, you can get the best of both worlds. That's what we're trying to do here-- >> That's a great point, I want to get your thoughts on leadership, as you look at COVID-19, one of our tracks we're going to be promoting heavily on theCUBE.net and our sites, around how to manage through this crisis. Andy Jassy was quoted on the fireside chat, which is coming up here in North America, but I saw it yesterday in New Zealand time as I time shifted over there, it's a two-sided door versus a one-sided door. That was kind of his theme is you got to be able to go both ways. And I want to get your thoughts, because you might know what you're doing in certain contexts, but if you don't know where you're going, you got to adjust your tactics and strategies to match that, and there's and old expression, if you don't know where you're going, every road will take you there, okay? And so a lot of enterprise CXOs or CEOs have to start thinking about where they want to go with their business, this is the growth strategy. Then you got to understand which roads to take. Your thoughts on this? Obviously we've been thinking it's cloud native, but if I'm a decision maker, I want to make sure I have an architecture that's going to carry me forward to the future. I need to make sure that I know where I'm going, so I know what road I'm on. Versus not knowing where I'm going, and every road looks good. So your thoughts on leadership and what people should be thinking around knowing what their destination is, and then the roads to take? >> John, I think it's the most important question in this time. Great leaders are born through crisis, whether it's Winston Churchill, Charles de Gaulle, Roosevelt, any of the leaders since then, in any country, Mahatma Gandhi in India, the country I grew up, Nelson Mandela, MLK, all of these folks were born through crisis, sometimes severe crisis, they had to go to jail, they were born through wars. I would say, listen, similar to the people you talked about, yeah, there's elements of this crisis that similar to a World War, I was talking to my 80 year old father, he's doing well. I asked him, "When was the world like this?" He said, "Second World War." I don't think this crisis is going to last six years. It might be six or 12 months, but I really don't think it'll be six years. Even the health care professionals aren't. So what do we learn through this crisis? It's a test of our leadership, and leaders are made or broken during this time. I would just give a few guides to leaders, this is something tha, Andy's a great leader, Pat, myself, we all are thinking through ways by which we can exercise this. Think of Sully Sullenberger who landed that plane on the Hudson. Did he know when he flew that airbus, US Airways airbus, that few flock of birds were going to get in his engine, and that he was going to have to land this plane in the Hudson? 
No, but he was making decisions quickly, and what did he exude to his co-pilot and to the rest of staff, calmness and confidence and appropriate communication. And I think it's really important as leaders, first off, that we communicate, communicate, communicate, communicate to our employees. First, our obligation is first to our employees, our family first, and then of course to our company employees, all 30,000 at VMware, and I'm sure similarly Andy does it to his, whatever, 60, 70,000 at AWS. And then you want to be able to communicate to them authentically and with clarity. People are going to be reading between the lines of everything you say, so one of the things I've sought to do with my team, all the front office functions report to me, is do half an hour Zoom video conferences, in the time zone that's convenient to them, so Japan, China, India, Europe, in their time zone, so it's 10 o'clock my time because it's convenient to Japan, and it's just 10 minutes of me speaking of what I'm seeing in the world, empathizing with them but listening to them for 20 minutes. That is communication. Authentically and with clarity, and then turn your attention to your employees, because we're going stir crazy sitting at home, I get it. And we've got to abide by the ordinances with whatever country we're in, turn your attention to your customers. I've gotten to be actually more productive during this time in having more customer conference calls, video conference calls on Zoom or whatever platform with them, and I'm looking at this now as an opportunity to engage in a new way. I have to be better prepared, like I said, these are shorter conversations, they're not as long. Good news I don't have to all over the place, that's better for my family, better for the carbon emission of the world, and also probably for my life long term. And then the third thing I would say is pick one area that you can learn and improve. For me, the last few years, two, three years, it's been security. I wanted to get the company into security, as you saw today we've announced mobile, so I helped architect the acquisition of Carbon Black, very similar to kind of the moves I've made six years ago around AirWatch, very key part to all of our focus to getting more into security, and I made it a personal goal that this year, at the start of the year, before COVID, I was going to meet 1,000 CISOs, in the Fortune 1000 Global 2000. Okay, guess what, COVID happens, and quite frankly that goal's gotten a little easier, because it's much easier for me to meet a lot more people on Zoom video conferences. I could probably do five, 10 per day, and if there's 200 working days in a day, I can easily get there, if I average about five per day, and sometimes I'm meeting them in groups of 10, 20. >> So maybe we can get you on theCUBE more often too, 'cause you have access to a video camera. >> That is my growth mindset for this year. So pick a growth mindset area. Satya Nadella puts this pretty well, "Move from being a know-it-all to a learn-it-all." And that's the mindset, great company. Andy has that same philosophy for Amazon, I think the great leaders right now who are running these cloud companies have that growth mindset. Pick an area that you can grow in this time, and you will find ways to do it. You'll be able to learn online and then be able to teach in some fashion. 
So I think communicate effectively, authentically, turn your attention to serving your customers, and then pick some growth area that you can learn yourself, and then we will come out of this crisis collectively, individuals and as partners, like VMware and Amazon, and then collectively as a society, I believe we'll come out stronger. >> Awesome great stuff, great insight there, Sanjay. Really appreciate you sharing that leadership. Back to the more of technical questions around leadership is cloud native. It's clear that there's going to be a line in the sand, if you will, there's going to be a right side of history, people are going to have to be on the right side of history, and I believe it's cloud native. You're starting to see this emersion. You guys have some news, you just announced today, you acquired a Kubernetes security startup, around Kubernetes, obviously Kubernetes needs security, it's one of those key new enablers, disruptive enablers out there. Cloud native is a path that is a destination opportunity for people to think about, why that acquisition? Why that company? Why is VMware making this move? >> Yeah, we felt as we talked about our plans in security, backing up to things I talked about in my last few appearances on your show at VMworld, when we announced Carbon Black, was we felt the security industry was broken because there was too many point benders, and we figured there'd be three to five control points, network, endpoint, cloud, where we could play a much more pronounced role at moving a lot of these point benders, I describe this as not having to force our customers to go to a doctor and say I've got to eat 5,000 tablets to get healthy, you make it part of your diet, you make it part of the infrastructure. So how do we do that? With network security, we're off to the races, we're doing a lot more data center networking, firewall, load bouncing, SD-WAN. Really, reality is we can eat into a lot of the point benders there that I've just been, and quite frankly what's happened to us very gratifying in the network security area, you've seen the last few months, some firewall vendors are buying SD-WAN players, kind of following our strategy. That's a tremendous validation of the fact that the network security space is being disrupted. Okay, move to endpoint security, part of the reason we acquired Carbon Black was to unify the client side, Workspace ONE and Carbon Black should come together, and we're well under way in doing that, make Carbon Black agentless on the server side with vSphere, we're well on the way to that, you'll see that very soon. By the way both those things are something that the traditional endpoint players can't do. And then bring out new forms of workload. Servers that are virtualized by VMware is just one form of work. What are other workloads? AWS, the public clouds, and containers. Container's just another workload. And we've been looking at container security for a long time. What we didn't want to do was buy another static analysis player, another platform and replatform it. We felt that we could get great technology, we have incredible grandeur on container cell. It's sort of Red Hat and us, they're the only two companies who are doing Kubernetes scales. It's not any of these endpoint players who understand containers. So Kubernetes, VMware's got an incredible brand and relevance and knowledge there. The networking part of it, service mesh, which is kind of a key component also to this. 
We've been working with Google and others like Istio in service mesh, we got a lot of IP there that the traditional endpoint players, Symantec, McAfee, Trend, CrowdStrike, don't know either Kubernetes or service mesh well. We add now container security into this, we really distinguish ourselves further from the traditional endpoint players with bringing together, not just the endpoint platform that can do containers, but also Kubernetes service mesh. So why is that important? As people think about their future in containers, they'll want to do this at the runtime level, not at the static level. They'll want to do it at build time And they'll want to have it integrated with some of their networking capabilities like service mesh. Who better to think about that IP and that evolution than VMware, and now we bring, I think it's 12 to 14 people we're bringing in from this acquisition. Several of them in Israel, some of them here in Palo Alto, and they will build that platform into the tech that VMware has onto the Carbon Black cloud and we will deliver that this year. It's not going to be years from now. >> Did you guys talk about the-- >> Our capability, and then we can bring the best of Carbon Black, with Tanzu, service mesh, and even future innovation, like, for example, there's a big movement going around, this thing call open policy agent OPA, which is an open source effort around policy management. You should expect us to embrace that, there could be aspects of OPA that also play into the future of this container security movement, so I think this is a really great move for Patrick and his team, I'm very excited. Patrick is the CEO of Carbon Black and the leader of that security business unit, and he came to me and said, "Listen, one of the areas "we need to move in is container security "because it's the number one request I'm hearing "from our CESOs and customers." I said, "Go ahead Patrick. "Find out who are the best player you could acquire, "but you have to triangulate that strategy "with the Tanzu team and the NSX team, "and when you have a unified strategy what we should go, "we'll go an make the right acquisition." And I'm proud of what he was able to announce today. >> And I noticed you guys on the release didn't talk about the acquisition amount. Was it not material, was it a small amount? >> No, we don't disclose small, it's a tuck-in acquisition. You should think of this as really bringing us some tech and some talent, and being able to build that into the core of the platform of Carbon Black. Carbon Black was the real big move we made. Usually what we do, you saw this with AirWatch, right, anchor on a fairly big move. We paid I think 2.1 billion for Carbon Black, and then build and build and build on top of that, partner very heavily, we didn't talk about that. If there's time we could talk about it. We announced today a security alliance with top SIEM players, in what's called a sock alliance. Who's announced in there? Splunk, IBM QRadar, Google Chronicle, Sumo Logic, and Exabeam, five of the biggest SIEM players are embracing VMware in endpoint security, saying, Carbon Black is who we want to work with. Nobody else has that type of partnership, so build, partner, and then buy. But buy is always very carefully thought through, we're not one of these companies like CA of the past that just bought every company and then it becomes a graveyard of dead acquisition. Our view is we're very disciplined about how we think about acquisition. 
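To ground the build-time and admission-time policy idea mentioned above, here is a minimal sketch of the kind of check an OPA-style policy engine applies to container workloads: allowed image registries, no privileged containers. It is plain Python purely for illustration; it is not OPA or Rego, not a VMware or Carbon Black API, and the registry prefixes are hypothetical.

```python
# Allowed image prefixes are hypothetical examples, not a real policy.
ALLOWED_PREFIXES = ("registry.example.com/", "gcr.io/my-team/")


def admit_pod(pod: dict) -> tuple[bool, list[str]]:
    """Return (admit?, violations) for a pod spec, mimicking an admission check."""
    violations = []
    for c in pod.get("containers", []):
        if not c["image"].startswith(ALLOWED_PREFIXES):
            violations.append(f"{c['name']}: image {c['image']} not from an allowed registry")
        if c.get("securityContext", {}).get("privileged"):
            violations.append(f"{c['name']}: privileged containers are denied")
    return (not violations, violations)


ok, why = admit_pod({
    "containers": [
        {"name": "web", "image": "registry.example.com/shop/web:1.4"},
        {"name": "debug", "image": "docker.io/library/busybox",
         "securityContext": {"privileged": True}},
    ]
})
print(ok, why)
```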
Acquisitions for us are often the last resort, because we'd prefer to build and partner. But sometimes for time-to-market reasons, we acquire, and when we acquire, it's thoughtful, it's well-organized within VMware, and we take care of our people, 'cause we want, I mean listen, why do acquisitions fail? Because the good people leave. So we're excited about this team, the team in Israel, and the team in Palo Alto, they come from Octarine. We're going to integrate them rapidly into the platform, and this is a good evidence of VMware investing more in security, and our Q3 earnings pulled, John, I said, sorry, we said that the security business was a billion dollar business at VMware already, primarily from network, but some from endpoint. This is evidence of us putting more fuel behind that fire. It's only been six, seven months and Patrick's made his first acquisition inside Carbon Black, so you're going to see us investing more in security, it's an important priority for the company, and I expect us to be a very prominent player in these three pillars, network security, endpoint security, endpoint is both client and the workload, and cloud. Network, endpoint, cloud, they are the three areas where we think there's lots of room for innovation in security. >> Well, we'll be watching, we'll be reporting and analyzing the moves. Great playbook, by the way. Love that organic partnering and then key acquisitions which you build around, it's a great playbook, I think it's very relevant for this time. The most important question I have to ask you, Sanjay, and this is a personal question, because you're the leader of VMware, I noticed that, we all know you're into music, you've been putting music online, kind of a virtual band. You've also hired a CUBE alumni, Victoria Verango from McAfee who also puts up music, you've got some musicians, but you kind of know how to do the digital moves there, so the question is, will the music at VMworld this year be virtual? >> Oh, man. Victoria is actually an even better musician than me. I'm excited about his marketing gifts, but I'm also excited to watch him. But yeah, you've heard him sing, he's got a voice that's somewhat similar to Sting, so we, just for fun, in our Diwali, which is an Indian celebration last year, Tom Corn, myself, and a wonderful lady named Divya, who's got a beautiful voice, had sung a song, which was off the soundtrack of the Bollywood movie, "Secret Superstar," and we just for fun decided to record that in our three separate homes, and put that out on YouTube. You can listen, it's just a two or three-minute run, and it kind of went a little bit viral. And I was thinking to myself, hey, if this is one way by which we can let the VMware community know that, hey, you know what, art conquers COVID-19, you can do music even socially distant, and bring out the spirit of VMware, which is community. So we might build on that idea, Victoria and I were talking about that last night and saying, hey, maybe we do a virtual music kind of concert of maybe 10 or 15 or 20 voices in the various different countries. Record piece of a song and music and put it out there. I think these are just ways by which we're having fun in a virtual setting where people get to see a different side of VMware where, and the intent here, we're all amateurs, John, we're not like great. There are going to be mistakes in this music. If you listen to that audio, it sounds a little tinny, 'cause we're recording it off our iPhone and our iPad microphone. 
But we'll do the best we can, the point is just to show the human spirit and to show that we care, and at the end of the day, see, the COVID-19 virus has no prejudice on color of skin, or nationality, or ethnicity. It's affecting the whole world. We all went into the tunnel at different times, we will come out of this tunnel together and we will be a stronger human fabric when we're done with this, We shall absolutely overcome. >> Sanjay, give us a quick update to end the segment on your thoughts around VMworld. It's one of the biggest events, we look forward to it. It's the only even left standing that theCUBE's been to every year of theCUBE's existence, we're looking forward to being part of theCUBE virtual. It's been announced it's virtual. What are some of the thinking going on at the highest levels within the VMware community around how you're going to handle VMworld this year? >> Listen, when we began to think about it, we had to obviously give our customers and folks enough notice, so we didn't want to just spring that sometime this summer. So we decided to think through it carefully. I asked Robin, our CMO, to talk to many of the other CMOs in the industry. Good news is all of these are friends of ours, Amazon, Microsoft, Google, Salesforce, Adobe, and even some smaller companies, IBM did theirs. And if they were in the first half of the year, they had to go virtual 'cause we're sheltered in place, and IBM did theirs, Okta did theirs, and we began to watch how they were doing this. We're kind of in the second half, because we were August, September, and we just sensed a lot of hesitancy from our customers that wanted to get on a plane to come here, and even if we got just 500, 1,000, a few thousand, it wasn't going to be the same and there would always be that sort of, even if we were getting back to that, some worry, so we figured we'd do something that might be semi-digital, and we may have some people that roam, but the bulk of it is going to be digital, and we changed the dates to be a little later. I think it's September 20th to 29th. Right now it's all public now, we announced that, and we're going to make it a great program. In some senses like we're becoming TV producer. I told our team we got to be like Disney or ESPN or whoever your favorite show is, YouTube, and produce a really good several-hour program that has got a different way in which digital content is provided, smaller snippets, very interesting speakers, great brand names, make the content clear, crisp and compelling. And if we do that, this will be, I don't know, maybe it's the new norm for some period of time, or it might be forever, I don't know. >> John: We're all learning. >> In the past we had huge conferences that were busting 50, 70, 100,000 and then after the dot-com era, those all shrunk, they're like smaller conferences, and now with advent of companies like Amazon and Salesforce, we have huge events that, like VMworld, are big events. We may move to a environment that's a lot more digital, I don't know what the future of in-presence physical conferences are, but we, like others, we're working with AWS in terms of their future with Reinvent, what Microsoft's doing with Ignite, what Google's doing with Next, what Salesforce's going to do with Dreamforce, all those four companies are good partners of ours. 
We'll study theirs, we'll work together as a community, the CMOs of all those companies, and we'll come together with something that's a very good digital experience for our customers, that's really what counts. Today I did a webinar with a partner. Typically when we did a briefing in our briefing center, 20 people came. There're 100 people attending this, I got a lot more participation in this QBR that I did with this SI partner, one of the top SIs in the world, in an online session with them, than would I have gotten if they'd all come to Palo Alto. That's goodness. Should we take the best of that world and some physical presence? Maybe in the future, we'll see how it goes. >> Content quality. You know, you know content. Content quality drives everything online, good engagement creates community, that's a nice flywheel. I think you guys will figure it out, you've got a lot of great minds there, and of course, theCUBE virtual will be helping out as we can, and we're rethinking things too-- >> We count on that, John-- >> We're going to be open minded to new ideas, and, hey, whatever's the best content we can deliver, whether it's CUBE, or with you guys, or whoever, we're looking forward to it. Sanjay, thanks for spending the time on this CUBE Keynote coverage of AWS Summit. Since it's digital we can do longer programs, we can do more diverse content. We got great customer practitioners coming up, talking about their journey, their innovation strategies. Sanjay Poonen, COO of VMware, thank you for taking your precious time out of your day today. >> Thank you, John, always a pleasure. >> Thank you. Okay, more CUBE, virtual CUBE digital coverage of AWS Summit 2020, theCUBE.net is we're streaming, and of course, tons of videos on innovation, DevOps, and more, scaling cloud, scaling on-premise hybrid cloud, and more. We got great interviews coming up, stay with us our all-day coverage. I'm John Furrier, thanks for watching. (upbeat music)

Published Date : May 13 2020
